The legal world and the solicitors who work within it need to adapt constantly to cultural, technological and social change, but exactly how and when these changes occur can be somewhat unpredictable.
After all, some laws in England date back to Magna Carta, but society has moved on considerably since then. The common law system is designed with that in mind: precedents from previous cases are taken into account when establishing legal decisions in the here and now.
Of course, these changes need to be made with sensitivity and care, rather than simply to take advantage of a popular technological trend. One legal team found this out to their horror when they tried to cut corners using technology and were fined thousands of pounds as a result.
Artificial Intelligence, Artificial Cases
The case of Mata v Avianca, filed in 2022 in the state of New York, began as a very typical personal injury lawsuit: Roberto Mata sued Avianca Airlines after a metal serving cart struck his knee during a flight in 2019.
It should have been an exceptionally simple matter of proving wrongdoing, citing precedent and securing the damages Mr Mata was legally entitled to.
The case was moved from a state court to a federal District Court, where the applicable rules meant that the statute of limitations had already expired. Avianca promptly filed a motion to dismiss, which Mr Mata’s legal team opposed.
In their opposition they presented a brief citing multiple supposedly similar previous cases, most notably Varghese v China Southern Airlines. None of these cases actually existed.
The reason was that Steven A. Schwartz, the lawyer who initially represented Mr Mata at state level, had written the briefs with the help of the OpenAI tool ChatGPT, producing them on behalf of Peter LoDuca, the lawyer who represented Mr Mata in the District Court.
ChatGPT, a large language model often marketed as “generative artificial intelligence”, has been adopted in a number of fields because it can generate plausible-looking responses to almost any question.
The problem with tools such as ChatGPT is that they prioritise producing an answer over producing a correct one, and will often generate plausible-sounding but entirely fabricated responses.
Mr Schwartz claimed that he had never used ChatGPT before and had never read any of the many articles describing AI “hallucinations”, a phenomenon so common that some studies have found factual errors in nearly half of the texts these tools generate.
Regardless, Judge Kevin Castel was not sympathetic to the grievous errors and “gibberish” in the brief, fining the lawyers $5,000 (£3,809) and requiring them to send letters to the six judges “falsely identified” as the authors of the fake opinions cited in it.
The two lawyers also lost the case itself: the judge sided with Avianca, ruling that the statute of limitations had passed and that the lawsuit could not proceed.
There may come a time when some form of AI tool is used in the courtroom, but until guarantees can be made about the authenticity of everything in its training data, that day is unlikely to arrive any time soon.