ChatGPT, OpenAI’s popular chatbot, generated fake legal decisions that were cited in a personal injury lawsuit against Avianca Airlines. The suit was filed on behalf of Roberto Mata by his attorney, Peter LoDuca; Mata claimed he suffered injuries after being struck by a metal serving cart on a 2019 flight to New York. Avianca moved to dismiss the case, and Mata’s lawyers opposed the motion with a brief citing a series of legal decisions. Everything fell apart when Avianca’s attorneys reported that they could not locate numerous cases LoDuca had cited in his response.

The Problem Deepens

Federal Judge P. Kevin Castel ordered LoDuca to provide copies of nine of the judicial decisions that had apparently been relied on. In response, LoDuca filed what purported to be the full text of eight cases in federal court, but the problem only deepened. Castel found the texts were fictitious, describing them as “bogus judicial decisions with bogus quotes and bogus internal citations.” It later emerged that ChatGPT had “hallucinated,” generating cases and arguments that were entirely fictional, and that LoDuca and another attorney, Steven Schwartz, had used the chatbot to produce the motions and the subsequent legal text.

LoDuca and Schwartz Face Consequences

Schwartz admitted to the court that he was the one who had used ChatGPT and that LoDuca had no knowledge of how the “research” was conducted. The episode has put both attorneys in hot water: they may now face judicial sanctions, up to and including disbarment. According to a court filing, the opposition brief was “replete with citations to non-existent cases.” Rather than backing off, LoDuca and his firm doubled down on ChatGPT, using it not just for the initial filing but also to generate the fake legal decisions when the court asked them to produce copies.

The Implications of AI on the Legal Industry

The use of AI in the legal industry is not new: many law firms already use AI-powered tools to improve the efficiency and accuracy of legal research. Using a general-purpose chatbot like ChatGPT to generate legal text, however, is another matter. As this case demonstrates, such tools can fabricate authorities outright, which calls the accuracy and reliability of AI-generated legal material into question. The incident raises broader questions about the role of AI in legal practice and about lawyers’ responsibility to verify the accuracy and authenticity of what they file.

The Avianca episode highlights the need for caution when using AI-powered tools for legal research and the importance of verifying every citation before it reaches a court. AI has real potential to improve efficiency and accuracy in the legal industry, but this case shows that its output cannot be trusted without independent confirmation.
