Court rules against lawyers in unprecedented ChatGPT malpractice case


Update, June 23, 2023:

The verdict is in: the attorneys involved in the ChatGPT failure, Peter LoDuca and Steven A. Schwartz, and their firm, Levidow, Levidow & Oberman P.C., abandoned their responsibilities by first filing fabricated cases and then standing by them when questioned, according to New York Judge Kevin Castel.

Technological advances are commonplace and there is nothing inherently improper about using a reliable artificial intelligence tool for assistance. But existing rules impose a gatekeeping role on attorneys to ensure the accuracy of their filings.

From the court ruling

The punishment is rather mild: the two lawyers must pay a $5,000 fine to the court, inform their client about the misstep (though he should already know about it, given the media frenzy), and notify the real judges whom ChatGPT falsely named as the authors of the fabricated opinions. The court itself refrains from demanding an apology "because a compelled apology is not a sincere apology."

Original article from May 28, 2023:


Lawyer who used ChatGPT to draft a filing now faces sanctions himself

Since the advent of large language models, the media and experts have been sounding the alarm: the content these systems generate may be wrong. Not everyone has gotten the message.

A New York attorney used ChatGPT to cite cases that didn't exist, not knowing the system was making things up.

The case in question involves a man suing an airline for alleged personal injury because he was struck in the knee by a service cart during a flight.

The airline filed a motion to dismiss the case. The plaintiff’s legal team wanted to use ChatGPT to argue their client’s position.

The filing submitted by the legal team cites examples from non-existent legal cases fabricated by ChatGPT, complete with lengthy and seemingly accurate citations such as "Varghese v. China Southern Airlines Co Ltd, 925 F.3d 1339 (11th Cir. 2019)."


Order to Show Cause

In a subsequent affidavit, Steven A. Schwartz, the attorney responsible for the research, admits that this was his first time using ChatGPT for legal research and that he was unaware the system could generate false content. When asked, ChatGPT had "assured" him of the accuracy of its sources.

Lawyer regrets using ChatGPT

Schwartz has been in the business for more than 30 years. In a statement, he apologized for using the chatbot. He said he was unaware that the information generated could be false. In the future, he said, he would not use AI as a legal research tool without verifying the information generated.

But OpenAI’s chatbot made it easy for Schwartz to believe the fabrications: screenshots show conversations between Schwartz and ChatGPT in which the lawyer asks for sources for the cited cases and the system promises to “double-check” them in legal databases such as LexisNexis and Westlaw. Nevertheless, it was Schwartz’s responsibility not to rely on this information without verification.

The lawyer and his colleagues will have to justify in court on June 8 why they should not be sanctioned. The search for precedents with ChatGPT may itself become a precedent for why ChatGPT should not be used for legal research, or at least not without verification, and a warning to the industry.
