News article – 14/06/23

Attorneys may be sanctioned for using ChatGPT indiscriminately: the artificial intelligence fabricated court precedents and lied when asked about the authenticity of its information.

A man filed a lawsuit against the airline Avianca, claiming he was injured when a metal serving cart struck his knee during a flight to New York.

Avianca’s lawyers moved for dismissal. The passenger’s lawyer, Mr. Steven Schwartz, an attorney with over three decades of legal experience, objected to the dismissal, submitting a 10-page brief that cited several relevant court decisions.

However, there was just one problem: no one – not the airline’s lawyers, not the judge, and not even the Court of Appeals cited in one of the decisions – could find the precedents or the quotations cited and summarized in the legal brief. Avianca challenged the authenticity of the legal opinions, and Mr. Schwartz submitted an affidavit admitting that he had used ChatGPT in drafting the brief, describing the technology as “a source that has revealed itself to be unreliable.”

The affidavit contained excerpts of the dialogue between the lawyer and ChatGPT. Mr. Schwartz asks whether a particular court decision is real; the artificial intelligence replies that it is. He then asks for the source of the information, and ChatGPT apologizes for the confusion. The lawyer continues: “Are the other precedents that you provided also fake?”, to which ChatGPT responds: “No, the other cases I provided are real and can be found in reputable legal databases.” However, they could not be found.

The judge scheduled a hearing for June 8 to discuss potential sanctions against the lawyer.