The Georgia Court of Appeals’ decision this month to toss an earlier ruling that relied on fake cases generated by artificial intelligence (AI) could lead to legal mayhem in a field already overrun with AI-generated bogus citations.

The proliferation of AI, and a corresponding influx of cited court cases that do not exist, threatens to muck up the legal system as AI spreads its tentacles into every corner of American society.


False citations in a divorce case, Shahid v. Esaam, appeared to have been “drafted using generative AI,” the appeals court found, in what is believed to be the first reversal of a court decision because of AI-fabricated citations.

“We are troubled by the citation of bogus cases in the trial court’s order,” the appeals court said in its decision, which directs the lower court to revisit the wife’s petition. “As the reviewing court, we make no findings of fact as to how this impropriety occurred, observing only that the order purports to have been prepared by Husband’s attorney, Diana Lynch.”

The appellate judges said Lynch repeated bogus citations in the trial court order to the appeals court and expanded upon them, even after the wife challenged the fictitious cases in the trial court’s order. The appeals court said Lynch’s appeals briefs contained “11 bogus case citations out of 15 total, one of which was in support of a frivolous request for attorney fees.”

The appeals court fined Lynch $2,500 as a penalty for filing a frivolous motion for attorney fees.

Welcome to the treacherous new climate of practicing law in the age of AI. Not only does the technology threaten to displace some working lawyers, but judges are increasingly chiding lawyers for citing AI falsehoods in legal filings.

When two New York attorneys in an aviation injury claim, Mata v. Avianca, were rebuked in August 2023 for using OpenAI’s ChatGPT to cite an AI-imagined alternate history of the Montreal Convention in a court submission, legal experts assumed it would mark the end of fake cases working their way into filings. (Adopted in 1999, the Montreal Convention amended key provisions of the Warsaw Convention’s regime governing compensation for the victims of air disasters.)

A federal judge then hit the lawyers and their law firm with $5,000 in fines, an unprecedented sanction in a case where ChatGPT was blamed for the submission of fictitious legal research. “Technological advances are commonplace and there is nothing inherently improper about using a reliable artificial intelligence tool for assistance,” Judge P. Kevin Castel wrote in his decision. “But existing rules impose a gatekeeping role on attorneys to ensure the accuracy of their filings.”

But as of mid-June, there have been 156 cases in which lawyers cited fake cases generated by AI in court documents, according to a database created to track the problem.

Many lawyers now routinely use some form of AI such as document review programs like Relativity, which “amplify the manual review process,” or legal research products like Westlaw Edge that employ “AI-enhanced capabilities” to automate searches.

Contrast that with a March 2023 survey of more than 1,000 U.S. lawyers, which found that 80% had yet to incorporate generative AI into their work and that 87% had ethical concerns about it.

“Law school 101 teaches us to Shepardize, research the validity of your cases,” Elaine Fraser, a family law attorney in Burlingame, Calif., said in a text message. “This is the foundation of good lawyering and failure to ensure the research is valid is unethical. Courts rely on attorneys — as officers of the court — to argue verified and vetted information. If you’re going to use AI in the law, do not trust it. Use it as a tool, but it does not take the place of good legal research and writing. Again, law school 101.”
