Lawyers’ Wild AI Excuses Are Getting Them in Deeper Trouble

According to Ars Technica, courts are dealing with what one judge called an “epidemic” of fake AI-generated case citations, with lawyers facing sanctions in at least 23 documented cases since 2023. The review found that judges typically impose around $1,000 fines, though one California lawyer got hit with a $10,000 penalty called “conservative” by the judge. Common excuses include lawyers claiming they didn’t know AI was used, blaming underlings or clients, or feigning ignorance about chatbots’ tendency to hallucinate. Some attorneys have gotten increasingly desperate, with one New York lawyer bizarrely blaming malware for fake citations, while an Alabama attorney claimed toggling between windows on his laptop was too difficult. Another lawyer in Illinois has been sanctioned at least three times, earning the label of “serial hallucinator” from one judge.

The hacker defense falls flat

Here’s the thing about blaming hackers for your AI mistakes – judges tend to want evidence. In one particularly wild New York case, lawyer Innocent Chinweze first admitted using Microsoft Copilot, then pivoted to claiming malware allowed “unauthorized remote access” that supposedly inserted fake citations. The judge called this an “incredible and unsupported statement,” especially since there was no evidence of the supposed earlier draft with correct citations. Even more galling? Chinweze filed an 88-page document on his sanctions hearing day that still contained AI-written content, complete with a disclaimer about potential inaccuracies. He eventually retreated to his original excuse and got hit with a $1,000 fine plus referral to a grievance committee. Basically, if you’re going to claim you got hacked, you’d better have more than just a convenient story.

When toggling windows is just too hard

Then there’s the Alabama attorney who claimed the real villain was… his laptop touchpad. James A. Johnson argued that toggling between programs felt “tedious,” so he used an AI tool called Ghostwriter Legal that automatically appeared in Microsoft Word’s sidebar. He was apparently too busy caring for a family member after surgery to realize this tool used ChatGPT as its default AI program. The judge wasn’t buying it, especially since Johnson was being paid with public funds while letting AI do his work. His client was so horrified they fired him on the spot, and Johnson got hit with a $5,000 fine. Look, we’ve all struggled with annoying software interfaces, but when you’re handling someone’s legal case, maybe suffer through the extra clicks?

The excuse playbook keeps expanding

Ars found lawyers trying every excuse imaginable. Some claim they didn’t realize AI was being used at all, like the California lawyer who got stung by Google’s AI Overviews. Others blame login issues with legal databases like Westlaw, though courts have shown limited sympathy for that defense. One Oklahoma lawyer admitted he only asked ChatGPT to “make his writing more persuasive” and didn’t expect it to invent citations. Another California lawyer ran his AI-“enhanced” briefs through multiple AI platforms to “check for errors” but somehow never read the final product himself. And let’s not forget the lawyers who try blaming their clients – one Texas attorney had to put his client on the stand after claiming she helped draft the problematic filing. The judge’s response? “Is your client an attorney?” Spoiler: she wasn’t.

Why these excuses keep failing

So why are judges getting increasingly fed up? For starters, many courts have now issued explicit guidance about AI hallucinations. Ignorance is becoming less defensible by the day. As one judge noted in the USA v. McGee ruling, “basic reprimands and small fines are not sufficient to deter this type of misconduct because if it were, we would not be here.” The database compiled by French lawyer Damien Charlotin shows this is becoming a widespread problem that courts are taking seriously. And honestly, when you look at cases like the Mattox v. Product Innovations ruling or the Noland v. Land sanctions, the pattern is clear – judges want lawyers to admit mistakes early and take responsibility. The lawyers who come clean quickly and show they’re learning tend to get lighter penalties. Those who double down with increasingly creative excuses? They’re learning the hard way that judges aren’t impressed by technological incompetence or outright deception.
