According to The Wall Street Journal, a growing contingent of judges is embracing AI to draft opinions and conduct legal research, despite widespread concerns over accuracy. U.S. District Judge Michael Rodriguez used an AI tool to produce a first draft of a 140-page opinion in minutes, a task that had previously taken his team 10 months. LexisNexis now offers its AI tool to all federal judges, and the startup Learned Hand has active pilots in 10 state court systems, including the Michigan Supreme Court. But the risks are already visible: lawyers have been sanctioned for AI-generated errors, including one fined $85,000, and Senator Chuck Grassley admonished two federal judges for opinions containing fictitious litigants and citations. Meanwhile, the American Arbitration Association launched an “AI Arbitrator” for construction disputes, trained on 1,500 past decisions, which has been flooded with requests since its debut.
The Speed vs. Accuracy Trap
Here’s the thing: the appeal is obvious. Judges are drowning in paperwork. When a tool can churn out a draft of findings of fact in minutes instead of weeks, the appeal is a siren song. The promise is better access to justice through faster resolutions. But the legal system isn’t built for speed; it’s built for meticulous, precedent-based accuracy. And that’s where the cracks are showing. We’ve moved from lawyers getting busted for fake citations to judges issuing rulings with “nonexistent sworn declarations.” That’s a serious escalation. It forces the question: if the judge’s draft opinion is AI-generated and the lawyer’s brief is AI-generated, who’s actually doing the thinking?
Winners, Losers, and a New Competitive Front
The competitive landscape is fascinating. Legacy legal research giants like Thomson Reuters and LexisNexis are racing to embed AI into their expensive platforms, essentially defending their turf. But startups like Learned Hand are doing something smarter: going straight to the source by building tools specifically for judges and court systems. That’s a direct sales channel with immense leverage. If a state supreme court adopts your tool, you’ve got a powerful case study. The losers, in the short term, might be junior associates and law clerks whose traditional research and drafting tasks are being automated. But the real risk is to the litigants. When the errors are substantive enough to draw Senate scrutiny, public trust erodes.
The Rules Can’t Keep Up
So we’re in a chaotic middle period. Some judges, like Judge Michael Boyko in Ohio, have issued standing orders banning the use of AI by lawyers in their courtrooms. Others are tiptoeing in with strict oversight, such as training AI on their own past rulings to maintain consistency. The response from the judges who received the Grassley letter is telling: they implemented “corrective measures.” In other words, they’re building guardrails after the car has already left the road. This patchwork of personal rules is unsustainable. The system needs a coherent, top-down framework, and fast, because the AI isn’t waiting.
The “Human in the Loop” Illusion
Everyone, from judges to the CEO of the American Arbitration Association, insists there will always be a “human in the loop.” But the AAA’s own AI Arbitrator tells a different story. It’s a system designed to replace a human arbitrator in specific disputes, trained to predict a human outcome. That’s a fundamental shift. It raises a huge question: is the goal to assist human judgment, or to simulate and eventually replace it for efficiency’s sake? For overburdened state courts handling 97% of the nation’s cases, the pressure to choose the latter will be immense. The technology is advancing faster than our ethical or procedural understanding of it. And in a domain where decisions alter lives, liberty, and property, that’s a dangerous place to be.
