Insights
I’ve been following recent reporting on AI-generated errors in legal filings, including a New York Times article, “Prosecutor Used Flawed A.I. to Try to Keep a Man in Jail, His Lawyers Say.” The piece highlights attorneys who track AI-generated “slop” in court briefs and describes a case in which a Nevada County prosecutor’s office filed a motion containing inaccurate legal citations.
This is a real warning sign, but not a reason to abandon AI.
The problem wasn’t that the prosecutors opened the door to a new research tool. The problem was that the workflow did not make accountability and verification the highest priority.
In criminal cases, the citations you put in a filing matter precisely because judges and opposing counsel rely on them to make decisions that affect liberty, bail, and sometimes the very structure of a case.
When AI invents a nonexistent case or misattributes precedent, that is not a minor formatting error. It is a substantive mistake that can undermine trust in justice itself.

In my current work, we think very deliberately about how AI is integrated into legal and investigative workflows.
If AI is going to assist with research or drafting, these conditions are non-negotiable:
A human prosecutor or lawyer owns every sentence in the filing. There is no “AI by default.”
Every legal citation is verified against authoritative sources before it is submitted.
The workflow captures who checked what, what was generated by a model, and what was confirmed or corrected by a person; a minimal sketch of such a record follows this list.
The record is auditable so that another human looking at the same filing later can explain why each reference matters.
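Purely as an illustration of that third condition, here is a minimal sketch of a per-citation audit record. The field names and structure are hypothetical, not drawn from TimePilot or any other real platform; they simply show the kind of provenance a verification workflow needs to capture.

```python
# Hypothetical per-citation audit record for an AI-assisted filing.
# Illustrative only; field names are not drawn from any real system.
from dataclasses import dataclass
from datetime import date
from enum import Enum


class Origin(Enum):
    HUMAN_DRAFTED = "human_drafted"      # written by the attorney
    MODEL_GENERATED = "model_generated"  # proposed by an AI tool


class Disposition(Enum):
    CONFIRMED = "confirmed"   # verified against an authoritative source
    CORRECTED = "corrected"   # fixed by a person before filing
    REMOVED = "removed"       # could not be verified, so it was struck


@dataclass
class CitationRecord:
    citation: str             # the citation as it appears in the brief
    origin: Origin            # who, or what, produced it
    verified_against: str     # authoritative source consulted
    verified_by: str          # the named human who did the check
    verified_on: date         # when the check happened
    disposition: Disposition  # outcome of the human review
    notes: str = ""           # why this reference matters to the filing


def ready_to_file(records: list[CitationRecord]) -> bool:
    """True only if every citation was reviewed by a named person and
    nothing unverified remains in the document."""
    return all(
        r.verified_by and r.disposition in (Disposition.CONFIRMED, Disposition.CORRECTED)
        for r in records
    )
```

A gate like ready_to_file is what keeps “AI by default” out of the final document: nothing goes to the court until a named person has confirmed or corrected every reference.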
That is also why the tools matter.
There is a meaningful difference between open-ended chat systems and purpose-built platforms designed for criminal justice work. Tools like TimePilot operate in closed, CJIS-compliant environments and work only from the evidence and records intentionally loaded into the system. They do not invent facts or citations because they are never asked to reach beyond that record. Instead, they help prosecutors and investigators examine large evidence sets, build timelines, test narratives against the record, and present findings in a way that can be reviewed, challenged, and defended.
When AI is constrained to the case file and paired with human judgment at every step, it reduces risk instead of amplifying it. That is not a “nice to have.” It is the only way AI can help rather than harm.

Too many stories about AI in law focus on hallucination statistics or scary headlines. What matters most to justice and oversight is whether we have accountable processes that protect human judgment instead of replacing it.
If lawyers are going to use AI tools, those tools must be integrated into a workflow that preserves confidence in the outcome.
There is a broader story here than “AI went wrong.” It is the story of how the justice system can use modern tools responsibly, with transparency, restraint, and human accountability at the center.
The cautionary tales matter, but so do the examples of systems designed to prevent exactly the kinds of failures that make headlines.