In district attorneys’ offices across the country, the warning rings out frequently: “Beware of AI,” we’re told. The reasons are clear. AI tools have been misused by prosecutors seeking shortcuts, resulting in legal disasters and failed cases.
One prosecutor resigned after he “used artificial intelligence to draft a brief, filed it without adequate review, and it contained fabricated quotations and misrepresented case holdings,” attorney Lance P. Martin wrote in a recent National Law Review column.
A recent Reuters report notes that the Georgia Supreme Court disciplined a prosecutor, “finding her misuse of artificial intelligence tools led to fake and misleading case citations appearing in a murder case ruling.” The prosecutor was suspended from practicing before the court for six months.
These cautionary tales serve as an important wake-up call to any prosecutor who might be tempted to let technology replace due diligence. But there’s also another concern: that wariness of AI could make prosecutors reluctant to use it in any capacity.
I’ve already heard this from some attorneys, who say, “I’m just not going to use it.” That’s a bad idea. Not only can AI tools help us do our jobs better, but failing to understand them creates dangers of its own.
As with other technological revolutions, the AI revolution is opening up all sorts of new possibilities — not just for everyone committed to doing good, but also for criminals. The FBI has warned that “criminals exploit generative artificial intelligence (AI) to commit fraud on a larger scale which increases the believability of their schemes.” The Commodity Futures Trading Commission has warned that the technology “makes it easier than ever to create false images, voices, videos, live-streaming video chats, social media profiles, and malicious websites designed to look like financial trading platforms.”
AI is being used to automate phishing campaigns, launch cyberattacks, produce child sexual abuse material (CSAM), and more. Prosecutors need to understand all this so we can be on the lookout and fight back.
I was thinking about this when entrepreneur Mark Cuban posted recently on X, noting the challenges that AI presents to CEOs. There are two kinds of companies, he argued: those that “are great at AI,” and those that “will go out of business.”
As an elected prosecutor in an overextended West Texas office, I understand that AI can be the difference between success and failure. Prosecutors need to be using AI as effectively as the bad guys if justice is to prevail.
Prosecutors and our staff need training in AI. We need to know how it’s being used by bad actors and how to use AI to discover these problems. That’s only the beginning.
Ensuring legitimate evidence
The ability to create realistic fake material wreaks havoc in more ways than one. First, there is the danger of bad actors using these tools to fabricate evidence or alibis.
In a California case, self-represented plaintiffs in a housing dispute “submitted deepfake videos as authentic testimony,” leading the court to dismiss the action with prejudice. The National Center for State Courts (NCSC) notes that threats from AI go beyond “sophisticated video manipulation.” The group adds: “In Florida, a woman spent two days in jail after her ex-boyfriend allegedly fabricated AI-generated text messages that led to her arrest for violating a protective order.”
These technologies also decrease trust in real evidence. If a prosecutor, judge, and jury can’t know what’s real, evidence loses its value. The NCSC calls AI-generated evidence “a threat to public trust in the courts.”
Not only do prosecutors need the most sophisticated tools to detect fakes, we also need tools that can work very quickly. That’s especially important in Texas, where discovery laws are known for being one-sided, with the defense getting a great deal of access but prosecutors getting little to none. This means a defendant could provide AI-manufactured “evidence” at trial that we have had no opportunity to authenticate in advance.
All of these problems require real solutions, and none would be solved by our offices staying away from AI altogether.
We must be wary of tools that make big promises, of course, but there’s a real AI gap in the criminal justice system. A recent report from Stanford Law School notes that the necessary tools remain out of reach: “Criminal-justice entities who encounter AI tools—thousands of under-resourced police departments, prosecutors’ offices, courts, and probation units—lack the technical expertise to evaluate these tools rigorously, while vendors market directly to practitioners.”
We need proven, trustworthy tools to help us do our jobs, and training in how to use these tools correctly. The Association of Prosecuting Attorneys offers some of this training, which is a good start. Small offices like mine in West Texas and big offices in major cities will surely use AI differently, but we should all operate with the same knowledge of what works when we need it.
The AI revolution marks a huge technological step forward. To refuse to participate would be short-sighted. We just need to ensure that our participation always serves the goal of justice.
Disclaimer: The opinions and views expressed in this article are those of the author and not necessarily those of The National Law Review.