The legal profession is grappling with a growing problem: the increasing frequency of artificial intelligence (A.I.)-generated errors, particularly fabricated case-law citations, in court filings. A network of lawyers acting as “legal vigilantes” has emerged to expose and document these instances of “A.I. slop,” raising concerns about the integrity of the legal system and the reputation of the bar.
The Problem of Fabricated Citations
The issue began to gain prominence earlier this year, when a lawyer in a Texas bankruptcy case cited a nonexistent case called Brasher v. Stewart. The judge blasted the lawyer for the error, mandated six hours of A.I. training, and referred him to the state bar’s disciplinary committee. The incident highlighted a troubling trend: chatbots frequently generate inaccurate information, including entirely fabricated case-law citations, that then makes its way into legal filings.
The Rise of Legal Vigilantes
In response, a group of lawyers has begun tracking and documenting these errors. Robert Freund, a Los Angeles-based lawyer, and Damien Charlotin, a lawyer and researcher in France, are leading efforts to build public databases cataloging these instances. They and others search legal tools like LexisNexis for keywords such as “artificial intelligence,” “fabricated cases,” and “nonexistent cases,” often surfacing blunders through judges’ opinions that publicly scold the offending lawyers. To date, they have documented more than 500 cases of A.I. misuse.
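What that keyword pass looks like in practice can be sketched in a few lines. The snippet below is a minimal, hypothetical illustration rather than the trackers’ actual tooling: it assumes a local folder of opinion text files (opinions/) in place of a LexisNexis query, and the pattern list simply mirrors the search terms quoted above.

```python
import re
from pathlib import Path

# Keyword patterns mirroring the search terms the trackers reportedly use.
PATTERNS = [
    re.compile(term, re.IGNORECASE)
    for term in (
        r"artificial intelligence",
        r"fabricated cases?",
        r"non-?existent cases?",
    )
]

def flag_opinions(corpus_dir: str) -> list[tuple[str, list[str]]]:
    """Return (filename, matched terms) for each opinion that hits any pattern."""
    hits = []
    for path in sorted(Path(corpus_dir).glob("*.txt")):
        text = path.read_text(encoding="utf-8", errors="ignore")
        matched = [p.pattern for p in PATTERNS if p.search(text)]
        if matched:
            hits.append((path.name, matched))
    return hits

if __name__ == "__main__":
    # A hit only means a judge discussed A.I. or fabricated citations;
    # each flagged opinion still needs a human read before it is cataloged.
    for name, terms in flag_opinions("opinions"):
        print(f"{name}: {', '.join(terms)}")
```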
Why This Matters
Stephen Gillers, an ethics professor at New York University School of Law, believes these errors are damaging the reputation of the bar, adding that lawyers should be ashamed of their colleagues’ conduct. The widespread adoption of chatbots, tools that many firms are still only experimenting with, sits uneasily alongside lawyers’ well-established duty to ensure the accuracy of their filings.
A Double-Edged Sword: A.I. Assistance and Human Error
While chatbots can be valuable tools, helping lawyers and even pro se litigants (people representing themselves) articulate legal arguments effectively, the potential for error is significant. Jesse Schaefer, a North Carolina-based lawyer, notes that, pitfalls aside, chatbots can help people “speak in a language that judges will understand.”
However, the problem increasingly stems from legal professionals relying on A.I. without sufficient verification. The consequences can be severe, as in the case of Tyrone Blackburn, a New York lawyer fined $5,000 for filing A.I.-generated briefs riddled with hallucinations and fabrications. His client subsequently fired him and filed a complaint with the bar.
The Limited Impact of Current Penalties
Despite the growing problem, court-ordered penalties have yet to act as a significant deterrent. Robert Freund believes that the continued occurrence of these errors demonstrates that current consequences are insufficient. The legal vigilantes hope that the visibility offered by public catalogs will increase accountability and encourage greater caution when using A.I.
The Future of A.I. and Legal Practice
Peter Henderson, a computer science professor at Princeton University, is working on methods to directly identify fake citations, moving beyond reliance on keyword searches. Ultimately, he and others hope that increased awareness and technological advancements will help to mitigate the problem of “A.I. slop” and preserve the integrity of the legal system.
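The article does not describe Henderson’s technique, so the sketch below shows only one generic form that direct verification could take: extract citation-shaped strings from a filing and check each against an index of real cases. The regex, the KNOWN_CITATIONS set, and the sample brief are all hypothetical stand-ins; a production system would query an authoritative court-records source rather than a hard-coded set.

```python
import re

# Loose pattern for citations shaped like "Party v. Party, 347 U.S. 483".
CITATION_RE = re.compile(
    r"[A-Z][\w.'-]*(?:\s(?:of|the|[A-Z][\w.'-]*))*"  # first party name
    r"\sv\.\s"
    r"[A-Z][\w.'-]*(?:\s(?:of|the|[A-Z][\w.'-]*))*"  # second party name
    r",\s\d+\s[\w. ]+\s\d+"                          # volume, reporter, page
)

# Hypothetical verified index; a real checker would query a court-records
# database instead of a hard-coded set.
KNOWN_CITATIONS = {
    "brown v. board of education, 347 u.s. 483",
}

def find_unverified(filing_text: str) -> list[str]:
    """Return citation strings in a filing that the index cannot confirm."""
    return [
        cite
        for cite in CITATION_RE.findall(filing_text)
        if cite.lower() not in KNOWN_CITATIONS
    ]

if __name__ == "__main__":
    brief = (
        "Plaintiff relies on Brown v. Board of Education, 347 U.S. 483, "
        # An invented reporter citation for the nonexistent case from the article:
        "and on Brasher v. Stewart, 123 F. Supp. 2d 456."
    )
    for cite in find_unverified(brief):
        print("UNVERIFIED:", cite)
```

Even this toy version illustrates why keyword search alone falls short: the fabricated citation is perfectly well-formed and is caught only by the lookup.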
“I like sharing with my readers little stories like this,” said Eugene Volokh, a law professor at the University of California, Los Angeles. “Stories of human folly.”
In conclusion, the rise of A.I.-generated errors in legal filings presents a significant challenge to the legal profession. The emergence of legal vigilantes, coupled with ongoing research into detection methods, suggests a growing commitment to addressing the problem and safeguarding the accuracy and reliability of legal proceedings. The legal community must work to understand the limitations of these tools and prioritize human oversight to prevent these costly and damaging mistakes.
