Consequences of AI Mistakes in Court Filings

Summary

Using artificial intelligence (AI) tools for legal work, such as drafting court filings or conducting research, can introduce significant errors, including entirely fabricated case citations. These mistakes, known as "AI hallucinations," can result in court sanctions, fines, and damage to a lawyer's reputation.

  • Verify all AI outputs: Always cross-check AI-generated content against credible sources to ensure accuracy before including it in legal documents (a minimal verification sketch follows this summary).
  • Take responsibility: Do not sign or submit filings created using AI without thoroughly reviewing and understanding the content, as you are ultimately accountable for what is submitted.
  • Prioritize AI education: Build a strong understanding of how AI works and its limitations to minimize risks and ensure its responsible use in legal practice.
Summarized by AI based on LinkedIn member posts
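
As a rough illustration of the "verify all AI outputs" step, here is a minimal, hypothetical Python sketch. The file name, regex coverage, and overall workflow are illustrative assumptions, not a vetted tool: it simply pulls reporter-style citations out of a draft so each one can be looked up by hand in a real research database before anyone signs the filing.

```python
import re
from pathlib import Path

# Illustrative pattern for common federal reporter citations,
# e.g. "598 F. Supp. 3d 1123" or "143 S. Ct. 2028". Real filings use many
# more reporters; a production workflow would rely on a dedicated citation
# parser and a human reader, not a hand-rolled regex.
CITATION_RE = re.compile(
    r"\b\d{1,4}\s+"
    r"(?:U\.S\.|S\.\s?Ct\.|F\.(?:2d|3d|4th)?|F\.\s?Supp\.(?:\s?2d|\s?3d)?)"
    r"\s+\d{1,4}\b"
)

def extract_citations(draft_text: str) -> list[str]:
    """Return the unique reporter-style citations found in a draft."""
    return sorted(set(CITATION_RE.findall(draft_text)))

if __name__ == "__main__":
    # "draft_motion.txt" is a hypothetical file name for a draft filing.
    draft = Path("draft_motion.txt").read_text(encoding="utf-8")
    for citation in extract_citations(draft):
        # Each hit still has to be pulled up and read in Westlaw, Lexis,
        # or CourtListener by a person before the document is signed.
        print(f"VERIFY MANUALLY: {citation}")
```

Even with tooling like this, the posts below make the same point: no script replaces reading each cited case and taking responsibility for what is filed.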
  • Steven Callahan

    Dallas-based IP + business litigator at Charhon Callahan Robson & Garza

    26,218 followers

    The Morgan & Morgan fake-case AI sanctions order is out. The attorney who did the deed (i.e., drafted and filed motions in limine in which 8 of the 9 cited cases were fake) had his pro hac vice admission revoked and was fined $3,000. Lead counsel, who didn't review the filing and had no knowledge that AI was used in its creation, was hit with a $1,000 fine. Local counsel—who was in the same boat as lead counsel (no review of the filing, no knowledge that AI was used in drafting it)—was fined $1,000 as well. Two takeaways here: 1. Don't trust AI; verify everything it spits out, because it literally makes up cases. 2. Don't sign your name to filings (or allow your name to be signed to filings) if you aren't reviewing them. If your name is signed, that comes with responsibilities, and it's no excuse to say that you weren't involved with the filing.

  • Kassi Burns

    TEDx Speaker | Attorney | AI & ML Practitioner | Podcaster | Author ~Always Curious~

    6,420 followers

    🖨️ Hot Off The Presses 🖨️ A USDC judge in Wyoming has published an order on sanctions for the firm that recently filed Motions in Limine with GAI case citation hallucinations. A key quote from this order: "While technology continues to evolve, one thing remains the same - checking and verifying the source." The court found three attorneys in violation of Rule 11(b): the attorney who drafted the Motions in Limine containing the hallucinated citations, that attorney's supervisor, and local counsel (all of whom were signatories to the MiL). The Order includes the prompts that resulted in the citation hallucinations:
    ➡️ "add to this Motion in Limine Federal Case law from Wyoming setting forth requirements for motions in limine"
    ➡️ "add more case law regarding motions in limine"
    ➡️ "Add a paragraph to this motion in limine that evidence or commentary regarding an improperly discarded cigarette starting the fire must be precluded because there is no actual evidence of this, and that amounts to an impermissible stacking of inferences and pure speculation"
    ➡️ "Include case law from federal court in Wyoming to support exclusion of this type of evidence."
    Sanctions issued by the court: for the drafting attorney, (1) revocation of pro hac vice admission and (2) a $3,000 fine; for the supervising attorney, a $3,000 fine; for local counsel, a $1,000 fine. There are many takeaways here, not least of which are the potential long-term professional consequences of failing to validate sources (e.g., loss of pro hac vice admission). The fact that these cases keep popping up is a testament that AI literacy should be a priority for the legal profession, which was a topic of deep discussion in the Emerging Practice Trends subcommittee (of the State Bar of Texas' Legal Practice Management committee) meeting I had today with fellow members Patrick Wright, Scott Skelton, Greg Sampson, and Trish McAllister. What are some ways we should be tackling this to help educate and inform the legal community beyond our immediate network? I would love to hear your thoughts 👏 #ailiteracy #legalethics #aiethics #genai

  • Oliver Roberts

    Co-Head, AI Practice Group @ Holtzman Vogel | Co-Director @ WashU Law AI Collaborative | Editor-in-Chief, AI & the Law @ The National Law Review | Founder @ Wickard.ai | Adjunct Professor @ WashU Law

    4,997 followers

    Even the most AI-savvy lawyer can fall into this hallucination trap. A Latham & Watkins associate already has. By now, almost every lawyer knows that AI chatbots generate hallucinations. They know they cannot ask an AI chatbot for caselaw and trust the output. They must verify it. But many lawyers have their guard down when *they input* legitimate caselaw citations and authority into an AI chatbot and ask the chatbot to act on that legitimate source (e.g., to bluebook, reword, rephrase, etc.). That is a grave mistake. Just days ago, a Latham & Watkins associate filed a document in the Northern District of California that included an AI-hallucinated citation. She had given Claude a legitimate article link and asked it to generate a citation. Claude preserved the correct publication title, year, and link—but fabricated an article title and listed incorrect authors. She didn't catch it. The citation made its way into a court filing. Days later, she filed this declaration acknowledging the error: https://lnkd.in/gA42ij6c The attorney said this "was an honest citation mistake and not a fabrication of authority." Unlike prior incidents involving entirely fabricated caselaw, the lawyer started with a legitimate source and ended with an inaccurate output. Key Takeaway for Attorneys: Even when inputting legitimate sources or caselaw into tools like ChatGPT, Claude, or similar AI systems, there remains a significant risk that the output will include altered or fabricated information. This risk is not limited to citation formatting. It also applies when using AI to assist in drafting substantive legal content: even accurate case citations provided by the user (as the input) may be distorted by the AI in the output. I expect we'll see more of these cases because this error is more subtle. Lawyers feel safe when they supply the source—but trusting the output remains a risk. The question becomes: should judges treat these mistakes the same as (or more leniently than) cases in which lawyers file entirely fabricated caselaw?

  • Ryan McCarl

    Author of Elegant Legal Writing and Partner at Rushing McCarl LLP

    10,560 followers

    It has become fashionable to play “gotcha” with opponents whose briefs include erroneous citations caused by reliance on artificial intelligence software. Such errors can lead to sanctions, bar discipline, and public embarrassment. I’ve heard of at least one website tracking cases where a lawyer was caught citing nonexistent authorities, and such incidents are often highlighted on LinkedIn. Given the professional consequences of these mistakes, the courteous approach is to notify opposing counsel of the error and ask whether they will withdraw or correct the filing before you bring it to the judge’s attention. There’s no excuse for uncritically copying AI-generated text into a brief without verifying the citations and quotations, reading enough of each cited case to confirm that it supports the point for which it’s cited, and ensuring that all facts and rules are accurately described. These quality-control steps are crucial for all litigation briefs, whether or not the attorney uses AI. And although wholesale invention of case citations is an AI-specific blunder, attorneys were misquoting, mischaracterizing, and failing to cite-check cases long before AI software existed. Even the most cautious attorneys sometimes make these mistakes. The risk increases when managing a large portfolio of cases, requiring you to delegate and giving you less time to personally check each citation. As AI-based tools spread in legal practice — not just through chatbots but through new features in widely used software like Microsoft Word, Google Docs, and Lexis — the chances increase that even cautious attorneys will find a mortifying AI-related blunder in a brief they’ve signed. Remember that possibility when deciding how to handle an opponent’s mistake. Criticizing opponents’ poor arguments is necessary in litigation, but the goal is to win the case, not to embarrass or ruin an opponent. In the long run, acting professionally when you catch opposing counsel making an obvious mistake — by bringing it to their attention before calling it out in a public filing — will make you a more respected advocate and better serve your clients. #legalwriting #legaltech #litigation

  • Sateesh Nori

    Lawyer | Professor | Nonprofit Executive | Author | Legal Strategist | TEDx Speaker | ABA Legal Rebel | ABF Fellow | 5x Marathoner | LSC Leadership Council | Keynote Speaker | Creator RoxanneAI, Depositron

    3,983 followers

    AI Has Officially Reached Queens Housing Court—It’s Not Just Big Law Anymore This week, in a courtroom in Queens, a landlord’s attorney filed an affirmation in an eviction case in which they cited SEVEN fabricated cases. Not just factually incorrect—but entirely fabricated. A hallucination. The likely culprit? ChatGPT. The Judge in the case is recommending sanctions against that attorney. That decision is here: https://lnkd.in/eWkevnYM This marks a watershed moment. Generative AI has permeated the daily grind of housing court—not in a corporate skyscraper or an Ivy-clad appellate brief, but in the small, high-volume, under-resourced courtrooms where people’s homes are on the line. AI has reached solo practitioners and mom-and-pop landlords. It’s here. We should not be surprised. Generative AI tools are fast, persuasive, and free. They feel like legal assistants, but without the oversight or training. And in a court system already strained by volume and inequality, the temptation to lean on them—especially for time-strapped or inexperienced lawyers—is enormous. But this incident is more than a digital footnote. It’s a harbinger. It shows us that AI isn’t just a tool for high-end firms; it’s already reshaping the front lines of justice. And unless we move quickly to educate, regulate, and integrate these technologies responsibly, we’ll see more hallucinated citations, more procedural chaos—and more harm to the very people the legal system is meant to protect. BAD LAWYERS WILL ALWAYS EXIST, BUT WITH AI, THEY WILL BE DANGEROUS. This moment calls for vigilance, not panic. Innovation, not rejection. We need AI literacy across the legal profession, especially in the spaces where access to justice is already fragile. Because in Queens Housing Court—and courts like it across the country—the future has already arrived. #LegalTech #AccessToJustice #AIandLaw #QueensHousingCourt #ChatGPT #LegalEthics #HousingJustice #LegalInnovation #TenantsRights

  • Frank Ramos

    Best Lawyers - Lawyer of the Year - Personal Injury Litigation - Defendants - Miami - 2025 and Product Liability Defense - Miami - 2020, 2023 🔹 Trial Lawyer 🔹 Commercial 🔹 Products 🔹 Catastrophic Personal Injury🔹AI

    80,272 followers

    Updated: Lawyers from plaintiffs' law firm Morgan & Morgan are facing possible sanctions for a motion that cited eight nonexistent cases, at least some of which were apparently generated by artificial intelligence. In a Feb. 6 order, U.S. District Judge Kelly H. Rankin of the District of Wyoming told lawyers from Morgan & Morgan and the Goody Law Group to provide copies of the cited cases, and if they can't, to show cause why they shouldn't be sanctioned. Law360 and Original Jurisdiction have coverage. The cases identified by the court had been "hallucinated" by an internal AI platform and were not legitimate, the firms said in a Feb. 10 response to the show-cause order. "This matter comes with great embarrassment and has prompted discussion and action regarding the training, implementation and future use of artificial intelligence within our firm," the response said. "This serves as a cautionary tale for our firm and all firms, as we enter this new age of artificial intelligence." The law firms' brief had cited nine cases, but Rankin could locate only one of them. Some of the citations did lead to cases under different names.

  • Rohit Dusanapudi

    Aspiring Data Scientist | Computer Science SME at Chegg | Data Science | Machine Learning | Python | SQL | Visualization

    15,971 followers

    Every few weeks, it seems like there's a new headline about a lawyer getting in trouble for submitting filings containing, in the words of one judge, "bogus AI-generated research." The details vary, but the throughline is the same: an attorney turns to a large language model (LLM) like ChatGPT to help them with legal research (or worse, writing), the LLM hallucinates cases that don't exist, and the lawyer is none the wiser until the judge or opposing counsel points out their mistake. In some cases, including an aviation lawsuit from 2023, attorneys have had to pay fines for submitting filings with AI-generated hallucinations. So why haven't they stopped? The answer mostly comes down to time crunches and the way AI has crept into nearly every profession. Legal research databases like LexisNexis and Westlaw have AI integrations now. For lawyers juggling big caseloads, AI can seem like an incredibly efficient assistant. Most lawyers aren't necessarily using ChatGPT to write their filings, but they are increasingly using it and other LLMs for research. Yet many of these lawyers, like much of the public, don't understand exactly what LLMs are or how they work. … Read the full story at The Verge.
