Public Relations


AI Slop in the Legal Sector: When Efficiency Becomes a Liability

Noxtua Team


The new year begins with a problem that has been building in the legal AI scene for some time: legal professionals are being flooded with AI-generated content that they must somehow manage.

LinkedIn is full of frustrated outcries, like that of a German lawyer who reported in December 2025 on a custody case in which a self-represented party used an LLM to "dismantle" a court-appointed expert report, producing a 38-page brief filled with citations and case law.[1] The result? Judges and opposing counsel must now read, verify, and respond to everything. And this, he warned, will only increase at local courts (Amtsgerichte), where legal representation isn't mandatory. The problem intensified on January 1st, 2026, when Germany raised the mandatory-representation threshold from €5,000 to €10,000, a change the Federal Bar Association (Bundesrechtsanwaltskammer, BRAK) and the German Bar Association (Deutscher Anwaltverein, DAV) had warned in 2025 could burden courts with lower-quality submissions.[2]

And the burden cannot be discharged by superficial reading or by simply forwarding the brief to the opposing party: As the Higher Regional Court of Karlsruhe (Oberlandesgericht Karlsruhe) made clear in 2022, courts must read voluminous submissions "regardless of workload" to respect the parties' fundamental right to be heard.[3] But what happens when those submissions multiply exponentially, generated in minutes by ChatGPT and filled with content the submitting party doesn't fully understand?

Welcome to the era of AI slop in the legal sector. 


What Is AI Slop? 

The term "AI slop" captured 2025's zeitgeist so effectively that Merriam-Webster and Macquarie Dictionary both named it Word of the Year.[4] Mentions of the term increased ninefold from 2024 to 2025, according to online media company Meltwater, with negative sentiment hitting 54 percent by October.[5] The reason? AI-generated content now makes up more than half of all English-language content on the web, according to SEO firm Graphite.[6]  

But what exactly qualifies as "slop"? In German business contexts, the term has evolved into "AI Workslop": AI-generated content so insubstantial that it creates additional work rather than saving time. The Handelsblatt captured the frustration: Employees are "wasting hours on AI garbage" that requires more effort to correct than it would have taken to create properly from scratch.[7] 

The legal sector has become a particularly acute battleground for this phenomenon. Research librarians report wasting up to 15 percent of their work hours responding to requests for nonexistent records that ChatGPT or Google Gemini hallucinated.[8] Worse: Fake citations are being laundered through academic papers that cite other papers containing AI hallucinations, creating a self-reinforcing cycle that pollutes legal databases and scholarship.[8]   


AI Slop Hits Courtrooms 

In 2025, German courts began documenting the problem in published decisions. The pattern was consistent: Lawyers submitted professionally formatted briefs containing citations that didn't exist. 

Local Court Cologne (Amtsgericht Köln) (July 2, 2025): A family law judge discovered that a lawyer's brief contained scholarship citations and case law references that were "apparently generated by artificial intelligence and freely invented." The court noted that neither the cited books nor the legal decisions existed. The judge issued a public rebuke, stating such submissions "make finding justice more difficult, mislead the unknowing reader, and severely damage the reputation of the rule of law and especially the legal profession."[9] 

Higher Regional Court of Celle (Oberlandesgericht Celle) (April 29, 2025): Both parties in an appeal submitted briefs citing non-existent OLG decisions. A case that will be hard to beat in terms of professional negligence, it at least proved that opposing counsel can find common ground: relying on AI-generated legal research without verifying it.[10]

Writing in Anwaltsblatt, legal technology expert Markus Hartung noted that similar cases have emerged across Germany and internationally.[11] The scale is striking: More than 773 cases globally now involve AI hallucinations in legal filings, with more than 250 in the United States alone.[12] 

But if you think only self-represented litigants and lawyers are producing slop, you're far too optimistic. Bloomberg Law's Eleanor Tyler documented cases where judges themselves have incorporated AI-generated fabrications into their decisions.[13] When even the judiciary, tasked with evaluating the quality of submissions, produces slop, the system's quality control mechanisms are in danger of breaking down. 


The Deterrence Gap: Why Slop Persists and How to Fight Back 

Those appalled by such shocking cases are probably looking forward to reading what sanctions were imposed on these sloppy lawyers and judges. The answer in Germany? Barely any.[11] 

In the United States, sanctions have ranged from $2,000 to $15,000 in fines – amounts that may not meaningfully deter bad actors, especially in high-stakes litigation. The most serious consequence to date: One Wyoming federal judge revoked an attorney's pro hac vice admission (the right to appear in that court despite being licensed elsewhere) for "unethical behavior," after discovering that eight out of nine cases cited in the attorney's filings were AI hallucinations.[14]

The UK is taking a harder line. In June 2025, Dame Victoria Sharp, President of the King's Bench Division, warned that lawyers misusing AI could face sanctions ranging from public admonition to contempt proceedings, and in the most egregious cases, criminal prosecution for perverting the course of justice.[15] When a lawyer submitted 45 case citations – 18 of which didn't exist – Dame Victoria emphasized that the court's decision not to initiate contempt proceedings was "not a precedent."

Without strong enough deterrents, Bloomberg Law's Eleanor Tyler warns, inundating opposing parties with unverified AI slop becomes a litigation tactic: Produce fake briefs in 30 minutes, force your opponent to spend hours verifying every citation, and face minimal consequences if caught.[13]

But what about sanctions for self-represented parties who submit AI hallucinations? What about judges who incorporate fabricated citations into their own decisions? The current frameworks largely ignore these scenarios. Significant work remains in procedural law and court organization to address the full scope of the AI slop problem. 


Your Own Defense Strategy Starts Now 

In the meantime, German legal practitioners and commentators have begun developing practical guidance for lawyers navigating this new reality:[16][17] 

Verify everything, systematically. This is what the German Bar Association demands, what courts worldwide require from lawyers, and what professional responsibility obviously dictates. And this might be less time-consuming than you think: 

Use specialized Legal AI tools. Specialized Legal AI solutions built on comprehensive law and scholarship databases display citations verbatim during drafting rather than generating plausible-sounding fakes, making source-checking straightforward and efficient. They also enable you to verify all of opposing counsel's citations in a matter of minutes, making the cost of unmasking professional negligence remarkably low. The same verification applies to court opinions, though German courts appear more prudent in their use of AI and will hopefully arm themselves only with professional-grade tools. 

Check internal consistency. Hallucinated references aside, illogical paragraphs and contradictory facts are typical byproducts of AI-generated content. You can use agentic processes offered by specialized solutions to check for inconsistencies in both your arguments and your opponent's. But when it comes to logic and coherence, nothing replaces a well-trained lawyer's brain.
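To illustrate how mechanical the first pass of citation-checking can be, here is a minimal Python sketch – not any vendor's product, and far cruder than a database-backed Legal AI tool – that extracts German court citations from a brief with a simplified regular expression and flags any that are absent from a trusted reference set. Both the citation pattern and the hand-maintained reference set are illustrative assumptions:

```python
import re

# Simplified pattern for German court citations of the form
# "OLG Celle, Beschluss vom 29.04.2025 – 5 U 1/25".
# Real citations are far more varied (multi-word court names, Urteil/Beschluss
# variants, different dash conventions); this is a deliberately narrow sketch.
CITATION_RE = re.compile(
    r"(?:AG|LG|OLG|BGH)\s+\w+,\s+(?:Urteil|Beschluss)\s+vom\s+"
    r"\d{2}\.\d{2}\.\d{4}\s+–\s+\d+\s+[A-Za-z]+\s+\d+/\d+"
)

def flag_unverified(brief_text: str, verified: set[str]) -> list[str]:
    """Return every citation found in the brief that is not in the verified set."""
    return [c for c in CITATION_RE.findall(brief_text) if c not in verified]

# Example: one citation is in the reference set, one is not.
brief = (
    "Wie OLG Celle, Beschluss vom 29.04.2025 – 5 U 1/25 ausführt ... "
    "siehe auch AG Köln, Beschluss vom 02.07.2025 – 312 F 130/25."
)
verified = {"OLG Celle, Beschluss vom 29.04.2025 – 5 U 1/25"}
print(flag_unverified(brief, verified))
# Only the citation missing from the reference set is surfaced for manual review.
```

The point of the sketch is the division of labor: a machine can surface which citations need checking in seconds, but confirming that a flagged decision actually exists – and says what the brief claims it says – remains the lawyer's job.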

To preserve public trust in the justice system while procedural law and judicial administration catch up, the ethical standards that have defined continental European legal practice must be maintained with renewed vigilance. Lawyers, courts, universities, legal publishers, and technology developers need to work together. Training matters – not just on tools, but on the professional values and intellectual discipline that distinguish lawyers from prompt engineers.  


Footnotes 

  1. Dirk Trieglaff, "LLM-Flut am Amtsgericht," LinkedIn, December 2025.

  2. Bundesrechtsanwaltskammer, "Höhere Streitwertgrenzen für Amtsgerichte und für Rechtsmittel ab dem 1.1.2026," December 4, 2025.

  3. OLG Karlsruhe, Beschluss vom 11.5.2022 – 9 W 24/22, discussed in "Befangenheit: Richter muss Anwaltsschriftsätze vollständig lesen," Haufe Recht, August 10, 2022.

  4. "Merriam-Webster's 2025 Word of the Year is 'slop'," Euronews, December 15, 2025, and "'AI slop': Macquarie Dictionary's Word of the Year is a sad reflection of modern anxieties," Euronews, November 26, 2025.

  5. "What the Rise of AI Slop Means for Marketers," Meltwater, November 27, 2025, as cited in "2025 was the year AI slop went mainstream. Is the internet ready to grow up now?," Euronews, December 28, 2025.

  6. Jose Luis Paredes et al., "More Articles Are Now Created by AI Than Humans," Graphite, as cited in "2025 was the year AI slop went mainstream. Is the internet ready to grow up now?," Euronews, December 28, 2025.

  7. Milena Merten, "Mitarbeiter verschwenden Stunden mit 'KI-Schrott'," Handelsblatt, January 9, 2026.

  8. Miles Klee, "AI Chatbots Are Poisoning Research Archives With Fake Citations," Rolling Stone, 17 December 2025. 

  9. AG Köln, Beschluss vom 02.07.2025 – 312 F 130/25, discussed in "Blamage vor Gericht: Wenn die KI „halluziniert“ – und der Anwalt den Schriftsatz nicht prüft," Kanzlei Update, July 17, 2025.

  10. OLG Celle, Beschluss vom 29.04.2025 – 5 U 1/25, discussed in "KI-generierte Schriftsätze, Fehlzitate und unsubstantiiertes Vorbringen," Dr. jur. Jens Usebach LL.M., September 3, 2025. 

  11. Markus Hartung, "Halluzinationen in Schriftsätzen," Anwaltsblatt, 2025. 

  12. "AI Hallucination Cases Database," Damien Charlotin, accessed January 2026. 

  13. Eleanor Tyler, "ANALYSIS: AI Slop Will Spread Until Courts Address Incentives," Bloomberg Law, July 29, 2025. 

  14. Wadsworth v. Walmart Inc., 348 F.R.D. 489 (D. Wyo. 2025).

  15. The King (on the application of Ayinde) v London Borough of Haringey and Al-Haroun v Qatar National Bank [2025] EWHC 1383 (Admin), judgment delivered 6 June 2025. 

  16. Dr. jur. Jens Usebach, "KI-generierte Schriftsätze, Fehlzitate und unsubstantiiertes Vorbringen," September 3, 2025.

  17. "SN 32/25: Einsatz von KI in der Anwaltschaft," Deutscher Anwaltverein, July 9, 2025.