Yet another cautionary tale of AI misuse in legal proceedings has emerged.
The potential of generative AI tools in litigation is widely acknowledged, particularly when they are harnessed against closed data sets of reliable and verified material. However, it cannot be said often enough that AI is just a tool, and like any other tool it needs to be used properly. Relying solely on publicly available AI such as ChatGPT or Google Gemini for legal research carries inherent risks, as an increasing number of litigants and legal practitioners are discovering to their cost.
In Ayinde v London Borough of Haringey [2025] EWHC 1040 (Admin), the High Court underscored the dangers of relying on AI-generated content in legal proceedings, warning that putting misleading AI-generated submissions before the courts may amount to negligence. In this case, the Defendant sought relief from sanctions after failing to meet key procedural requirements, but the court found the breaches serious and unjustified and barred them from participating in the hearing. Although the main issue was resolved by consent, the court went on to consider a wasted costs application arising from the alleged misconduct of the Claimant’s legal team.
AI or not AI? That was the question
In its written submissions, the Claimant had cited five non-existent cases – fictional authorities woven amongst genuine ones. When the Defendant queried the citations prior to the hearing, the Claimant’s legal team responded that the erroneous citations could easily be explained, but subsequently refused to explain them.
The Defendant alleged that the Claimant’s legal team had used AI to draft their submissions. While the judge could not confirm this, he nonetheless held that it would be negligent for a legal professional to use AI and put the results into a pleading without checking them.
In an incisive judgment, Mr Justice Ritchie held that it was “wholly improper” to “put fake cases in a pleading” and that doing so “qualifies quite clearly as professional misconduct”.
AI and legal proceedings: ongoing concerns
This is not the first time a litigant has come unstuck when using AI in court proceedings – and it is very unlikely to be the last. There are examples across multiple jurisdictions, including:
Harber v Commissioners for HMRC [2023]: A litigant appeared to have relied on an AI tool to cite supportive case law, only to discover that the referenced cases were fictitious. The appeal was dismissed.
Mata v Avianca [2023]: A New York case in which lawyers used AI-generated case summaries and later doubled down by submitting fabricated judgments.[1]
Bradley & Chuang v Linda Frye-Chaikin [2025]: A case in the Grand Court of the Cayman Islands in which some of the defendant’s written submissions contained “a number of hallucinations and erroneous material”.
Olsen & Anor v Finansiel Stabilitet A/S [2025] EWHC 42 (KB): The judgment debtors, who were litigants in person, relied on case law that did not exist. The appellants claimed that the case had been provided by an unnamed German lawyer from whom they had received informal advice, asserting that they were entirely unaware of its inauthenticity. Whilst the judgment is silent on whether the fake case could have been an AI hallucination, the appellants “narrowly” avoided having a summons for contempt of court issued against them.
Crypto Open Patent Alliance v Craig Steven Wright: We recently covered the extensive litigation history of Dr Craig Wright, who has long claimed to be the creator of Bitcoin; that article is available here. In March 2025 he was ordered to pay indemnity costs to his opponents following an application for permission to appeal a judgment, partly for “improperly [using] AI to prepare his submissions”.[2] In that application, the court found, he had relied upon “fictitious authorities, which appear to be AI-generated hallucinations”[3].
Judicial Guidance on AI has been in place since 12 December 2023[4], warning judges to be vigilant about the use of AI tools in court. Our analysis of the guidance is available here.
A delicate balance
It is important for litigants to be aware of AI’s limitations. Whilst it can be a helpful tool, particularly when used to structure documents or create chronologies, it is not suitable for conducting legal research without meticulous checking for accuracy and completeness.
AI is also no substitute for professional judgment and diligence. Reliance on AI-generated content without proper verification may not only undermine a party’s case but may also amount to professional misconduct. Courts are, unsurprisingly, taking a zero-tolerance approach to fabricated authorities and misleading submissions. For legal practitioners and litigants, the message is unambiguous: AI may help draft, structure, or brainstorm, but it must never be allowed to hallucinate its way into the courtroom.
The challenge for the future is to harness AI’s benefits while continuing to uphold the integrity of the legal process. How AI is used in litigation moving forwards will depend partly on what the tools can do but, more importantly, on how responsibly litigants and their legal representatives use them.
[2] https://www.thelawyer.com/bitcoin-inventor-dispute-scientist-hit-with-225000-costs-order-over-improper-ai-use/
[3] Crypto Open Patent Alliance v Wright [2025] EWHC 1139 (Ch)