Case Report: A Lawyer Cited Five Fake AI-Generated Cases (Ayinde v London Borough of Haringey)
A shocking UK court case exposes how fake AI-generated legal cases slipped into court, raising serious ethical questions for the legal profession.
A recent English court case has sent shockwaves through the legal world after a barrister submitted legal arguments based on completely made-up cases, likely generated by artificial intelligence. What unfolded was a gripping lesson in digital overreach, professional responsibility, and the real-world risks of relying on AI without fact-checking.
🏛️ Court: King’s Bench Division, High Court (Administrative Court)
🗓️ Judgment Date: 30 April 2025
🗂️ Case Number: [2025] EWHC 1040 (Admin)
The Misuse of AI in Legal Proceedings
In the Ayinde v London Borough of Haringey case, a major legal issue arose not from the substance of the homelessness claim, but from how the claimant’s legal representatives presented their arguments: specifically, their use (or misuse) of artificial intelligence (AI) to reference legal cases.
This spotlighted a troubling and increasingly relevant issue in the legal profession: uncritical reliance on AI tools when drafting legal documents.
At the core of the controversy were five entirely fake legal cases cited in the claimant’s written arguments.
These cases were presented as if they were real High Court or Court of Appeal decisions, complete with legal principles supposedly extracted from them.
The five fictitious cases generated by AI, submitted to the court by the claimant’s legal team, were:
R (on the application of El Gendi) v. Camden London Borough Council [2020] EWHC 2435 (Admin)
R (on the application of Ibrahim) v. Waltham Forest LBC [2019] EWHC 1873 (Admin)
R (on the application of H) v. Ealing London Borough Council [2021] EWHC 939 (Admin)
R (on the application of KN) v. Barnet LBC [2020] EWHC 1066 (Admin)
R (on the application of Balogun) v. London Borough of Lambeth [2020] EWCA Civ 1442
When opposing counsel checked these five cases, none of them existed. 😳📚
This was not just a minor mistake.
Submitting fake case law, knowingly or carelessly, can mislead the court, undermine justice, and violate a lawyer’s ethical duties.
While the barrister claimed the cases had been “dragged and dropped” from a personal archive, the court found this explanation unconvincing. The judge could not confirm whether ChatGPT or another AI tool had been used to generate these citations, but considered it a strong possibility.
Why is this a big deal?
Because AI tools, while powerful, can “hallucinate”, meaning they sometimes invent facts, quotes, or in this case, entire legal decisions. Lawyers are expected to verify everything they submit, and failure to do so not only damages the credibility of the profession but also wastes the court's time and resources.
In this case, it led to a formal "wasted costs" order: the lawyers had to personally pay £4,000 in penalties and were referred to their professional regulators for possible disciplinary action.
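To make that verification duty concrete, here is a minimal, purely illustrative Python sketch of a "check every citation before filing" step. Everything in it is an assumption for the example, not anything used in this case: the file names (draft.txt, verified_authorities.txt), the narrow neutral-citation pattern, and the idea of keeping a local list of authorities the drafter has personally confirmed against an official report or database. A real workflow would check each authority against an authoritative source rather than a local list; the point is only that the check must happen before anything reaches the court.

```python
# Hypothetical sketch of a "verify before you file" check, not a real firm workflow.
# It extracts England & Wales neutral citations (e.g. "[2020] EWHC 2435 (Admin)",
# "[2020] EWCA Civ 1442") from a draft pleading and flags any citation that does
# not appear in verified_authorities.txt, an assumed local list of citations the
# drafter has personally confirmed against an official source.

import re
import sys

# Matches common neutral citation forms such as "[2020] EWHC 2435 (Admin)" and
# "[2020] EWCA Civ 1442". Real citation formats are more varied; this pattern is
# deliberately narrow for illustration only.
CITATION = re.compile(r"\[\d{4}\]\s+EW(?:HC\s+\d+\s+\(\w+\)|CA\s+(?:Civ|Crim)\s+\d+)")


def load_verified(path: str) -> set[str]:
    """Load citations already confirmed by a human, one per line."""
    with open(path, encoding="utf-8") as f:
        return {line.strip() for line in f if line.strip()}


def main(draft_path: str, verified_path: str) -> int:
    with open(draft_path, encoding="utf-8") as f:
        draft = f.read()

    verified = load_verified(verified_path)
    cited = set(CITATION.findall(draft))
    unverified = sorted(cited - verified)

    if unverified:
        print("Citations not yet verified against an official source:")
        for citation in unverified:
            print(f"  {citation}")
        return 1  # non-zero exit: do not file until every citation is confirmed

    print("All extracted citations appear in the verified list.")
    return 0


if __name__ == "__main__":
    sys.exit(main(sys.argv[1], sys.argv[2]))
```

Run as, for example, python check_citations.py draft.txt verified_authorities.txt (hypothetical file names); the script exits with an error if any cited authority has not yet been checked by a person.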
The Facts and the Significant Risks of Using AI in Legal Proceedings ⚠️
One of the most cautionary aspects of the case centred on the apparent use, or misuse, of artificial intelligence by the claimant’s legal representatives. While the case itself was concerned with a vulnerable individual's right to emergency housing, the real storm unfolded over how the legal arguments were presented, particularly through the inclusion of fictitious legal citations.
During the course of the proceedings, the claimant’s barrister, Sarah Forey, submitted a legal document, a statement of facts and grounds, that cited five cases which were later discovered to be entirely fabricated. These cases were referenced to support various legal arguments, complete with case names, court citations, and supposed legal principles.
When the defendant’s legal team sought copies of these authorities, they were not provided. Upon further investigation, it became clear that the cases did not exist in any legal database or official record. 😮
The court, led by Mr Justice Ritchie, was deeply concerned about this revelation. When questioned, Ms Forey explained that she had compiled the citations from a digital or handwritten list she had created over time.
She mentioned a method involving a “drag and drop” from her personal archive. However, this explanation failed to convince the judge, who concluded that these were not simple citation errors but serious professional failings. The court stopped short of formally determining that artificial intelligence had been used to generate the content, but strongly implied that it may have been a source.
This aspect of the case has drawn attention because it speaks to a growing issue in legal practice: the use of AI tools to aid in drafting and research.
AI, particularly large language models, can generate text that appears legally plausible but is not always factually accurate. These so-called "hallucinations" can include entirely invented cases or legal principles, which, if used uncritically, can mislead the court and undermine the integrity of proceedings.
In this case, the judge ultimately imposed financial penalties and referred both the solicitor and barrister to their professional regulatory bodies.
Legal Principles in Ayinde v London Borough of Haringey
In the judgment of Ayinde v London Borough of Haringey, the legal reasoning concerning the potential use of AI by legal professionals was not only significant but also deeply instructive for the legal community.
Mr Justice Ritchie did not definitively conclude that AI had been used to generate the fictitious case citations, but the nature of the errors strongly suggested that a generative AI tool, such as a large language model, may have played a role.
What mattered most to the court was not whether AI had been used, but rather the failure of the legal representatives to ensure the accuracy and legitimacy of the material submitted to the court. 🤔
The court’s reasoning began by recognising the serious nature of the issue before it. The claimant's legal team, particularly barrister Sarah Forey and Haringey Law Centre, had filed a statement of facts and grounds containing references to five completely non-existent legal cases.
These references were not minor errors or misquotes; they were elaborate inventions, presented with fabricated case names, citation numbers, and summaries of legal principles. They were passed off as real authorities, supposedly supporting the claimant’s arguments in a judicial review regarding the provision of housing to a homeless individual.
The judge found this conduct deeply troubling.
It was not the use of AI itself, if indeed it was used, that formed the basis of the court's condemnation, but rather the absence of any meaningful verification or accountability.
The court stated clearly that it was improper, unreasonable, and possibly negligent for a legal professional to include fictitious cases in a pleading, regardless of how they were created. If, as was implied, the barrister had used an AI tool to assist with drafting, then she bore the responsibility to independently verify any references it produced. This duty of care, the court reasoned, is fundamental to the practice of law.
Mr Justice Ritchie observed that the barrister's explanations were vague and unconvincing. She claimed the citations had been copied from a digital or handwritten list she maintained, and suggested a “drag and drop” process from that archive.
However, the judge rejected these accounts, concluding that it was implausible to maintain a professional reference list containing fabricated cases. He reasoned that no reasonable barrister could claim to have photocopied or catalogued a non-existent case.
The judgment explicitly described this defence as lacking credibility, and noted that no written explanation or correction was ever provided, even after the errors were raised by opposing counsel. 📑🧑‍⚖️
The legal reasoning then moved to assess professional conduct standards. The judge applied the test for a wasted costs order, which requires showing that a legal representative has acted improperly, unreasonably, or negligently, and that their conduct has caused unnecessary cost or prejudice. All three elements were found to be satisfied.
The barrister and solicitor were held jointly liable for the errors because both had a duty to ensure the accuracy of the pleadings. Moreover, the initial refusal to admit fault, the attempt to minimise the errors as “cosmetic,” and the failure to issue a prompt correction, compounded the misconduct.
Importantly, the judgment underlined the danger of uncritical use of emerging technologies such as AI in legal practice. While the court stopped short of condemning the use of AI per se, it was emphatic that technology cannot replace the human responsibility of verification.
A legal professional may use tools to assist in research or drafting, but those tools must not be relied upon blindly. If AI produces content, it must be treated as a draft to be carefully reviewed, not as a finished product to be submitted to the court.
The court’s reasoning highlighted that the integrity of the judicial process depends on trust in the truthfulness and accuracy of legal submissions. Failing to uphold this standard undermines not only the particular case but also the credibility of the legal profession as a whole.
The judge’s decision to impose a financial penalty and refer the legal representatives to their respective professional regulators was rooted in this fundamental principle: justice must be based on facts, not fabrications, whether generated by a human or a machine.
Thus, the case serves as a warning.
Artificial intelligence may become increasingly prevalent in legal work, but it can never absolve lawyers of their ethical and professional duties. In the end, it is not about whether AI was used, but about the choices that professionals make in trusting and presenting its output.
The court made clear that in the eyes of justice, accountability remains human.
This is actually not that shocking (though no less inexcusable). It’s so, so easy to generate enormous amounts of output with AI, and few people proofread it carefully. But I can’t believe, especially after the US example of the lawyer who submitted a brief based on a made-up case, that there still isn’t a dedicated associate whose sole job is to independently verify each and every case cited in legal pleadings. It’s really inexcusable.
The irony is that AI could be super useful for case summarisation and first drafts if used properly and appropriately. But somehow we keep seeing it misused in high-stakes scenarios like this one. Makes me think we are still in the “copy-paste without thinking” phase of adoption, even in law.