6 Comments
Stephen Fitzpatrick

This is actually not that shocking (though no less inexcusable). It’s so, so easy to generate enormous amounts of output with AI, and few proofread carefully. But I can’t imagine, especially after the US example of the lawyer who submitted a brief based on a made-up case, that there isn’t by now a dedicated associate whose sole job is to independently verify each and every case cited in legal pleadings. It’s really inexcusable.

Tech Law Standard

You make a valid point, Steve. What is especially concerning in this English case is that the barrister not only included multiple fake citations but also failed to verify them even after they were challenged, then dismissed the issue as “cosmetic.” The judge rightly found this professionally unacceptable. Regardless of how the text was generated, AI or not, the duty to verify every cited authority remains a non-negotiable part of legal practice. We would be interested in reading about the US examples; we have seen a few as well, and it is quite a disturbing trend.

Stephen Fitzpatrick

In the US case, something somewhat similar happened and the judge was rightfully irate. I don't think they admitted it right off either, but how do you deny using AI when there is literally no case to point to? I recall the lawyer claimed he didn't know AI could make things up. That was in 2023, but it was still entirely ignorant if true.

Here is just one piece about it:

https://www.cnbc.com/2023/06/22/judge-sanctions-lawyers-whose-ai-written-filing-contained-fake-citations.html

I wrote about the hallucination issue in my most recent post.

https://fitzyhistory.substack.com/p/ais-two-weeks-of-reckoning

It's amazing to me this is still happening. I would think AI is great for some aspects of rote legal documents (in another life I practiced in a big firm litigation department), but to not check cases??? I'm curious: are there LLMs specifically tied into Lexis/Westlaw (or whatever UK database is used)? I would think that would cut down on some of the issues.

Tech Law Standard

You raise some sound points, Steve. The UK case mirrors the US cases you have kindly shared in many ways, especially the lawyer’s failure to verify and the strange reluctance to acknowledge AI involvement. What is striking is how preventable it was: a quick search of databases like Westlaw, Lexis, or BAILII would have flagged the errors instantly. Perhaps it is time for the databases to integrate LLMs and enable them to verify legal sources, as that would be a game-changer. There is "Lexis+ AI", but the extent to which it verifies AI-generated sources is still questionable. Until then, legal professionals need to treat AI outputs as unverified drafts, not trusted sources, and run every citation through a database before filing, as in the sketch below.
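
To make that concrete, here is a minimal sketch of what a pre-filing citation check could look like. Everything here is an assumption for illustration: the endpoint, the response shape, and the citation pattern are hypothetical, and a real integration would use whichever search API your database actually exposes.

import re
import requests

# Hypothetical search endpoint -- substitute the API of whatever database
# you actually have access to (Westlaw, Lexis, BAILII, etc.).
CASE_SEARCH_URL = "https://example-caselaw-db.org/search"

# Rough pattern for a UK neutral citation, e.g. [2023] EWHC 1234 (KB)
NEUTRAL_CITATION = re.compile(r"\[\d{4}\]\s+\w+\s+\d+(?:\s+\([A-Za-z]+\))?")

def unverified_citations(draft_text: str) -> list[str]:
    """Return every citation in the draft that the database cannot find."""
    missing = []
    for citation in NEUTRAL_CITATION.findall(draft_text):
        resp = requests.get(CASE_SEARCH_URL, params={"q": citation}, timeout=10)
        resp.raise_for_status()
        # Assumed response shape: {"results": [...]} -- empty means no match.
        if not resp.json().get("results"):
            missing.append(citation)
    return missing

draft = "... as held in [2023] EWHC 1234 (KB) and [2021] UKSC 99 ..."
for citation in unverified_citations(draft):
    print(f"UNVERIFIED: {citation} -- flag for human review before filing")

Even something this crude turns "the AI said so" into a concrete pass/fail check, and the lawyer, not the model, remains responsible for the final verification.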

Hannah P.

The irony is that AI could be genuinely useful for case summarisation and first drafts if used properly. But somehow we keep seeing it misused in high-stakes scenarios like this one. Makes me think we are still in the “copy-paste without thinking” phase of adoption, even in law.

Lyan T.

Never include AI-generated information in professional work that you have not verified yourself. It might seem like a harmless shortcut, but submitting fake or inaccurate details, especially to official bodies, can damage your credibility, cost you money, and even lead to professional sanctions. Always check your sources. Twice. ✅
