Technology Law Weekly Briefing: GPAIs in Litigation, AI Regulation in the EU, and Other Updates
Newsletter Issue 91: A collection of recent AI-related judgments, the use of GPAIs in litigation, and other updates
Your weekly briefing on technology law and regulation, AI governance, fintech policy, and career opportunities.
This week in Technology Law:
1. AI chats may become evidence in court: A New York federal ruling has put lawyers and clients on notice that conversations with AI systems may be discoverable in litigation, especially where users assume privacy that the law may not recognise.
2. The Federal Court of Australia has issued a Practice Note (GPN-AI): It permits AI to be used in litigation, but only within existing duties of accuracy, disclosure, confidentiality, and procedural fairness. The Practice Note neither bans AI nor romanticises it.
3. Case law: Nippon Life Insurance Company of America v. OpenAI Foundation puts the question of unauthorised practice of law by AI squarely before a federal court, at least at the pleading stage, and it does so in a way that could matter well beyond legal AI.
Lead Story
Your AI chats may become evidence in court!
Conversations with AI chatbots may not be treated like “confidential” conversations with a lawyer, doctor, or other protected professional. In United States v. Heppner, No. 25-cr-503 (JSR), a recent case that brought this issue into sharper view, U.S. District Judge Jed Rakoff ruled in February that Bradley Heppner, the former chair of GWG Holdings, had to hand over 31 documents generated with Anthropic’s Claude in the course of a criminal securities fraud case.
Heppner had used Claude to prepare reports on his case for his lawyers, but the court was not persuaded that those exchanges were protected.
Judge Rakoff’s reasoning is important because he addressed the issue directly. He did not treat AI systems as something unusual or outside existing legal rules. In a written order, the judge explained that conversations with an AI system such as Claude cannot be treated as confidential legal advice. In other words, a person using the chatbot does not have an attorney-client relationship with the AI company (Anthropic).
The judge also looked closely at Anthropic’s privacy policy. That policy explains that the platform may collect the questions users type into the system and the responses the system generates. It may also use that information to improve the model and may disclose it to third parties when required in disputes or legal proceedings.
Because the company reserves the right to access and share that information, the judge concluded that users cannot reasonably expect those conversations to remain private in the legal sense required for attorney-client privilege.
It was reported that on the same day as Rakoff’s ruling, U.S. Magistrate Judge Anthony Patti in Michigan took a different view in Warner v. Gilbarco, Inc., No. 2:24-cv-12333 (E.D. Mich. 2026), involving a self-represented litigant’s use of ChatGPT.
Patti treated those AI chats as part of the litigant’s own work product rather than a conversation with another person. In other words, chatbots are tools, not persons. That means some AI-generated material may still attract protection depending on who created it, why it was created, and how it was used.
Courts are not building a standalone doctrine of AI confidentiality all at once. They are doing something much more granular and much more consequential. They are applying existing rules on privilege, confidentiality, waiver, and work product to new patterns of behaviour.
Some uses of AI may be treated as internal drafting assistance. Other uses may look like disclosure to a third party. A great deal will depend on the system used, the provider’s terms, the purpose of the interaction, and the extent of lawyer involvement.
Implications for legal practice
Legal practice has already spent more than two years focusing on hallucinated citations and unreliable outputs. That concern remains genuine, but the more immediate risk may now be confidentiality. Material typed into a public or broadly accessible LLM may be discoverable later, and that risk applies long before a pleading is filed. Intake notes, litigation strategy, witness issues, settlement positions, and sensitive commercial facts can all become part of an evidential dispute if they are entered into the wrong system. Judge Rakoff’s order gives litigators a concrete reason to revisit internal AI policies, client onboarding language, and instructions on what may and may not be pasted into external LLM tools.
Regulators and courts
Judges are being pushed to decide whether public AI systems are more like notebooks, search engines, professional consultants, or communications intermediaries. Courts do not need a new AI statute to apply the rules; existing legal doctrine is enough to produce serious consequences. For instance, Australia’s new Federal Court practice note shows the same instinct from a different angle. It accepts that AI may assist in litigation, but insists that use of AI must not adversely affect the administration of justice, that lawyers remain responsible for legal authorities and evidence cited in submissions, and that disclosure is required in some evidential contexts.
A warning for corporate AI policies
Enterprise legal teams need to distinguish between public LLM tools, closed LLM tools, and private-internal systems. For instance, the Australian Note explicitly warns that generally accessible tools may expose information to other people, that users may not know where information is stored or how it is used, and that confidential, privileged, suppressed, or otherwise restricted material must not be entered into a system in a way that breaches those obligations. Corporate AI policies that say only “use approved tools” are not enough unless teams also understand what must never be entered into any LLM tool outside a controlled environment.
Many people use AI systems as if they were private advisers. That assumption is understandable because the interface feels conversational, personal, and responsive. Legally, that feeling may be meaningless. Employees, founders, claimants, defendants, and ordinary users may all reveal sensitive material to an AI system that does not provide them with the legal protection they believe they have.
The deeper point is that AI use is now starting to alter the evidence record itself. An AI chatbot can help draft a witness outline, summarise internal documents, suggest a line of argument, or produce a statement of claim. Each of those outputs can later matter in disclosure fights, sanctions disputes, or credibility challenges. That is why the discoverability question matters far beyond the privilege doctrine.
Courts are beginning to treat AI chats as legally consequential records, not private thought spaces, and that changes how lawyers, clients, and companies need to use these tools.
Can you sue an LLM? Read our newsletter on the legal challenges of suing ChatGPT and Gemini.
Key Regulation Tracker
1. Australian Federal Court issues AI practice note
The Federal Court of Australia’s new Generative AI Practice Note does not prohibit AI use. It does something more useful. It states that anyone using generative AI in connection with legal proceedings must understand its capabilities, limits, and risks, and that any use must not adversely affect the administration of justice. The Note also makes clear that the court may require disclosure of how AI was used in a proceeding.
The AI Practice Note also mentions that lawyers remain responsible for ensuring pleadings are supportable, authorities cited in submissions exist and support the stated proposition, and cited evidence exists and is likely to be admissible.
It also requires disclosure where AI has been used to summarise or analyse information relied on by a witness or expert, and warns against entering privileged, confidential, suppressed, or otherwise restricted information into generally accessible tools. Non-compliance may lead to adverse costs and other consequences. That is a clear framework other courts will be studying closely.
2. DOJ settles IBM case over anti-discrimination compliance certifications
The U.S. Department of Justice’s settlement agreement with IBM states that the United States contends IBM knowingly submitted false claims and knowingly made false statements in connection with federal contracts that incorporated anti-discrimination requirements, including Title VII requirements as reflected in federal procurement rules. The settlement amount is USD 17.1 million.
This agreement matters for tech companies because public contracting now carries heavy compliance exposure that goes well beyond cybersecurity representations or data handling promises.
Employment practices, internal governance, and certification processes can all become legal issues once they are baked into government contracts and digital service procurement. Compliance officers and legal teams should treat certification language as an operational commitment, not a box-ticking exercise.
Case of the Week
Case: Nippon Life Insurance Company of America v. OpenAI Foundation
Citation: No. 1:26-cv-02448 (N.D. Ill., filed Mar. 4, 2026)
Facts
The complaint, filed on 4 March 2026, was brought by Nippon Life Insurance Company of America against OpenAI Foundation and OpenAI Group PBC.
Nippon alleges that ChatGPT provided legal assistance to a user, Graciela Dela Torre, in connection with litigation and post-settlement filings, and that this assistance contributed to tortious interference with a settlement agreement, abuse of process, and the unlicensed practice of law under Illinois law. The complaint says ChatGPT gave legal advice, legal analysis, legal research, and drafting help for motions and requests for judicial notice.
Legal Issue
The central question in the case is whether an AI tool that generates legal guidance, analysis, research, and draft legal documents can be said to be practicing law without a license when it is offered to members of the public.
The complaint relied on Illinois authority stating that the practice of law is not limited to courtroom appearances and includes out-of-court legal services and advice. Nippon argued that ChatGPT is not admitted to practice law in Illinois or elsewhere, yet provides legal services to users who ask for them.
That issue sits at the intersection of several long-standing legal concerns: professional licensing, consumer protection, and court integrity, especially where AI-generated legal assistance may influence filings or judicial process without the controls applied to licensed practitioners.
The complaint also raised a crucial question that regulators have not resolved yet: when does general information become personalised legal assistance that the law is willing to regulate?
Judicial Decision
There has not yet been a merits decision. What exists at this stage is the complaint and the claims it advances. The case therefore matters as a filing to follow. The complaint seeks relief that includes an injunction against continuing to provide legal assistance and a finding of unlicensed practice of law, among other remedies.
Courts still have to decide questions of standing, causation, duty, statutory scope, and the practical meaning of “practice of law” in the context of software outputs. Even so, the filing is notable because it states the claim plainly and in terms that another plaintiff, regulator, or bar authority could adapt.
Why stakeholders must take note of this case
The case tests whether general-purpose AI systems can remain legally characterised as neutral tools when they are marketed and used in ways that resemble professional assistance. Liability, regulatory design, product warnings, and system limitations may all depend on where that line is drawn.
Financial services firms are already using AI in customer interaction, compliance triage, and document support. A court willing to look closely at AI outputs in the legal context may encourage similar scrutiny in regulated financial advice settings where licensing and consumer harm concerns are also acute. Of course, this case is not about financial advice, but the logic of role substitution matters across regulated services.
Platform operators often describe AI as assistance rather than advice. This case shows why that distinction will not always end the debate. Claimants will look at what the system actually does, how it is presented to users, and what consequences follow. Product labels alone may not carry much weight if the system is doing work users would usually associate with a licensed professional.
A licensed profession and a general-purpose LLM tool produce very different governance expectations. The former leans toward competence, supervision, ethics, and record-keeping; the latter leans toward consumer disclosures, safety testing, and platform moderation. Courts do not have to choose a full regulatory model in one case, but each procedural step is starting to affect the space companies have to design and market AI systems that touch professional decision-making.
Latest Policy Insights and Updates
1. The European Commission Launches Age Verification App to Protect Children Online
The European Commission’s new age verification app seeks to strengthen child safety and to support Member States in adopting stronger online age checks, without building a system that normalises unnecessary data collection.
The European Commission (EC) says the app is ready for deployment, will allow users to prove age when accessing platforms, can be set up using a passport or ID card, works on any device, is anonymous, and is fully open source. Some EU countries are already planning to integrate it into national digital identity wallets.
Age verification has often produced a false choice between child safety and privacy. The EC is trying to present a third route: age assurance with limited data exposure. Whether that works in practice will depend on design, uptake, interoperability, and whether platforms accept the tool as a sufficient compliance mechanism. The real policy issue is not whether age checks will exist, but whether they can exist without becoming an identity system for ordinary internet use.
2. New Technical Standards Planned for Autonomous Driving in China
China’s Ministry of Industry and Information Technology has closed consultation on draft mandatory national standards, including draft safety requirements for autonomous driving systems of intelligent and connected vehicles.
The ministry’s notice set 13 April 2026 as the deadline for comments, and the draft standard establishes technical and functional safety requirements for autonomous driving systems.
3. UK Competition Authority Begins Review of Paramount and Warner Bros Discovery Deal
In the United Kingdom, the Competition and Markets Authority has opened an invitation to comment on the anticipated acquisition involving Paramount and Warner Bros. Discovery. The comment period runs from 13 April to 27 April 2026. The CMA notes that the formal phase 1 investigation has not yet launched and that the invitation to comment is the first part of its information-gathering process.
Control of content libraries, streaming reach, advertising inventory, recommendation architecture, and audience data can all carry competition implications that spill well beyond traditional broadcasting. The point is not only who owns what studio, but how concentration affects distribution power in digital media markets that increasingly overlap with platform, advertising, and AI enabled content environments.
4. EU Develops Metadata Standards for the European Health Data Space
The European Commission has also closed a consultation on a draft implementing regulation on minimum metadata elements for the European Health Data Space, including interoperability requirements for dataset descriptions used in the secondary use of electronic health data. Digital Policy Alert says the consultation closed on 12 May 2026 and links the proposal to Article 77(1) of the EHDS Regulation 2025/327.
This is exactly the kind of measure that determines whether data policy succeeds or stalls. Secondary use of health data depends on discoverability, standardisation, interoperability, and trustworthy governance. Metadata rules can decide whether researchers, hospitals, regulators, and innovators can actually find and reuse datasets in legally compliant ways.
Other important developments
1. The EU proposes measures requiring Google to share certain search data with third party search engines under the Digital Markets Act: The European Commission says the draft measures cover ranking, query, click, and view data, with feedback open until 1 May (read here).
2. EU presses Meta over WhatsApp access for rival AI assistants: The European Commission intends to order Meta to reinstate rival AI assistants on WhatsApp after Meta imposed an access fee that regulators say may exclude competitors (read here).
3. European AI Office opens a targeted consultation on measuring the energy consumption and emissions of AI models and systems: The consultation runs from 7 April to 15 May 2026 (read here).
4. SEC staff issues a statement on certain crypto transaction interfaces: The Division of Trading and Markets says the statement sets out staff views on broker dealer registration requirements for some interfaces used to prepare transactions in crypto asset securities (read here).
5. FTC secures a USD 1.5 million settlement with Publishing.com: The FTC says the online self-publishing company misled consumers about likely earnings and must substantiate earnings claims going forward (read here).
6. EDPB advances new guidance on scientific research data processing: Its Guidelines 1/2026 on personal data for scientific research were adopted on 15 April 2026 in a version for public consultation (read here).
7. SEC’s 2025 enforcement results underline continued focus on emerging technology risk: The SEC highlighted the Cyber and Emerging Technologies Unit, which covers blockchain technology, AI, account takeovers, and cybersecurity (read here).
Latest opportunities
Jobs, conferences, fellowships, and calls for papers
1. DSIT Fellowship Cohort 4: Applications are open for 2026 to 2027 secondments in AI, data, economics, international work, horizon scanning, and science ecosystem roles. Deadline: 6 May 2026 (further details).
2. Allen Institute for AI, Legal Counsel: Applications are due by 30 April 2026. The role calls for technology transactions experience and familiarity with data governance, privacy law, and the EU AI Act (further details).
3. Benchling, Legal Counsel, Product and AI: Benchling is recruiting product and AI counsel in San Francisco to advise on AI governance, privacy, intellectual property, licensing, and product legal issues (further details).
4. Lecturer in Law and Technology, Aston University: Applicants are sought with expertise in areas such as AI and law, digital regulation, data governance, cyber law, platform regulation, or the legal implications of emerging technologies (further details).
5. GitLab, Legal Counsel, Commercial (Canada, US, Remote): The role includes negotiating complex commercial and technology agreements, including data privacy and AI related contracts (further details).
6. AIDA2J Workshop at ICAIL 2026: Papers, short papers, and demo proposals are invited for the Artificial Intelligence for Access to Justice workshop in Singapore. Submission deadline: 1 May 2026 (further details).
7. PLSC Europe 2026: KU Leuven’s Centre for IT and IP Law is hosting this privacy scholarship workshop in Leuven on 29 and 30 October 2026. Abstract deadline: 15 May 2026 (further details).
Disclaimer
This weekly briefing and newsletter is provided for informational and educational purposes only. It does not constitute legal advice, and readers should not rely on it as a substitute for professional legal counsel. The views expressed are those of the author and do not necessarily reflect the views of any affiliated institution.