Weekly Briefing: Elon Musk v. Altman in Court, Meta Ads Scam, Cyber-slavery in Cambodia, Attorney in Trouble for Using Fake AI Sources
Newsletter Edition 94: Musk v Altman trial, Meta scam ads lawsuit, South Africa’s AI policy failure, Cambodia cybercrime law, and landmark AI hallucination sanctions in the US.
This briefing examines the Musk v Altman dispute over AI governance and organisational restructuring; regulatory failures exposed by South Africa’s withdrawal of its AI policy; Cambodia’s aggressive cybercrime response to cyber-slavery; and rising sanctions for AI-generated legal errors, revealing systemic weaknesses in oversight, incentives, and professional responsibility.
Newsletter Edition 94
🔥 This edition covers important technology law news and updates. Keep reading for free courses and events, as well as a list of remote tech lawyer jobs.
This week in Technology Law:
1. Musk v. Altman: The trial pits Elon Musk against OpenAI’s board over whether OpenAI breached its charitable charter when it pivoted to a for‑profit model. The case examines Elon Musk’s donations, his knowledge of the restructuring and the future of OpenAI’s ownership.
2. South Africa withdraws AI policy after fake AI-generated sources exposed: South Africa withdrew its draft national artificial intelligence policy after discovering that it contained fabricated academic references generated by AI tools.
3. Cambodia passes landmark cyberlaw targeting scam compounds and cybercrime networks: Cambodia has enacted a major cybercrime law aimed at dismantling scam compounds linked to human trafficking and online fraud.
4. Couvrette v. Wisnovsky: An Oregon judge imposed more than $110,000 in sanctions after lawyers filed briefs containing 15 fake cases and eight fabricated quotations, making it one of the worst cases of AI hallucination ever seen in a US court.
Lead Story
Elon Musk v. Altman: the case so far
The courtroom battle between Elon Musk and OpenAI’s co‑founder, Sam Altman, has captivated observers of AI and corporate governance. Musk, who invested tens of millions of dollars in the early days of OpenAI, claims that the organisation betrayed its charitable mission by spinning out a for‑profit arm and selling equity to investors.
At stake is not only the fate of a generative‑AI powerhouse, but also questions about donor intent and the governance of technology nonprofits.
Background and legal claims
Musk co‑founded OpenAI in 2015 as a nonprofit research lab, pledging to develop artificial intelligence for the benefit of humanity. He and other donors, including Peter Thiel and Reid Hoffman, contributed funds with the understanding that OpenAI would remain non‑profit and open.
As computing demands grew, the board created a for‑profit subsidiary in 2019 and allowed investors to purchase capped equity. The restructure attracted billions of dollars from Microsoft and other investors, enabling OpenAI to build large language models such as GPT‑4.
Musk alleges the shift to a for‑profit entity violated the original charter and seeks Altman’s removal and the unwinding of the structure. OpenAI counters that Musk knew about the plan and that the restructure was necessary to fund expensive research.
What are the economic stakes?
The case could affect OpenAI’s valuation, which climbed to roughly $80 billion after recent funding rounds. A judgment in Musk’s favour could force the organisation to revert to a nonprofit model or return donor funds, potentially derailing its initial public offering.
The cost of training cutting‑edge AI models can exceed hundreds of millions of dollars, making purely charitable funding impractical. The case reinforces the tension between mission and commercialisation.
While Elon Musk contends that AI safety requires nonprofit stewardship, Altman argues that the investment structure allowed them to compete with corporations such as Google and Meta.
What happened in court?
During the first week of the trial, the judge repeatedly emphasised that the case was about alleged deception. Elon Musk’s team attempted to argue that unrestricted AI posed an existential threat, but the judge directed them to focus on whether Musk had been misled about the restructure.
Musk testified that he discovered the for‑profit arm only after the fact and that he never consented to it.
Under cross‑examination, however, lawyers presented emails suggesting he was aware of the new structure. Musk also admitted that his new company, xAI, uses OpenAI models, and described it as socially beneficial.
The defence argued that Musk left the board before the restructure and that he previously praised hybrid models similar to those adopted by competitor Anthropic.
Wider implications
The trial has become a proxy battle over AI governance. Some worry that a sweeping verdict could discourage philanthropy in cutting‑edge science and technology if donors can later unwind corporate decisions.
On the other hand, nonprofit advocates argue that donors have a right to ensure their contributions are used as promised.
The case also surfaces conflicts of interest: Elon Musk now heads xAI, a competing AI company, and has indicated he would invest personally if OpenAI remains nonprofit.
This trial could open the door to similar claims by other donors and test whether creative corporate structures, such as capped‑profit entities, can satisfy charitable obligations.
What happens next?
The trial is expected to last several weeks or months, with testimony from Altman, board members, and experts on charitable law. Even if Musk prevails, the court is unlikely to unwind the entire restructuring, as investors, including Microsoft, may negotiate settlements.
If Altman’s camp wins, it could strengthen hybrid models for AI research and solidify OpenAI’s path to a public offering.
Regardless of the outcome, the case will shape conversations about governance of AI labs and the responsibilities of tech philanthropists.
Technology Law Tracker: Other Significant Updates
1. Meta: a playground for scammers?
A class‑action complaint filed by the Consumer Federation of America (CFA) accuses Meta Platforms of turning its social media advertising system into what plaintiffs call a “pillar of the global fraud economy.”
The lawsuit, filed under Washington, D.C.’s consumer protection law, alleges that between 2021 and 2023, consumers lost at least $2.7 billion to scams on platforms like Facebook and Instagram.
Internal documents cited in the complaint purportedly show that Meta managers knew that a significant percentage of advertising revenue came from scam or high‑risk advertisers and that the company charged those advertisers higher rates rather than banning them.
The CFA claims Meta calculated that more than one-third of scam victims interacted with ads on its platforms and that the company collected around $16 billion from scam and banned‑goods ads during the period.
The lawsuit alleges that Meta publicly promoted its anti‑scam efforts while privately deeming certain advertisers “high value accounts” and granting them special leniency; such accounts could accumulate hundreds of strikes before being removed, whereas small advertisers were blocked after eight violations.
The plaintiffs argue that Meta’s advertising policies created a perverse incentive by profiting from fraud while diminishing consumer trust.
They seek damages, punitive damages, and an injunction to reform the company’s advertising practices. Meta has not publicly commented on the details, but the case highlights the legal risks for online platforms whose revenue models rely heavily on targeted ads.
Regulators and consumer advocates view the complaint as a test of how consumer protection laws apply to digital advertising and of social media companies' responsibilities to police fraud.
2. South Africa withdraws its AI policy over AI-generated content
South Africa’s government withdrew its draft national AI policy after journalists discovered that parts of the document were generated by AI and cited fake research.
The draft AI Policy included references to nonexistent journals and misattributed articles, revealing a failure of human oversight.
Communications minister Solly Malatsi acknowledged that unverified citations compromised the policy’s integrity and said that those responsible would face consequences.
The AI policy aimed to establish a National AI Commission, an ethics board and a regulatory authority, while offering tax incentives and grants to promote AI development. However, the reliance on AI‑generated text raised questions about authenticity and transparency.
The incident is worrying: it exposes how easily generative AI can produce plausible but false references, and why governments must verify sources before publishing.
Policy development agencies should emphasise information integrity, accountability, and human oversight. AI hallucinations can quickly undermine trust in governance, and the episode highlights the risk of embedding errors into future regulations.
The government has promised to revise the policy with proper vetting, setting an important lesson for policymakers: AI tools can assist research, but cannot replace careful human scrutiny.
3. Cambodia targets scam compounds with new cybercrime law
Cambodia’s parliament approved a cybercrime law aimed at dismantling scam compounds that have enslaved tens of thousands of people and lured victims worldwide with romance scams and bogus cryptocurrency schemes.
Authorities estimate that up to 150,000 people have been forced to work in these compounds, which are often run by transnational organised crime syndicates.
The new law makes online scams punishable by prison terms of two to five years and fines up to $125,000; for operations involving multiple victims or gangs, the penalties rise to 10 years and $250,000; ringleaders who traffic or torture workers face up to 20 years and fines up to $500,000.
The U.S. Department of Justice reports that scam centers across Southeast Asia coerce workers into conducting fake investment schemes, often operating through encrypted messaging apps and websites.
The DOJ’s Scam Center Strike Force recently charged Chinese nationals who managed a cryptocurrency fraud compound and seized a Telegram channel that recruited victims. U.S. officials estimate the strike force has frozen more than $700 million in funds linked to scams.
Cambodia’s law complements these international efforts by criminalising scam operations, providing legal tools to prosecute traffickers, and mandating support for victims. The legislation may deter abuses and help restore Cambodia’s reputation, though effective enforcement and cooperation with foreign partners will be essential.
Case of the Week
Litigants: Couvrette v. Wisnovsky et al
Citation: No. 1:2021cv00157 - Document 227 (D. Or. 2026)
Background
Couvrette v Wisnovsky originated as a dispute among members of an Oregon family that owns Valley View Winery. The plaintiffs accused their relatives of breaching fiduciary duties and mismanaging trust assets.
As litigation progressed, plaintiffs’ counsel used generative‑AI tools to draft three summary judgment briefs. These filings included 15 nonexistent case citations and eight fabricated quotations, leading the court to sanction the lawyers for misusing AI.
Sanctions and fee awards
In a December 2025 order, U.S. Magistrate Judge Clarke held that the misrepresentations amounted to a failure to comply with the court’s local rules and the attorneys’ duty of candor.
The judge imposed a $15,500 penalty on pro hac vice counsel Michael Brigandi for the false citations and ordered defendants to file a motion detailing reasonable attorney’s fees and costs.
Clarke emphasised that lawyers cannot delegate their professional responsibility to verify the accuracy of citations to artificial intelligence and described the errors as part of a pattern of disregard for procedural rules. The court also vacated the withdrawal of local counsel Michael Murphy and instructed him to show cause why he should not be sanctioned.
In March 2026, the court granted the defendants’ motion for fees and costs, awarding approximately $94,704 in attorney fees and costs: $80,498.72 against Brigandi and $14,205.66 against Murphy.
The order further required Murphy to file the opinion with any future pro hac vice motions and noted that he failed to meaningfully participate as local counsel.
Combined with the earlier penalty, the monetary sanctions exceed $110,000, making this one of the most significant AI hallucination sanction orders in U.S. courts.
Legal issue
The key legal question was whether the attorneys violated professional rules by submitting briefs with fabricated authorities and whether local counsel sufficiently supervised pro hac vice counsel.
The court held that attorneys cannot rely on AI tools without verifying their output and that local counsel must review all filings prepared by out‑of‑state lawyers. The court treated the attempt to file corrected briefs as part of the misconduct, not a cure, and dismissed the plaintiffs’ claims with prejudice.
The significance of this case
Couvrette v Wisnovsky exposes the ethical risks of generative AI in legal practice. Courts are increasingly sanctioning lawyers for citing hallucinated cases, reminding practitioners that AI may assist research but cannot replace professional judgment.
The case also highlights local counsel's responsibility to supervise filings; they cannot serve merely as names on a docket.
The significant monetary sanctions show that courts will impose meaningful penalties to deter misuse and protect the integrity of judicial proceedings. AI‑assisted tools will become commonplace; lawyers must implement robust verification processes and maintain accountability to avoid similar embarrassing outcomes.
Other Developments in Technology Law
1. A decade of the GDPR
Last week marked the 10th anniversary of the General Data Protection Regulation (GDPR), adopted on 27 April 2016. The GDPR codified clear rights for individuals, such as access, erasure, and data portability, and established the European Data Protection Board (EDPB) to coordinate cross-border enforcement.
Over the past ten years, national data protection authorities have used the regulation’s “one‑stop shop” mechanism to cooperate on large‑scale cases. The EDPB notes that the GDPR now interacts with the Digital Services Act, Digital Markets Act and AI Act, providing a foundation for a broader digital regulatory ecosystem.
2. Microsoft’s Legal Agent enters Word
On 30 April 2026, Microsoft introduced a “Legal Agent” for Word through its Frontier early‑access program. Unlike general chatbots, the Legal Agent follows structured workflows tailored to how lawyers work.
Microsoft explains that legal documents require precise clause analysis, redlining and adherence to internal policies. To meet these needs, the agent uses deterministic insertion layers rather than free‑form text generation.
It can review contracts against playbooks, suggest redlines with tracked changes and provide citations for each recommendation. The legal agent operates within the Microsoft 365 security and compliance framework.
3. Swiss competition authority investigates search advertising pacts
The Swiss Competition Commission (ComCo) opened two investigations on 30 April 2026 into potential coordination in search engine advertising. The probe examines whether travel companies and online casino operators agreed not to bid on each other’s brand names in keyword auctions on platforms like Google and Bing.
Such agreements could distort competition by limiting the visibility of competitors’ offers and making it harder for consumers to compare prices. Reuters reports that ComCo is questioning three travel firms and nearly all online casinos in Switzerland and will consult search engines as part of the investigation. If ComCo finds an unlawful agreement, companies could face fines and injunctive orders.
4. Crime and Policing Act 2026 brings digital protections in the UK
The United Kingdom’s Crime and Policing Act 2026 has received Royal Assent and introduces a suite of measures to address digital harms and modern crimes.
Among its provisions, the Act criminalises the manufacture and distribution of “nudification tools” that use AI to simulate clothing removal from images.
It makes it an offence to take or share screenshots of intimate images without consent and requires online platforms to remove non‑consensual intimate images within 48 hours of notification. Courts can order deletion of such images.
The Crime and Policing Act also extends offences related to paedophile manuals to include instructions on generating child sexual abuse material with AI and holds website moderators and administrators liable for hosting this content.
Beyond digital harms, the Act gives police new powers to compel drug testing and seize stolen goods via warrantless entry and expands data‑sharing with the Driver and Vehicle Licensing Agency.
It introduces “respect orders” to tackle antisocial behaviour and creates specific offences for assaulting retail workers.
Latest Opportunities
Jobs, conferences, fellowships, and calls for papers
1. Legal Expert (Remote UK): A remote‑based legal expert role advertised by a UK consultancy seeks professionals with expertise in technology law and digital regulation (find out more).
2. Junior/Mid AI Legal Specialist, EverAI (Remote UK): This role is aimed at lawyers or legal specialists interested in AI governance and product support (find out more).
3. AdTech Lawyer, Axiom (Remote US): Axiom is hiring an AdTech Lawyer to advise on digital advertising agreements, data sharing, campaign data flows, and marketing compliance (find out more).
4. Smart Contract Legal Architect, Loti AI (Remote US): Loti AI is hiring for a legal engineering role focused on likeness rights, on-chain rights enforcement, Solidity, and digital identity. This may interest lawyers who combine intellectual property, technology law, and smart contract literacy (find out more).
5. Business Teacher, Postsecondary AI Training, Alignerr (Remote Australia): This opportunity appears to involve teaching business topics for AI training, with connections to related disciplines such as privacy and data protection (find out more).
6. Lawyer, Global Licensing, Coins.ph (Remote Hong Kong): Coins.ph is hiring a lawyer to lead licensing work across payments, remittance, e-money, and virtual asset services. The listing seeks experience with financial regulation, licensing applications, regulators, external counsel, and multiple jurisdictions (find out more).
7. Call for papers on Quantum Technology and Law, Leiden Law School: Leiden Law School invites chapters for an edited volume on quantum technology and law. Topics include legal, ethical, social, and regulatory issues linked to quantum technology. Abstracts should be 150 to 250 words, and chapters around 8,000 to 10,000 words (find out more).
8. Free Microsoft Virtual Training Day: Introduction to Azure (18 May 2026): Microsoft’s free virtual event offers a foundational overview of cloud concepts, core Azure services across compute, networking, storage and identity and demonstrates AI tools. Participants who complete the session receive a 50 percent discount on the Azure Fundamentals certification exam (find out more).
9. AI, Justice and the Rule of Law (Free Course): UNESCO and the University of Oxford are offering a free self‑paced online course launched on 27 April 2026 to equip judges, lawyers and students with an understanding of how AI interacts with human rights and legal reasoning (find out more).
This newsletter briefing highlights a world where law struggles to keep pace with technological ingenuity. From billion‑dollar disputes over AI governance to class actions against social platforms and sanctions for AI‑generated errors, courts and regulators are grappling with new challenges.
Responsible development demands integrity, transparency and deliberate oversight.
Post your comments to join the conversation.
Disclaimer
This newsletter is provided for informational and educational purposes only. It does not constitute legal advice, and readers should not rely on it as a substitute for professional legal counsel. The views expressed are those of the authors alone.