Tech Law Global Roundup – 2025.04.23 Edition
From Switzerland's AI governance and the EU's tech transfer overhaul, to Australia's regulatory clash with X Corp and China's anti-sanctions compliance challenges—global tech regulation intensifies.
Welcome to this edition of Technology Law Standard, where we take a critical look at the latest legal and regulatory developments shaping technology across the globe. This time, we unpack four recent developments:
Switzerland Joins the AI Governance Frontier: What the New AI Convention Means for Tech.
EU Rethinks Tech Transfer Rules and Why Data Licensing and IP Collaboration Matters.
Australia vs X Corp: Whose Rules Govern the Internet?
Making Sense of China's Anti-Foreign Sanctions Law: Strategic Dilemmas for Global Tech.
☕ Grab a cup of tea (or coffee) and let’s explore how these new regulations and policies in AI, data protection, cybersecurity, and digital markets might affect your business strategy, audits and compliance efforts.
🇨🇭 Switzerland Embraces Global AI Governance: Early Signatory of the AI Convention
Switzerland is making a bold play to stay ahead in tech policy – this time in the realm of Artificial Intelligence governance. The Swiss federal government (Federal Council) has decided to sign, and ultimately ratify, the Council of Europe’s Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law.
This new “AI Convention” is an intergovernmental treaty championed by the Council of Europe (CoE) to create the first international legal framework for AI. By signing on early (Switzerland formally signed it on 27 March 2025), Switzerland is signalling its commitment to shaping AI with an ethical compass – and it’s giving its companies and citizens a head start in adapting to upcoming rules.
What is the Council of Europe’s AI Convention? It’s officially the Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law, opened for signature in September 2024. Think of it as a cross-border agreement to ensure AI systems respect fundamental values. The Convention aims to set baseline standards for AI transparency, accountability, and non-discrimination among signatory countries.
Unlike the EU’s AI Act (which is binding law within the EU single market), the CoE Convention is broader – the CoE has 46 member states (essentially most of Europe), and non-member states can accede. Notably, non-European states like Canada and Japan, and even the EU as a whole, have been involved; Japan and Canada also signed in February 2025 as early adopters. So this is shaping up to be a global framework, or at least a transatlantic/Pacific one, not just a European one.
For Switzerland, a non-EU country that often aligns with European standards, ratifying the Convention fits its strategy of being proactive while keeping options open. On 12 February 2025, the Swiss Federal Council set out its approach to AI regulation, which centres on joining the Convention. It recognized that Switzerland’s existing laws aren’t fully equipped to handle AI’s challenges (like algorithmic transparency or bias).
By committing to the Convention, Switzerland is essentially agreeing to implement whatever obligations it entails – likely requiring AI impact assessments, transparency measures for high-risk AI, oversight bodies, and cooperation on AI research governance.
According to the Swiss government’s own statements, Swiss law will need to be adapted to meet the Convention’s requirements, and federal departments have been tasked with preparing draft legislation by the end of 2026. For the Swiss financial sector, healthcare, and other AI-heavy industries, new rules on how AI can be used (and audited) are on the horizon.
The Convention emphasizes human rights by design in AI and could ban certain harmful AI practices outright (for instance, a ban on AI systems with no human accountability, or those that perpetuate illegal discrimination).
One key aspect highlighted is transparency and non-discrimination in machine learning and AI development.
In practice, that could mean companies deploying AI in Switzerland might need to provide explanations for algorithmic decisions (at least for important decisions affecting people, like loan approvals or job screenings). They may also need to audit datasets for biases to ensure AI doesn’t treat people unfairly based on protected characteristics.
The Convention likely aligns with Europe’s values – e.g., it references rule of law, so expect a requirement that AI decisions be contestable and subject to human review in sensitive areas.
Switzerland being one of the first movers is strategic. It can influence the Convention’s implementation details and show leadership. It also pushes Swiss companies to adopt AI best practices early, which should keep them globally competitive on trustworthy AI.
From a business perspective, what does this mean? If you’re a tech company or startup in Switzerland (or plan to operate there):
Prepare for AI Regulations: While the EU’s AI Act gets more press (it was adopted in 2024, with its obligations phasing in over the next few years), Switzerland’s route via the Convention will result in similar rules. The Federal Council clearly sees gaps in current law and is moving to fill them. By 2026 or 2027, you’ll likely need to comply with specific requirements for AI systems. This could include registering high-risk AI applications, conducting risk assessments, ensuring human oversight, and possibly licensing or certification of certain AI systems (for example, AI in medical devices or finance might need government approvals). Start documenting your AI models’ development processes and decisions now – that documentation will help in compliance (a rough sketch of such a record appears after this list).
Ethical AI as a Competitive Edge: Swiss authorities frame this as future-proofing Swiss innovation. If you bake in non-discrimination and transparency in your AI products now, you’ll not only meet coming rules but also gain trust from users and clients. For instance, a Swiss AI startup that can say “our algorithms are audited for bias and explainable, in line with the new Convention standards” might have an edge in selling to government or large corporates concerned about ESG (Environmental, Social, Governance) criteria. Trustworthy AI is becoming a marketable quality.
Global Interoperability: By aligning with an international convention, Swiss companies’ AI products may more easily enter other markets that also follow these rules. Think of it like ISO standards but for AI governance. If Canada and Japan also ratify, a Swiss AI solution that meets the Convention’s bar might find a smoother reception in those countries. Conversely, companies from jurisdictions that don’t have equivalent standards might face scrutiny trying to sell into Switzerland once these rules are in place. So there’s a hint of a club of trust: join early and help set the rules, or join later and adapt to others’ rules.
AI and Human Rights: The explicit mention of human rights and democracy means certain AI uses could be off-limits. For example, an AI system that ranks citizens’ behaviour (a la social credit) or pervasive facial recognition surveillance might violate human rights principles and be banned or strictly limited. If your startup had plans for, say, an AI that scrapes social media to profile people extensively, you might need to reconsider if that profile could affect someone’s rights without due process. The Convention’s ethos is that AI should augment human rights, not erode them.
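To make the documentation point concrete, here is a minimal, hypothetical sketch in Python of the kind of per-system record you might start keeping today. The fields are our own guess at what Convention-style transparency and oversight rules could ask about – illustrative assumptions, not an official Swiss or Council of Europe template.

```python
# Hypothetical per-system AI record. The fields are illustrative guesses at what
# Convention-style transparency/oversight rules might ask about, not an official template.
from dataclasses import dataclass, field, asdict
import json

@dataclass
class AISystemRecord:
    name: str
    purpose: str                          # what decisions or outputs the system produces
    affects_individuals: bool             # does it influence decisions about people?
    training_data_sources: list = field(default_factory=list)
    bias_checks: list = field(default_factory=list)
    human_oversight: str = ""             # who can review or override the system's output
    contact_for_explanations: str = ""    # where affected persons can ask for reasons

record = AISystemRecord(
    name="cv-screening-v2",
    purpose="Ranks incoming job applications for recruiter review",
    affects_individuals=True,
    training_data_sources=["historical hiring outcomes 2018-2024 (anonymised)"],
    bias_checks=["selection-rate comparison by gender, run quarterly"],
    human_oversight="A recruiter confirms or overrides every rejection",
    contact_for_explanations="hr-ai-queries@example.com",
)
print(json.dumps(asdict(record), indent=2))
```

Even a lightweight record like this makes later audits, impact assessments, and explanation requests far easier to handle.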
Switzerland also benefits diplomatically. It’s often seen as a neutral broker, and by pushing the AI Convention it continues that role in the digital age. It may also subtly differentiate itself from the EU by saying: we’re not in the EU, but we adopt high standards too, through other means. (The EU itself also signed the Convention in 2024, but EU law – the AI Act – will be the main tool inside the EU.)
One should note that the Convention likely will establish some form of oversight committee where countries report progress. Switzerland will have to periodically show it’s adhering to what it signed. This adds accountability beyond just national politics – an international peer review element. Companies could even have opportunities to participate in consultations around these implementations, to ensure the rules are practical.
For AI developers, a concrete example: if you develop an AI system in Switzerland that helps with hiring by scanning CVs, under these emerging rules you might have to conduct a bias audit to ensure it isn’t unfairly excluding candidates of a certain gender or ethnicity, provide a way for rejected candidates to get an explanation or a human review, and register the system if it is deemed high-impact on people’s lives. It might seem onerous, but it aligns with global trends (even the U.S. is moving in this direction, with the Blueprint for an AI Bill of Rights and NYC’s bias audit law for hiring algorithms). So, aligning with the Convention could actually keep Swiss companies ahead of the curve globally.
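For teams wondering what such a bias audit might look like in practice, here is a minimal, hypothetical Python sketch. It compares selection rates across groups and applies the common four-fifths disparate-impact heuristic (the style of metric used under NYC’s bias audit law); the column names, toy data, and 0.8 threshold are illustrative assumptions, not requirements spelled out in the Convention.

```python
# Minimal, hypothetical bias-audit sketch for a CV-screening model.
# Column names ("gender", "selected"), the toy data, and the 0.8 threshold are
# illustrative assumptions; the AI Convention does not prescribe a specific metric.
import pandas as pd

def selection_rates(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Share of applicants in each group that the model marked as selected."""
    return df.groupby(group_col)[outcome_col].mean()

def impact_ratios(rates: pd.Series) -> pd.Series:
    """Each group's selection rate relative to the most-favoured group."""
    return rates / rates.max()

# Toy screening outcomes; in practice this would be the model's decisions
# on a representative sample of real or historical applicants.
results = pd.DataFrame({
    "gender":   ["f", "f", "f", "f", "m", "m", "m", "m"],
    "selected": [1,   0,   0,   1,   1,   1,   1,   0],
})

rates = selection_rates(results, "gender", "selected")
ratios = impact_ratios(rates)
print(rates, ratios, sep="\n")

# Flag groups falling below the common four-fifths (0.8) heuristic.
flagged = ratios[ratios < 0.8]
if not flagged.empty:
    print("Potential adverse impact against:", list(flagged.index))
```

A real audit would go further (larger samples, intersectional groups, statistical significance), but even this level of routine measurement generates the documentation regulators are likely to ask for.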
Also, by joining the Convention early, Switzerland can shape its implementation such that it dovetails with Swiss legal culture (e.g., leveraging existing institutions like their strong data protection authority or their human rights frameworks). They avoid being a passive rule-taker.
On the “power shifts” front, this is an interesting one: It shows middle powers and international bodies stepping up to regulate emerging tech, not leaving it solely to tech superpowers or big tech companies themselves. The Council of Europe (known for human rights treaties like the famous European Convention on Human Rights) is extending its influence to tech. Switzerland, in ratifying, helps legitimize that role.
For founders, it indicates that soft law is becoming hard law in AI – voluntary ethical principles (like OECD’s AI principles) are crystallizing into binding commitments. Your AI ethics team can’t be just a PR exercise; it needs to anticipate real legal requirements.
Finally, consider the optics: Swiss banking and pharma are huge; both sectors are deploying AI (for fraud detection, algorithmic trading, drug discovery, etc.). By adopting the Convention, Switzerland is assuring the world that AI used in its influential sectors will respect fundamental rights. This can be reassuring to foreign customers or partners. For instance, a foreign regulator might trust a Swiss bank’s AI-driven decisions more knowing Switzerland has these legal guardrails in place.
🇪🇺 Europe Reassesses Tech Transfer Antitrust Exemptions
Over in Europe, regulators are sharpening their pens to rewrite rules that have quietly underpinned tech innovation deals for years: the Technology Transfer Block Exemption Regulation (TTBER) and its accompanying Guidelines. If your startup licenses technology or IP – think patent licensing, sharing of technical know-how, or software SDK agreements – these rules matter. The European Commission is wrapping up a consultation on revising the TTBER, signalling potential changes by 2026, when the current regulation expires.
First, a bit of background: the TTBER (Regulation (EU) No 316/2014) provides a safe harbour under EU antitrust law for certain licensing agreements. In essence, if two companies enter into a technology licence (such as a patent licence or a software distribution agreement) and abide by certain conditions – market-share limits and no hardcore restrictions on competition – their deal is exempt from the usual ban on restrictive agreements in Article 101(1) TFEU.
The accompanying Guidelines help interpret grey areas. The idea is to promote R&D and innovation by giving legal certainty to collaboration, as long as it doesn’t severely harm competition. The current TTBER has been in force since 2014 and is due to lapse on 30 April 2026.
Why the fuss now? Because tech has evolved a lot in the past decade. The Commission launched a review in 2022 to evaluate whether the TTBER is still fit for purpose or needs tweaks. After gathering evidence (stakeholders filed feedback through July 2023), the Commission found the TTBER framework largely useful but with gaps. For example:
Overall Effectiveness: The evaluation confirmed that the TTBER meets its core goals – it block-exempts pro-competitive tech agreements and provides significant legal certainty to companies. In plain terms, businesses have been able to share technology under this safe harbour without constant fear of antitrust blowback. That is good for innovation, so expect the general mechanism to continue rather than be scrapped.
Coverage of Data and New Tech: A glaring gap identified is that the TTBER doesn’t explicitly cover data licensing or data sharing agreements. In today’s AI- and big-data-driven economy, licensing datasets or algorithms can be as crucial as licensing patents. Stakeholders flagged uncertainty over whether data falls under the TTBER at all, and there is no clear consensus on what types of data rights should be covered. The Commission is now pondering how (or whether) to extend the block exemption to certain data sharing arrangements. For founders, this could be big: clearer rules might ease collaborations on AI training data or allow firms to pool anonymized datasets without antitrust panic. Conversely, regulators will be careful – data exchanges can also be a way to collude or foreclose competition, so any extension will likely come with strings attached.
Market Share Threshold Woes: The TTBER only protects deals if the parties’ market shares stay under 20% (if they are competitors) or 30% (if non-competitors) in the relevant markets. The evaluation heard complaints that these thresholds are tricky to apply in fast-moving tech markets: defining the “market” for a novel technology can be fuzzy, and calculating shares in dynamic or nascent tech fields is difficult. The Commission itself noted difficulties in applying the market-share thresholds, particularly in technology markets. It might consider adjusting how market power is assessed or provide more guidance for cases like platform technologies or ecosystems, where traditional market share is a poor indicator. This could give growing startups more room to partner with bigger players on R&D without immediately breaching the limits (a simple illustration of how the current thresholds apply appears in the sketch after this list).
Clarity on New Forms of Agreements: Since 2014, we’ve seen a rise of things like open-source collaborations, patent pools, standard-essential patents (SEPs) licensing, and even AI research partnerships. While the evaluation found TTBER largely coherent with other recent rules (e.g. it lines up with the newer EU rules on R&D and specialization agreements from 2023, and the Vertical Block Exemption revised in 2022, with only minor differences in definitions), stakeholders likely pushed for the Guidelines to address emerging scenarios. For instance, licensing in the context of joint ventures or broader ecosystems might need specific mention. Also, the Commission noted the TTBER is “generally consistent” with the rest of the competition framework, but small tweaks (like aligning definitions of exclusive territories with the vertical rules) could improve consistency. Don’t be surprised if the new Guidelines explicitly discuss data, SEPs, or multi-party agreements like open source contributor licenses.
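As a rough illustration of how the current thresholds work, here is a small, hypothetical Python helper applying the 20%/30% ceilings to a licensing deal. The inputs and the yes/no outcome are deliberate simplifications – in practice the analysis also turns on market definition, hardcore restrictions, and excluded clauses, and the revised TTBER may change these numbers.

```python
# Hypothetical illustration of the current TTBER market-share safe harbour
# (Regulation (EU) No 316/2014): combined share of at most 20% for competitors,
# 30% each for non-competitors. Passing this check alone does NOT guarantee
# exemption - hardcore restrictions and market definition also matter.
from dataclasses import dataclass

@dataclass
class LicensingDeal:
    licensor_share: float    # share of the relevant market(s), as a fraction 0.0-1.0
    licensee_share: float
    parties_compete: bool    # are licensor and licensee actual or potential competitors?

def within_share_safe_harbour(deal: LicensingDeal) -> bool:
    if deal.parties_compete:
        # Competitors: combined share must not exceed 20%.
        return deal.licensor_share + deal.licensee_share <= 0.20
    # Non-competitors: each party's share must not exceed 30%.
    return max(deal.licensor_share, deal.licensee_share) <= 0.30

# A startup (5%) licensing its patents to a non-competing incumbent (25%): inside the harbour.
print(within_share_safe_harbour(LicensingDeal(0.05, 0.25, parties_compete=False)))  # True
# Two competitors at 12% + 10% exceed the 20% ceiling: outside the harbour.
print(within_share_safe_harbour(LicensingDeal(0.12, 0.10, parties_compete=True)))   # False
```

The evaluation’s point is precisely that assigning those share numbers is hard in nascent technology markets, which is why the Commission may rethink how market power is measured rather than just the percentages.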
So, what happens next? The Commission moved into an “impact assessment” phase in 2025 – it published a call for evidence and fresh consultation documents on 31 January 2025, with feedback due by 25 April 2025. This suggests it is now formulating concrete proposals for the revised TTBER and Guidelines. By late 2025 or 2026, we might see a draft of the new regulation.
The big question the Commission must decide: renew or revise? Letting TTBER expire is highly unlikely given the support for it. More likely, they will prolong it with amendments to address the gaps identified.
The Commission isn’t going to throw away a tool that stakeholders broadly like, but it also won’t ignore the calls to modernize it.
For businesses, especially startups and scale-ups in tech, we think the implications are likely to be twofold:
Continued Safe Harbour – but Check the New Terms: The comfort of having a block exemption for licensing deals will remain. If you’re negotiating tech transfer agreements (patent cross-licenses, software OEM deals, etc.) that facilitate innovation, you can likely keep doing so under a renewed TTBER. However, the detailed terms might change. For example, the last revision in 2014 made exclusive grant-back clauses (forcing licensees to give back improvements exclusively) ineligible for exemption – meaning contracts had to be adjusted. The upcoming revision could impose new conditions or lift some. Stay alert for changes around data sharing – if the new TTBER covers certain data licenses, it may also impose obligations to ensure that doesn’t become a sneaky way to exchange competitively sensitive info. Likewise, any tweak to market share thresholds or hardcore restrictions (like no-hire or non-compete clauses in licenses) will directly affect how you draft agreements. Get your legal team ready to parse the new rules by 2026.
A Signal on Data and IP Convergence: The fact that EU competition regulators are considering data explicitly in an IP licensing rulebook is telling. It shows an emerging consensus that data is a key asset in tech competition, not just an afterthought. Jurisdictionally, the EU is often first to regulate in tech (see GDPR, AI Act, DMA). Here, by potentially integrating data into TTBER, the EU could set a tone that other regions follow for antitrust and data intersection. Founders should interpret this as: if your business model relies on exchanging data with others (say training AI on partner data, or industry-wide data pools for research), pay attention to competition law just as much as data protection. Regulators are increasingly viewing those data arrangements through an antitrust lens (is it excluding others? could it facilitate collusion? etc.). Transparent, pro-competitive justifications and safeguards will be crucial.
Bottom line: The Tech Transfer rules are getting a 2020s refresh. While this might not grab headlines like a Big Tech fine, it’s hugely important for the plumbing of innovation in Europe.
If you are a startup forming partnerships to scale your tech or an investor valuing a startup’s IP strategy, keep an eye on the Commission’s next steps. The new TTBER (likely to come into force in 2026) will influence how freely companies can collaborate on technology in the EU and under what conditions.
We’ll know more once the Commission digests the consultation input. For now, take comfort that the safe harbour will persist, but be prepared to adapt to a more data-aware and future-proofed framework that mirrors today’s tech realities.
🇦🇺 Australia’s eSafety vs X: Jurisdiction in the Digital Wild West
In Australia, a legal showdown is highlighting the tension between national online safety laws and global tech platforms’ terms of service. The country’s eSafety Commissioner – a regulator championing user protection from harmful content – is squaring off against X Corp (formerly Twitter) in a case that asks: Whose rules reign supreme, the platform’s or the country’s?
This month, eSafety Commissioner Julie Inman Grant filed an appeal to the Federal Court challenging a decision related to X’s Terms of Service and violent content.
Here’s the backstory: In 2024, a horrific terrorist attack in Sydney led to an extremely violent video circulating on X (Twitter). The eSafety Commissioner, empowered by Australia’s Online Safety Act 2021, issued a “removal notice” to X, ordering it to take down or block access to that content – classified as “Class 1” extreme violence material. Initially, Australian courts backed eSafety with interim injunctions compelling X to hide the video. X did remove some content but also fought back legally, challenging the process.
Fast forward: X Corp brought a case to the Administrative Appeals Tribunal (AAT) – which reviews government decisions – arguing against eSafety’s actions. X’s argument boiled down to jurisdiction and procedure: essentially asserting that eSafety’s informal practice of “alerting” platforms to content that violates their own Terms of Service exceeded her authority or should be subject to review.
In a twist, the Tribunal (recently reconstituted as the Administrative Review Tribunal) decided it did have jurisdiction to scrutinize eSafety’s decision to merely notify X of a possible TOS breach. That was a win for X – it meant the platform could get a domestic legal forum to second-guess the regulator’s nudge.
But eSafety wasn’t having it. On 28 March 2025, the Commissioner announced she is appealing that Tribunal finding to the Federal Court. She believes the Tribunal made “legal errors about jurisdiction” and even made factual findings unsupported by evidence.
The core of the appeal is to clarify whether a regulator can informally prompt a platform to enforce its own rules without getting dragged into lengthy litigation.
As eSafety’s statement put it: this case will provide further clarity around the practice of regulators bringing to a platform’s attention material that potentially breaches their terms of service.
To explain further: Australia’s online safety watchdog wants to ensure she can send a notice to, say, X or Facebook, saying “this post likely violates your policies – take a look” as a quick way to protect users, without that act itself being bogged down in court appeals. X, on the other hand, seems to be using its terms of service and the courts to push back against being policed in this manner.
This tussle is fascinating for several reasons:
Jurisdictional Reach: The case shows how global platforms are subject to local laws, even if their TOS claim otherwise. X’s terms might stipulate California law or arbitration for user disputes, but that doesn’t immunize it from Australian regulatory orders when harm occurs in Australia. By appealing, eSafety is reinforcing that point: platforms operating in a country must comply with that country’s content rules. The Federal Court’s eventual decision will set an important precedent. If it sides with eSafety, it strengthens regulators’ hands internationally – showing that a platform’s fine-print cannot oust a nation’s jurisdiction over online harms on its soil.
The Enforcement Dance: X’s resistance also hints at the broader issue of how far a country can enforce its will on a foreign-headquartered social media firm. Initially, eSafety had to go to the Federal Court to enforce content removal, which it did successfully via injunctions. X complied for a time, but then the injunction lapsed. X’s legal manoeuvres (like contesting jurisdiction in the AAT) may be partly about asserting that it, not a foreign regulator, controls how its service is governed. This push-pull is something we’re seeing globally – from India to the EU, regulators are asserting authority over content moderation, while platforms guard their autonomy and worry about a patchwork of interventions.
Terms of Service as Shield? A striking element is X effectively saying: “Our terms of service and user agreements mean an Australian regulator can’t just interfere – any issue should be handled per our contract (which points to California).” If that argument were accepted, it would create a loophole where platforms could evade local accountability by invoking private contracts. The Tribunal seemed open to at least reviewing eSafety’s step, but the Federal Court will likely take a harder line. After all, Australian law (the Online Safety Act) explicitly gives the Commissioner authority to issue removal notices for prohibited content, and doesn’t exempt platforms that have their own TOS process. Allowing TOS to override law would set a dangerous precedent – imagine every social media giant effectively choosing which country’s rules to follow based on its TOS jurisdiction clause. The appeal seeks to slam that door shut and reaffirm the primacy of law in protecting citizens.
This case reflects a power shift in favour of regulators and courts stepping into the content moderation arena. Not long ago, platforms could say “we’ll police ourselves.”
Now, democratic governments (and even some not-so-democratic) are asserting that they have a say – whether through new laws (like Australia’s Online Safety Act, Germany’s NetzDG, the EU’s Digital Services Act) or through creative use of existing laws. The eSafety Commissioner’s determination to appeal shows regulators can be quite aggressive too. They are willing to litigate to ensure their mandates are respected, even by the mightiest tech companies.
Ultimately, the Federal Court’s upcoming ruling will be one to watch. If it overturns the Tribunal, eSafety will effectively have a green light to continue flagging content to platforms without fear of legal challenge – a quick, cooperative enforcement model.
If it instead upholds the Tribunal’s finding, it could throw a wrench in that model, potentially forcing the Commissioner to issue formal removal notices every time (slower and heavier) rather than friendly “heads-up” messages. Given that the independent review of the Online Safety Act has already backed eSafety’s approach, the winds favour the regulator.
🇨🇳 China’s Anti-Sanctions Law: New Compliance Headaches via Decree 803
Amid rising geopolitical tensions, China has been crafting tools to push back against foreign sanctions – and now it’s rolling them out. In March 2025, Beijing enacted State Council Order No. 803, officially promulgating the Regulations for the Implementation of the Anti-Foreign Sanctions Law. This update takes the sweeping principles of China’s 2021 Anti-Foreign Sanctions Law (AFSL) and translates them into actionable measures.
For foreign tech companies operating in or with China, these rules could pose strategic dilemmas: comply with Western sanctions and risk Chinese retaliation, or comply with China’s edicts and risk violating Western laws. It’s the latest development in the fracturing of the global legal environment for tech firms.
What is the Anti-Foreign Sanctions Law? Passed in June 2021, the AFSL created a legal basis for China to counter foreign sanctions that it deems unjustified. It was a direct response to U.S. and EU sanctions on Chinese entities over issues like Xinjiang and Hong Kong.
The law authorizes various countermeasures – from denying entry visas to seizing assets in China – against individuals and organizations involved in implementing foreign sanctions against China. However, until now it was a broad framework. Companies wondered: how exactly might China enforce it? Order No. 803 (the AFSL Implementation Regulations) answers that by detailing who, what, and how.
Key highlights of the new Regulations (which took effect immediately upon announcement on 24 March 2025) include:
Clarity on Countermeasures: The rules enumerate specific countermeasures China can impose. These include freezing or seizing an entity’s assets in China, and prohibiting or restricting transactions and cooperation with that entity. For example, if a tech company complies with U.S. sanctions by halting supplies to a Chinese telecom firm on a U.S. blacklist, China might retaliate by freezing that company’s bank accounts in China or banning Chinese firms from doing business with it. The regulations also include an “other necessary measures” catch-all, giving Chinese authorities flexibility to get creative.
Procedural Mechanisms & Coordination: The implementation rules designate how Chinese government departments will work together on anti-sanctions actions. The Ministry of Foreign Affairs (MOFA) typically takes lead on placing entities on China’s “Countermeasure List” (effectively a sanctions list of China’s own). The new regs clarify that relevant departments under the State Council coordinate on enforcement – so expect agencies like MOFA, Ministry of Commerce, financial regulators, etc., to have a joint role. For businesses, this means multi-faceted impacts: if listed, you might face visa bans (MOFA), asset freezes (courts or financial regulators), and restrictions in commerce (Commerce Ministry).
Private Right of Action – Lawsuits Encouraged: Perhaps the most dramatic element is that these rules encourage private lawsuits in Chinese courts by parties harmed by foreign sanctions. In fact, when the regulations dropped, China’s Supreme People’s Court issued a guiding decision in a first such case. What does this mean? Imagine a Chinese tech company that is cut off by a foreign partner due to U.S. export controls. Now, that Chinese company can sue the foreign partner in a Chinese court for damages, claiming the foreign company’s compliance with foreign sanctions infringed the Chinese company’s lawful rights. The new regs explicitly encourage private persons to bring lawsuits against individuals or companies that execute or assist in the execution of discriminatory restrictive measures of a foreign country. Therefore, if you, as a business, toe the line on a U.S./EU sanction in a way that hurts a Chinese entity, you could be sued in China for that act. This opens a new front of legal risk – not just government action, but litigation from counterparties.
Refusals to Deal as Unfair Discrimination: The regs target practices such as using foreign sanctions as an excuse not to do business with Chinese firms. Chinese authorities might treat it as unfair discrimination if, say, a cloud provider refuses service to a Chinese tech company citing U.S. sanctions. The implementation rules build on Article 15 of the AFSL, which broadly prohibits “organizations or individuals” from implementing foreign discriminatory restrictive measures. If a company complies with a foreign sanction in a way that harms Chinese parties, China now treats that as an actionable violation domestically. Companies might find themselves between a rock and a hard place, facing conflicting legal requirements from two major powers.
From a compliance standpoint, foreign tech firms must now treat the AFSL regime as very real. Before, it was somewhat theoretical, with no clear procedures. Now:
Companies must reevaluate risk exposure: Any company with significant presence in China (assets, employees, key customers) that also could be subject to foreign sanctions controls needs a game plan. For instance, a semiconductor firm exporting to China that is told by the U.S. government not to sell to certain Chinese clients may now risk Chinese retaliation or lawsuits if it complies with the U.S. order. Some may consider restructuring operations, such as using separate subsidiaries or intermediaries, to try to shield the China-facing part of the business from sanction compliance decisions. Others might seek specific licenses or exceptions from sanctioning governments to continue certain dealings, to show Chinese authorities they tried to mitigate harm.
Data and IP seizure risk: The regulations explicitly mention seizure of intellectual property as a countermeasure. This is striking for tech firms – patents or technology licences could potentially be nullified or appropriated as retaliation. In an extreme (hypothetical) case, a company placed on China’s Countermeasure List might see its local patents or trademarks compulsorily licensed to Chinese companies. It underscores that assets aren’t just cash – IP counts too.
Alignment with Other Chinese Blocking Measures: Note that China also has the 2021 Blocking Rules (MOFCOM’s rules to resist extraterritorial application of foreign laws) and the Unreliable Entity List framework. The new AFSL regs complement these. For example, under the Blocking Rules, China can issue orders saying certain foreign laws (sanctions) are not to be recognized. Now, the AFSL regs add the mechanism for punishment and lawsuits if someone does follow those foreign laws. Essentially, China is weaving a net of laws to deter businesses from complying with Western sanctions – a kind of legal counter-coercion. It’s an assertion of Chinese jurisdiction: “If you hurt our interests because of someone else’s law, we will hurt you.”
The jurisdictional power shift here is clear. Traditionally, companies largely prioritized compliance with U.S. sanctions, given the dominance of the U.S. financial system and fear of OFAC penalties. China is signalling that ignoring its anti-sanctions law will carry a price too. Multinationals may end up performing a delicate balancing act or even face a lose-lose choice. Some might decide certain markets or relationships are too costly to maintain under this duelling compliance pressure.
Already, since AFSL took effect in 2021, China’s MOFA has sanctioned 59 entities and 63 individuals (mostly foreign politicians, think tanks, defense companies, etc.). Those were largely symbolic. But now, with private lawsuits on the table, even companies not explicitly listed by MOFA could get dragged into court by a Chinese partner if they cut off supply due to foreign sanctions. For instance, if a European software company stops providing updates to a Chinese client on the U.S. Entity List, that client might sue for breach in China, citing AFSL regs to bolster their case that the cut-off was unlawful in China.
For tech founders and legal teams, it’s imperative to map out scenarios: If country A’s law says “Don’t ship product X to client Y” and China’s law says “Failure to ship to Y could get you sued or sanctioned,” what do you do? This might involve geostrategic segmentation of business lines.
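One practical way to start that mapping is a simple screening pass over your counterparties: cross-check each against the foreign restricted-party lists you are bound to follow and against China’s Countermeasure List, and flag relationships where the two regimes pull in opposite directions. The sketch below is a hypothetical Python illustration – the list contents, names, and fields are invented placeholders, not real designations.

```python
# Hypothetical conflict-screening sketch: flag counterparties where complying with a
# foreign sanctions list could create exposure under China's AFSL regime.
# All names and list entries below are invented placeholders, not real designations.
from dataclasses import dataclass

FOREIGN_RESTRICTED = {"ExampleChipCo"}          # parties you must stop supplying under foreign sanctions
CHINA_COUNTERMEASURE = {"ExampleDefenseCorp"}   # parties China restricts dealings with

@dataclass
class Counterparty:
    name: str
    china_nexus: bool   # do we supply or serve this party in or from China?

def classify(party: Counterparty) -> str:
    restricted = party.name in FOREIGN_RESTRICTED
    countermeasure = party.name in CHINA_COUNTERMEASURE
    if restricted and party.china_nexus:
        # Cutting this party off to satisfy foreign sanctions could trigger AFSL
        # countermeasures or a private lawsuit in China - escalate for legal review.
        return "CONFLICT - escalate to legal/compliance"
    if countermeasure:
        return "China countermeasure exposure - review dealings"
    if restricted:
        return "Foreign sanctions exposure - review dealings"
    return "No list hit"

for p in [Counterparty("ExampleChipCo", china_nexus=True),
          Counterparty("ExampleDefenseCorp", china_nexus=False),
          Counterparty("NeutralPartner", china_nexus=True)]:
    print(p.name, "->", classify(p))
```

A real programme would pull live list data, handle name matching and ownership chains, and feed flagged cases into legal review rather than deciding anything automatically.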
Some companies might err on the side of over-compliance to U.S./EU rules (given enforcement track record) but quietly prepare contingencies if China retaliates. Others might seek quiet diplomacy – e.g., ask Chinese authorities for exemptions or understanding in specific cases, or lobby Western governments for waivers for contracts in China.
The big picture is that tech companies are increasingly caught in the crossfire of geopolitics. Laws like AFSL (with its new implementation teeth) illustrate how national security and economic conflicts manifest as compliance nightmares for industry. It’s reminiscent of the dilemma European firms faced with the U.S. Iran sanctions vs. EU blocking statute, but on a much larger scale given China’s economic weight.
Moving forward, any company with East-West exposure should track who gets put on China’s Countermeasure List and who is winning or losing lawsuits under this regime. Also, it would be wise to review contracts with Chinese counterparts: some are now adding clauses addressing sanctions compliance (or non-compliance). For example, a contract may stipulate that if one party is unable to perform due to foreign sanctions, the parties will consult – trying to preempt a lawsuit by showing intent to cooperate.