Issue 24: AI Export Laws, Judicial Guidance, EU Monitoring, and Biased Votes by AI Bots
The newsletter reviews new United States AI export controls, EU AI enforcement research, UK judicial AI guidance, and Dutch regulator warnings about biased political AI chatbots.
Artificial intelligence governance is developing in a manner that exposes an uncomfortable truth about contemporary regulatory ambition, which is that states are now asserting control over digital systems with an intensity previously reserved for physical infrastructure. This trajectory reflects neither technological optimism nor pessimism. It reflects a growing insistence that AI must fit within existing structures of authority even when those structures are under strain.
United States: H.R. 5885 - GAIN AI Act of 2025
In the United States, the House of Representatives introduced the GAIN AI Act of 2025 on 31 October, under bill number H.R. 5885.
The text mandates that entities seeking export licences for advanced artificial-intelligence-related chips must certify that United States persons enjoy priority access to those chips.
The bill was referred to the House Committee on Foreign Affairs.
What this measure highlights is the increasing linkage between export controls, chip supply chains, and the governance of AI.
The bill reflects a recognition that advanced semiconductors and AI hardware count as strategic assets, and that policy-makers are prepared to place conditions on their export in order to safeguard national interest.
At the same time, the certification requirement implies that firms will need to structure their compliance mechanisms and their internal disclosure processes to satisfy governmental oversight.
Though still at an early stage in the legislative process, this bill tells us that AI regulation in the United States is no longer confined to model behaviour or data privacy, but is extending into hardware, supply-chain resilience, and the conditions under which AI capacities are exchanged internationally.
It would be prudent for organisations engaged in AI hardware design, export, or supply-chain logistics to monitor not only the bill's final outcome but also how the draft sets licence criteria, defines “advanced AI chips,” delimits the geographic scope of “countries of concern,” and specifies how priority access is defined and enforced.
European Union: Artificial Intelligence Act (EU) and monitoring of EU law
The European Parliament’s Policy Department for Justice, Civil Liberties and Institutional Affairs published a study dated 30 October 2025, entitled “AI and monitoring the application of EU law”.
The report surveys how AI techniques are being used, and might be used, to support the monitoring and enforcement of EU law, including in legislative drafting, transposition of directives, administrative decisions, compliance tracking, and social-behavioural governance.
This work draws attention to the fact that regulation of AI is not only about controlling AI systems themselves but also about using AI systems as regulatory tools.
The symmetry is significant: AI becomes both the object of governance and a means of governance.
The study points out that while there are existing AI-monitoring applications in EU law enforcement and administrative practice, the potential for more advanced deployment remains under-analysed, including how fundamental rights and legal principles should be respected when AI assists regulatory monitoring.
This suggests two key observations. First, regulated entities in the EU should anticipate that AI governance may include oversight via AI-driven monitoring, meaning that their compliance processes may themselves become subject to enhanced scrutiny through technological means.
Second, the EU’s legislative timetable for AI regulation, together with related frameworks such as the risk-based classification and obligations under the AI Act, continues to generate implications for how AI can be deployed in public-administration and regulatory-enforcement settings.
United Kingdom: Judicial Guidance on AI Use by the Judiciary
In England and Wales, the Judiciary of England & Wales (Courts and Tribunals Judiciary) published updated guidance on 31 October 2025, addressing the use of AI by judicial office-holders.
The document replaces the April 2025 edition, enlarging the glossary of terms, expanding sections on bias in training data and “hallucinations” (incorrect or misleading outputs), and emphasising confidentiality obligations: judicial officers are reminded not to input private or non-public information into publicly accessible AI tools, and to treat any inadvertent disclosure as a data incident.
The guidance underlines that any use of AI by or on behalf of the judiciary must be consistent with its overarching obligation to protect the integrity of the administration of justice and to uphold the rule of law.
Lord Justice Birss, Lead Judge for Artificial Intelligence, emphasised the personal responsibility of judicial office-holders for material produced in their name.
This means that even within the judicial context, where the use of AI might appear less commercially oriented and more internal or procedural, strong guidelines are being imposed.
The guidance shows that where AI is used, it must be carefully controlled, with human accountability non-negotiable.
As AI enters institutional and public sectors, the threshold for documentation, oversight, verification, and risk mitigation increases.
While the guidance is directed at judicial office-holders, its principles may serve as a benchmark for other professions engaged in regulatory or public functions.
Netherlands: Autoriteit Persoonsgegevens (AP) Warns of Biased Voting Chatbots
In the Netherlands, the data-protection authority issued a warning on 21 October 2025 that AI chatbots used as voting-aid tools produced highly distorted and biased results.
The AP’s study involved four major chatbot platforms tested against various fictional voter profiles. It found that the systems often offered advice heavily favouring just two political parties, regardless of user input.
While such tools may appear to offer impartial guidance, the AP concluded that they present a threat to electoral integrity and democratic participation.
The experiment revealed that, in contrast to established voting-advice applications, the chatbots did not represent the full range of parties and funnelled users disproportionately to the same two options.
The AP labelled the advice “unreliable and clearly biased.”
This development is significant because it sits at the intersection of automated systems, political influence, transparency, and fairness.
The findings reflect the risks of deploying generative-AI tools or chatbots in sensitive contexts such as elections, where the decisions affected are inherently public and governed by principles of fairness and neutrality.
Organisations operating chatbots or AI-based recommendation systems in civic or electoral domains will need to consider how these systems are designed, how their outputs are audited, and how they are communicated to users.
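As an illustration of one such output audit, the skew the AP described can be surfaced with even a crude frequency check across a set of simulated voter profiles. The sketch below is a hypothetical harness, not the AP's actual methodology: the party names, the input format, and the flagging threshold are all assumptions for illustration.

```python
from collections import Counter

def audit_recommendations(recommendations):
    """Summarise how often each party is recommended across test profiles.

    `recommendations` is a list of party names, one per simulated voter
    profile. (Hypothetical harness: real audits would also vary prompts,
    languages, and phrasing, and log full transcripts.)
    """
    counts = Counter(recommendations)
    total = len(recommendations)
    shares = {party: count / total for party, count in counts.items()}

    # Flag parties whose share of recommendations far exceeds a uniform
    # baseline; a crude signal of the "funnelling" the AP observed.
    # The 2x threshold is an arbitrary illustrative choice.
    baseline = 1 / len(counts)
    flagged = {p: s for p, s in shares.items() if s > 2 * baseline}
    return shares, flagged

# Example: ten fictional profiles, with recommendations heavily skewed.
recs = ["A"] * 6 + ["B"] * 2 + ["C", "D"]
shares, flagged = audit_recommendations(recs)
```

With these toy inputs, party "A" receives 60% of recommendations against a uniform baseline of 25%, so it is flagged; a real audit would pair such counts with documentation of the profiles used and how results are communicated to users.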
Final Thoughts
These developments, while distinct in jurisdiction and focus, together map out an important dimension of current tech-law development: regulation and governance of AI are expanding beyond traditional domains of data privacy and algorithmic fairness to include hardware exports, institutional use of AI within justice systems, monitoring of regulatory compliance via AI, and algorithmic impact on democratic processes.
What emerges is a reminder of the importance of governance frameworks that address the full lifecycle of AI, from hardware and supply chain, through software and model training, to public deployment and institutional adoption.
What remains clear is that the legal and regulatory environment for AI is multifaceted: export-control law, public-sector AI governance, electoral regulation, fundamental-rights frameworks, and compliance monitoring are all part of the field.
On a personal note, I have always been sceptical about how AI might influence election results, particularly in countries that lack a free and fair electoral process.
I will close this edition here with an invitation to continue the discussion. Readers who wish to reflect on any part of this analysis are encouraged to share considered thoughts in the comments so that future editions benefit from a wider range of perspectives.