Policy Update: Facial Recognition, AI and Data Protection Under Scrutiny
Global regulators confront AI through human rights, data protection, transparency laws, and international cooperation statements.
The past weeks have brought an unusual convergence of debates across the world on the limits of artificial intelligence and digital governance. From London to Bonn, from Buenos Aires to the APEC ministerial in Singapore, regulators and lawmakers are grappling with reforms of a scope not seen before. Taken together, these developments show a world trying to hold on to common principles of accountability while responding to very different political and institutional pressures.
In this newsletter, we will cover:
How the UK regulator confronted the Metropolitan Police over live facial recognition, warning that surveillance without strict human rights safeguards risks losing legitimacy in the very society it claims to protect.
Why Germany’s data protection authority is asking hard questions about memorised data in large AI models, challenging the tech industry to prove that rights of access, correction, and deletion are not an illusion.
Why Argentina’s attempt to regulate artificial intelligence with transparency rules, risk categories, and a public registry of high-risk systems sets a precedent that other countries in the region may not be ready to follow.
The latest APEC ministerial statement on digital and AI governance, where member states agreed on trust and inclusivity yet stopped short of enforceable commitments, leaving many to wonder what cooperation without accountability really achieves.
Facial Recognition and Human Rights in the United Kingdom
The Equality and Human Rights Commission in the United Kingdom has issued a clear warning to the Metropolitan Police on its use of live facial recognition technology.
The regulator states that this form of surveillance cannot exist outside the framework of human rights law and that the Met Police must be able to demonstrate compliance with the requirements of necessity, proportionality, and legality.
The Commission’s position is significant because it places responsibility not only on the technology itself but also on the operational and policy decisions around deployment.
The Commission pointed out that facial recognition is not simply a tool that operates neutrally; it raises profound concerns about privacy, discrimination, and due process.
This is not a call for a ban. Rather, it is a reminder that in the United Kingdom, any policing technology that identifies people in public spaces must be embedded within the legal guarantees of the Human Rights Act.
The Met Police has defended the usefulness of facial recognition for public safety, but the regulator has set a marker: effectiveness cannot override the need for safeguards.
The Commission insists that legitimacy depends on transparency and accountability, not only efficiency.
In the immediate future, law enforcement agencies in Europe will face increasing scrutiny when they rely on biometric technologies.
Public trust will be fragile if these systems are not transparent.
The Commission’s insistence on embedding rights-based principles into everyday policing shows that human rights law can be a practical check on algorithmic surveillance.
Germany’s Consultation on Data Protection and AI Models
Germany’s Federal Data Protection Commissioner has launched a public consultation on how personal data is handled in large language models and similar artificial intelligence systems.
The document confronts a question that is now widely discussed but still unresolved: when models memorise data, including fragments of personal information, what obligations arise under data protection law?
The consultation raises questions about whether training data can ever be truly anonymised, whether differential privacy or deduplication techniques can sufficiently mitigate risks, and whether the outputs of a model that has memorised personal data constitute a form of processing under Article 4(2) of the GDPR.
Importantly, the consultation acknowledges that affected individuals have rights of access, rectification, and erasure, yet it admits that enforcing those rights against the architecture of current models is extremely difficult.
How can someone demand deletion of their personal data if no one can locate it within billions of parameters?
Germany is opening the door for civil society, academia, and industry to provide practical evidence and technical assessments before regulatory approaches harden.
The questions are deliberately wide-ranging, covering anonymity, memorisation, and privacy attacks.
By doing so, the consultation highlights the tension between innovation and compliance with the GDPR.
The deeper story is that Europe is beginning to test the limits of its flagship data protection regime against new forms of artificial intelligence.
The GDPR was drafted before large language models were deployed at scale.
Now, regulators are asking whether rights like deletion and access can be operationalised in practice.
Argentina’s Proposed AI Data Protection Law
In Argentina, a Bill has been introduced in the Chamber of Deputies to regulate the use of personal data in artificial intelligence systems.
The proposal covers developers, providers, and operators, and applies to any AI system that processes personal data in Argentina or targets individuals in the country.
The draft law sets out obligations of transparency, requiring that operators explain in accessible language the purpose, logic, and level of automation in AI systems.
It mandates risk assessments for all systems, with categories ranging from low to high risk.
High-risk systems in sensitive sectors such as health, finance, education, and justice would be subject to special controls.
Perhaps most notably, the bill creates a National Registry of AI Systems. Medium- and high-risk systems would have to be registered with details of the developer, risk evaluation, and data protection policies.
The registry would be public, establishing a form of democratic oversight.
The bill also grants the regulator power to demand audits, suspend systems posing grave risks, and impose significant fines proportional to business revenues.
The proposal aligns Argentina with emerging trends in international regulation, drawing inspiration from the European GDPR and the proposed EU AI Act.
Yet it is also tailored to local specificities, inviting the provinces and the city of Buenos Aires to adopt its provisions.
What stands out is the insistence that AI development cannot be a private domain of companies alone.
Argentina is asserting that AI is a matter of public interest.
The bill sends a message that technological innovation must advance alongside accountability to citizens.
The APEC Digital and AI Ministerial Statement
At the Asia-Pacific Economic Cooperation ministerial meeting held from 4 to 6 August 2025, ministers adopted a Digital and AI Ministerial Statement that seeks to set a cooperative agenda across member economies.
The statement stresses trust, inclusion, and sustainable growth in the digital economy.
It acknowledges the opportunities of artificial intelligence while calling for frameworks that manage risks, safeguard human rights, and ensure interoperability across borders.
While the statement is political rather than binding, it represents a collective effort to maintain common ground among economies with very different regulatory traditions.
It refers to the need for responsible AI governance, cross-border data flows with trust, and capacity building for smaller economies.
APEC’s strength is that it creates consensus language that can guide regional cooperation. However, its weakness is that it rarely sets enforceable obligations.
The ministerial statement does not commit members to specific laws, but it shows that even in a diverse region, governments recognise the need to discuss responsible AI and digital governance in the same breath as economic development.
The interesting point here is that Asia-Pacific economies are asserting their priorities in terms of trust and inclusivity, while avoiding the prescriptive model of the European Union.
This could produce a distinct regional voice that emphasises voluntary cooperation rather than strict uniform rules.
Artificial intelligence is not advancing in a legal vacuum. In reality, it is advancing in defiance of rules that were never built to contain it. Governments are scrambling to retrofit protections, yet corporations exploit loopholes faster than regulators can respond. The result is a precarious bargain with public trust.
We welcome your reflections and experiences on these developments. Reply to this newsletter or leave a comment. Your perspective will help sharpen the debates that matter most.