Legal Update: Europe’s AI Act is Not Playing Nice with GDPR and That is a Big Problem for You
Did you think the AI Act would overshadow the GDPR? Not so fast. This post delves into their tangled relationship, and what it means for you.
In case you missed the emerging developments from the EU, the implementation of the AI Act is approaching at warp speed. But in its slipstream, it is kicking up a lot of questions, especially for those of us already wrestling with the GDPR. Now that the Council has released a fresh summary of how these two “titans” will interact, it's time for us to dig in.
The document is a confidential Council of the EU summary from April 2025 outlining the key takeaways from a joint debate on how the Artificial Intelligence (AI) Act will interact with the General Data Protection Regulation (GDPR).
Held under the Polish Presidency, the meeting brought together data protection and telecom policymakers from EU Member States.
The discussion focused on practical challenges of implementing both laws simultaneously, particularly for regulators and AI system providers.
Key concerns included the differing approaches of the two laws: the GDPR centres on data protection and fundamental rights, while the AI Act takes a product safety and risk-based approach to regulating AI systems.
This creates legal uncertainty, risk of double sanctions, and overlapping compliance obligations, especially for high-risk AI applications using personal data.
Member States called for clearer EU-level guidance, shared templates for assessments, closer cooperation between data protection and AI authorities, and integrated national governance structures.
The role of regulatory sandboxes was also discussed as a way to support innovation while ensuring legal compliance.
The document reflects growing pressure on EU institutions to harmonise interpretations, streamline enforcement, and clarify expectations ahead of the AI Act’s full implementation, particularly in scenarios where both laws apply at once.
Two Tech Laws, Two Philosophies 😵💫
Let us not sugar-coat it. The AI Act and the GDPR might both be EU laws, but they don’t exactly operate on the same wavelength.
Start with the GDPR 🛡️. This is a regulation built to protect people, and more specifically, their personal data. It’s about rights, consent, fairness, transparency, and accountability.
The GDPR applies whenever personal data is processed, no matter the technology involved. It has a clear focus: your data should only be collected and used if it serves a specific, lawful purpose. If that changes, you might need new legal grounds. If the risk increases, you might need to do a full assessment. It's rule-heavy but well understood, at least by now.
Now look at the AI Act 🤖. This one is all about systems. It isn’t focused on data subjects; it looks at whether the AI tool you are developing or using is potentially dangerous or deceptive.
The key trigger under the AI Act isn’t whether you are processing personal data, but whether your AI system falls into a high-risk category. If it does, you will need to tick a different set of boxes: documentation, monitoring, risk management, human oversight. It's less about privacy, more about accountability in development and deployment.
What’s causing headaches for everyone, from regulators to companies, is that these two laws overlap just enough to create confusion.
You could be deploying an AI system that’s fine under the AI Act, but that breaches data protection rules under the GDPR. Or vice versa.
At a recent EU Council discussion, Member States didn’t hold back. They said loud and clear: we need to sort this out.
The differences in logic and enforcement could lead to double compliance obligations, clashing rulings, and legal uncertainty for businesses.
Until there is legal harmonisation, developers, deployers, and regulators will all need to tread carefully.
Same AI, Two Sets of Rules 😬
One of the most pressing concerns raised by EU Member States is the looming risk of double compliance, or even worse, double penalties. And this isn’t theoretical. It is a real risk for anyone developing or using AI systems in Europe.
Here’s the situation: under the AI Act, certain systems are labelled as “high-risk” because of their potential to impact safety or fundamental rights. That includes systems used in hiring, credit scoring, education, law enforcement, and more. These systems must meet a strict set of technical, documentation, and oversight requirements. That’s one set of obligations.
But many of these same systems also rely on personal data. And the moment they do, the GDPR comes in. Now you are also responsible for purpose limitation, legal grounds, data minimisation, transparency, and user rights. That’s a whole second framework, with its own requirements, procedures, and consequences for non-compliance.
What if your AI system satisfies the AI Act but violates the GDPR? You could face enforcement action from your national market surveillance authority (MSA) under the AI Act and from the data protection authority (DPA) under the GDPR 🚨. Each of these bodies can impose serious sanctions and they are not required to coordinate.
This overlapping exposure is one of the biggest headaches for developers and legal teams. Member States flagged this problem clearly: without structured cooperation between MSAs and DPAs, we risk legal confusion, inconsistent rulings, and avoidable costs for businesses.
Some countries suggested practical fixes, like joint inspections, shared guidance, and common templates.
But the broader message was simple: if you are building or using AI in the EU, you need to treat both laws as active, enforceable, and equally serious. Ignoring one isn’t an option anymore.
So, What’s the Plan? 🗺️
There is no one-size-fits-all fix for the clash between the AI Act and the GDPR, but the European Commission is not standing still. Work is underway to bring clarity, reduce confusion, and help both businesses and regulators find a practical path forward.
🛠️ First, new guidance is in the pipeline. The Commission, together with the European Data Protection Board (EDPB), is developing materials to clarify how the AI Act interacts with the GDPR. This includes detailed explanations of where obligations overlap, how they differ, and how to handle risk assessments and subject rights without duplication. The guidance will likely include templates, sector-specific models, and implementation tools that can serve as a practical reference point for organisations building or deploying AI systems in the European Union.
🎓 Secondly, training and workshops have been proposed. Several Member States suggested that both public and private actors would benefit from targeted training sessions. These could provide clarity on regulatory expectations and offer a much-needed space for dialogue between regulators and industry.
📑 Third, standardised documentation tools are under discussion. The idea is to streamline compliance by offering joint templates that cover both the AI Act’s Fundamental Rights Impact Assessment and the GDPR’s Data Protection Impact Assessment. This would save time, reduce paperwork, and help businesses meet both legal requirements more efficiently.
Whether all of this will be enough to solve the dual compliance challenge remains to be seen. But it is a serious attempt to reduce confusion, increase consistency, and provide more certainty for everyone involved. It is a step in the right direction.
Can’t Train AI with Just Any Data 🧠
If your AI model relies on personal data for training, the GDPR’s “purpose limitation” principle is non-negotiable. It requires that personal data must only be used for the specific purpose it was originally collected for. If you want to use that data for something else, the new purpose must be compatible or you must obtain a new legal basis, such as fresh consent.
For example, if you collected data from users to improve customer service interactions, you cannot simply repurpose that data to train an AI model for automated emotional recognition. That would likely exceed the original purpose, especially if users were never informed or asked about it.
Many developers continue to overlook this. The temptation to reuse data, especially large datasets collected from previous interactions, is strong. But under the GDPR, that is highly risky. The law does not tolerate vague or shifting justifications for personal data use.
Member States have highlighted that there is still no consistent EU-wide interpretation of what counts as a "compatible" purpose. This legal uncertainty leaves organisations in a precarious position, either trying to guess the limits of the law or halting innovation altogether due to compliance fears.
Until the European Commission or the European Data Protection Board publishes practical guidance on this issue, businesses must tread carefully. It is crucial to document purpose decisions clearly, conduct impact assessments when necessary, and seek legal advice when in doubt. Using personal data to train AI systems is not off-limits, but it comes with strict rules and regulatory scrutiny.
Sandboxes: The AI Experiment Zone
By August 2026, each EU Member State is required to establish at least one AI regulatory sandbox. These sandboxes are designed to allow businesses to test AI systems in real-world conditions, but within a controlled and supervised setting. The goal is to support innovation without sacrificing legal compliance or the protection of fundamental rights.
However, the moment an AI system being tested involves personal data, data protection authorities (DPAs) must be part of the process from the outset.
This is not optional.
It is necessary to ensure that businesses do not unintentionally breach the GDPR while experimenting with new AI technologies.
Some Member States (e.g., Germany, France, Spain) have already begun laying the groundwork for their sandboxes. Others are referencing their past experiences in financial technology regulation to shape their AI-specific frameworks.
Despite the different starting points, the shared message is clear: these sandboxes will only work if they are built on cooperation, legal clarity, and transparency.
To make this possible, Member States have suggested several practical components:
Early legal advice from both AI and data protection regulators to avoid later enforcement issues.
Integrated impact assessments that meet both AI Act and GDPR standards.
Published summaries of each sandbox project to ensure public trust and oversight, without revealing proprietary data.
These regulatory sandboxes have the potential to give businesses confidence and direction as they develop AI tools in a compliant manner. But they must be implemented with real oversight, clear conditions, and meaningful support. Without these elements, they risk becoming procedural exercises with limited value.
Two Impact Assessments, One Problem 📋
Under the GDPR, organisations are already required to conduct a Data Protection Impact Assessment (DPIA) when they process personal data in ways that are likely to result in high risks to individuals. This is a familiar process for many teams working in compliance, data protection, and technology. It requires a clear explanation of the risks to data subjects, the purposes of the processing, the lawful basis, and the safeguards in place to protect rights and freedoms.
The AI Act, however, introduces an additional requirement for certain high-risk AI systems: a Fundamental Rights Impact Assessment (FRIA). This new assessment looks beyond just data protection to examine the wider implications of the AI system on people’s fundamental rights. It includes factors like fairness, equality, non-discrimination, and human dignity ⚖️.
Although the objectives of the DPIA and FRIA partially overlap, the structure, format, and legal framing are different. Member States are concerned that developers will have to complete two sets of documentation, repeating much of the same analysis in two different formats. This could lead to unnecessary administrative burden and compliance fatigue.
To avoid duplication, the European Commission is currently developing a standardised FRIA template designed to complement the existing DPIA. The aim is to make it possible for organisations to perform a single, integrated assessment that satisfies the requirements of both laws.
Member States have also called for practical guidance and examples of how to carry out both assessments efficiently. If implemented thoughtfully, this could help ensure that impact assessments are not reduced to bureaucratic exercises but serve as meaningful tools to improve the safety, fairness, and accountability of AI systems.
What Should You Be Doing Right Now? 🕵️
If you are building, deploying, or even planning to use AI systems in Europe, this is not a moment to wait on the sidelines. The AI Act and the GDPR are both moving forward, and the overlap between them is no longer something you can ignore.
Start by mapping your AI systems. Identify which of them are likely to be classified as high-risk under the AI Act. At the same time, assess which of these systems process personal data and therefore fall under the scope of the GDPR. There will almost certainly be overlap. Knowing exactly where that overlap occurs will help you avoid blind spots in compliance planning.
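To make the mapping exercise concrete, here is a minimal sketch of what an internal AI inventory check could look like. Everything below is illustrative: the system names, the boolean flags, and the helper function are hypothetical, not drawn from the Council summary or either regulation.

```python
# Hypothetical compliance-mapping sketch: flag systems that fall under
# BOTH the AI Act's high-risk regime and the GDPR (personal data).
# All entries and field names are invented for illustration.

AI_SYSTEMS = [
    {"name": "cv-screening",    "high_risk": True,  "personal_data": True},
    {"name": "demand-forecast", "high_risk": False, "personal_data": False},
    {"name": "credit-scoring",  "high_risk": True,  "personal_data": True},
]

def dual_scope(systems):
    """Return the names of systems where both regimes apply at once,
    i.e. the overlap to prioritise in compliance planning."""
    return [s["name"] for s in systems if s["high_risk"] and s["personal_data"]]

print(dual_scope(AI_SYSTEMS))  # → ['cv-screening', 'credit-scoring']
```

Even a simple table like this forces the right question for each system: which authority (MSA, DPA, or both) would come knocking, and which set of obligations attaches.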
Next, review your existing Data Protection Impact Assessments (DPIAs) 📄. Ask yourself whether they can be expanded or adapted to include the new Fundamental Rights Impact Assessment (FRIA) requirements under the AI Act. The aim should be to streamline, not duplicate. But at this stage, you will need to be thorough.
You should also prepare for dual reporting. Until regulators align their processes, you may need to engage both your national Data Protection Authority and your Market Surveillance Authority on the same AI project. Keep documentation clear and accessible for both.
We will be keeping a close eye on further guidance from the European Commission 📘. Subscribe to our newsletter so you don’t miss out on updates, especially once templates and model compliance documents are released.
Do not assume that passing AI compliance checks means you are fine on privacy. The two regimes have different goals: the AI Act focuses on the risks posed by the system itself, while the GDPR’s privacy rules focus on how data is collected and used. Both matter. Get advice that covers the whole picture: tech, data, and user rights.
When your team does risk assessments for AI, combine your efforts. You will likely need two versions: one for system impact and one for data use. Reusing documentation is fine, if done correctly. But make sure each assessment speaks the right language. That will save time, stress, and possible enforcement later.
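One way to keep the two assessments "speaking the right language" while reusing shared analysis is to hold them in a single integrated record, with common sections written once and regime-specific sections kept apart. The structure below is purely a sketch under that assumption; the field names are hypothetical and do not reflect the Commission's forthcoming FRIA template.

```python
# Hypothetical integrated assessment record: shared analysis is captured
# once, while DPIA (GDPR) and FRIA (AI Act) sections stay distinct.
# Field names are illustrative, not an official template.
from dataclasses import dataclass, field

@dataclass
class IntegratedAssessment:
    system_name: str
    # Shared analysis, reusable by both assessments
    risks_to_individuals: list = field(default_factory=list)
    mitigations: list = field(default_factory=list)
    # DPIA-specific (GDPR)
    lawful_basis: str = ""
    # FRIA-specific (AI Act)
    affected_fundamental_rights: list = field(default_factory=list)
    human_oversight_measures: list = field(default_factory=list)

    def complete(self) -> bool:
        """Both regimes must be addressed before sign-off."""
        return bool(self.lawful_basis) and bool(self.affected_fundamental_rights)
```

The design choice is the point: duplication is avoided in the shared sections, but a `complete()` check refuses to pass until both the GDPR and AI Act sides are filled in.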