Law Reform: AI in Europe Is Gradually Becoming Over-regulated
A new study analyses how the EU AI Act overlaps and interacts with other EU digital laws, urging coherence, simplified compliance, and coordinated governance.
Europe has regulated AI into a corner. The Artificial Intelligence Act, though hailed as a global benchmark, exemplifies the tension between ambition and practicality that defines modern EU digital regulation. Several other laws likewise promise data protection and support for AI innovation, yet together they form an administrative maze that risks stifling the very technological progress Europe claims to champion.
Access our EU AI Tracker here to monitor the latest developments in EU artificial intelligence laws and regulations.
The AI Act and Europe’s Digital Legal Framework
When the European Union adopted the Artificial Intelligence Act in June 2024, it presented the world’s first comprehensive attempt to regulate artificial intelligence in a structured and rights-based way.
The AI Act’s ambition is significant. It aims to ensure that AI systems placed on the European market are trustworthy, safe, and compliant with fundamental rights.
Nonetheless, this new framework operates alongside a web of other digital laws that already govern data protection, cybersecurity, market competition, and digital services.
The study prepared for the European Parliament’s Committee on Industry, Research and Energy (ITRE) examines this web in detail.
Its findings matter for policymakers, regulators, and any organisation that develops or deploys AI in the EU.
At the core of the study lies a simple insight: each law in the EU digital framework is individually well designed, but together they create a structure that is heavy, overlapping, and sometimes unclear.
The AI Act was intended to complete the EU’s digital single market, yet its interactions with other frameworks have introduced a new kind of complexity.
Understanding the Purpose and Logic of the AI Act
The AI Act rests on a risk-based approach.
It prohibits a narrow category of AI practices that are deemed unacceptable, imposes strict controls on high-risk systems, and sets transparency duties for low- and medium-risk uses such as chatbots or generative tools.
It also creates separate obligations for general-purpose AI models that are powerful enough to have systemic effects.
This tiered logic mirrors the EU’s long-standing product safety regime but extends it to an entirely new domain.
The study explains that this approach was a deliberate choice.
Before the AI Act, artificial intelligence was largely unregulated, and Member States risked introducing divergent national laws.
The new regulation harmonises the internal market by setting a single framework.
Nonetheless, that harmonisation comes at a cost.
The report highlights that the AI Act stretches the boundaries of traditional product regulation.
It asks conformity assessment bodies to evaluate not only technical safety but also respect for fundamental rights and societal values.
That task, the authors note, may prove difficult because human rights compliance cannot easily be reduced to technical checklists.
For high-risk systems, the obligations are extensive.
Providers must establish risk management systems, data governance protocols, technical documentation, human oversight, and quality management frameworks.
They must demonstrate accuracy, robustness, and cybersecurity, and affix the familiar CE marking.
These are familiar requirements in manufacturing but novel in software development.
The study observes that while the AI Act promotes trust, it also places significant compliance responsibilities on providers and deployers alike, particularly smaller businesses without specialised legal teams.
An additional layer applies to general-purpose AI models, known as GPAIs.
These are the large, adaptable models that underpin tools for text, image, or speech generation.
The AI Act introduces transparency, copyright compliance, and cybersecurity duties for all GPAI providers, and enhanced obligations for those with systemic risk.
The report notes that a model trained with computing power above a specific threshold is presumed to be of systemic importance.
This presumption was designed to ensure oversight of the most powerful systems but may prove too static as technology progresses.
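To make the presumption concrete, the sketch below is a minimal, purely illustrative calculation. It assumes the 10^25 floating-point-operation threshold commonly cited in connection with the Act's systemic-risk presumption and the rough "6 x parameters x tokens" heuristic for estimating training compute; the function names and example figures are hypothetical and do not come from the study.

```python
# Illustrative only: checking a model's estimated training compute against the
# systemic-risk presumption threshold discussed in the AI Act (assumed 1e25 FLOPs).

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # assumed presumption threshold


def estimated_training_flops(parameters: float, training_tokens: float) -> float:
    """Approximate total training compute using the common 6 * N * D heuristic."""
    return 6 * parameters * training_tokens


def presumed_systemic_risk(parameters: float, training_tokens: float) -> bool:
    """True if the estimated compute meets or exceeds the presumption threshold."""
    return estimated_training_flops(parameters, training_tokens) >= SYSTEMIC_RISK_THRESHOLD_FLOPS


# Hypothetical example: a 70-billion-parameter model trained on 15 trillion tokens
# lands around 6.3e24 FLOPs, below the threshold, so no presumption would apply.
print(presumed_systemic_risk(70e9, 15e12))  # False
```

Because the threshold is a fixed number while training compute for frontier models keeps rising, a criterion like this captures more models over time without any change in the law, which is the static quality the study cautions against.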
The Act also includes innovation measures.
Open-source AI systems are mostly exempt from its scope, and each Member State must establish a regulatory sandbox to allow experimentation under supervision.
The study recognises these as positive steps but cautions that their success will depend on national authorities’ willingness to interpret the rules in an enabling way.
Intersections Across the Digital Legal Framework
The central part of the study examines how the AI Act interacts with other EU digital laws.
The most significant areas of intersection involve the GDPR, the Data Act, the Cyber Resilience Act, the Digital Services Act, the Digital Markets Act, and the NIS2 Directive.
Each of these laws is vital in its own domain, but together they form a dense compliance environment that can be difficult to understand.
Under the GDPR, many AI systems that process personal data already fall within an existing legal structure that emphasises lawfulness, transparency, and accountability.
The AI Act introduces additional assessments, particularly the Fundamental Rights Impact Assessment (FRIA), which often overlaps with the Data Protection Impact Assessment (DPIA) required under the GDPR.
The study observes that both tools serve legitimate goals but differ in supervision and procedure.
As a result, organisations may find themselves conducting two separate yet similar evaluations for the same system.
The report recommends aligning these assessments through joint guidance and mutual recognition.
The Data Act establishes rules for access to and sharing of data generated by connected products and services.
AI providers who hold or use such data may therefore face obligations under both Acts.
For instance, they must ensure that data is shared in a fair and secure way while also maintaining compliance with AI risk and transparency duties.
The study points out that these cumulative requirements could discourage experimentation and impose a heavy administrative load, especially in cross-border contexts or when data flows from third countries.
In the field of cybersecurity, the Cyber Resilience Act imposes mandatory security standards for digital products.
Many of these overlap with the AI Act’s own cybersecurity and robustness requirements.
Although the CRA allows for a presumption of compliance under certain conditions, the partial alignment of the two frameworks can still produce uncertainty.
The report suggests that greater coordination between the responsible authorities would help prevent duplicated audits or inconsistent expectations.
For large online platforms and search engines, the Digital Services Act and the Digital Markets Act add yet another layer.
These instruments require transparency and accountability for algorithmic systems used in content moderation, advertising, or recommender functions.
The study notes that such platforms might be subject to simultaneous risk assessments under both the AI Act and the DSA.
Moreover, gatekeepers under the DMA must meet obligations relating to data access and interoperability, which could overlap with AI governance duties.
While the DMA does not yet classify AI systems as core platform services, their growing economic role makes such an intersection increasingly relevant.
The NIS2 Directive, which focuses on network and information security, introduces reporting and risk management obligations for essential and important entities.
Many of these entities will also deploy AI systems covered by the AI Act.
The report finds that the incident reporting and supply-chain management requirements of both frameworks often intersect, potentially creating parallel channels of compliance.
Across all these examples, the study reaches a consistent conclusion: the EU’s digital regulatory framework has become internally complex.
Each law serves a clear purpose, but their cumulative effect risks slowing innovation and discouraging smaller actors from entering the market.
Governance and Implementation Challenges
A key section of the study analyses the institutional architecture created by the AI Act.
At its centre stands the European AI Office, which operates within the European Commission.
The AI Office is responsible for supervising general-purpose AI models, supporting regulatory sandboxes, and coordinating national authorities.
It works alongside the European AI Board and interacts with data protection authorities, market surveillance bodies, and other sectoral regulators.
This structure reflects an attempt to balance central oversight with national autonomy.
However, the study raises concerns about potential overlaps of competence.
The AI Office’s mandate intersects with that of existing regulators, yet it lacks a distinct legal personality and depends on Commission resources.
Without sufficient independence or capacity, its effectiveness could be limited.
The authors warn that fragmented governance may lead to inconsistent enforcement and uncertainty for businesses seeking to comply across multiple jurisdictions.
At the national level, market surveillance authorities will play a key role in enforcing the AI Act’s provisions.
These bodies are accustomed to monitoring tangible goods such as machinery or medical devices.
The report suggests that applying the same methods to algorithmic systems will require substantial training, technical expertise, and coordination.
Inconsistent capacity among Member States could produce uneven application of the law.
The study also stresses the importance of cooperation among supervisory authorities.
Given that AI systems often implicate personal data, cybersecurity, and consumer rights simultaneously, regulators must establish clear protocols for joint inspections and shared investigations.
The report encourages the development of consistent EU-level guidance to avoid contradictory interpretations.
Toward a More Coherent Digital Future
The final section of the study turns from analysis to reflection.
It recognises that the EU’s digital legislative framework represents an extraordinary achievement in scope and ambition.
However, the authors argue that the cumulative regulatory weight could unintentionally hinder innovation, particularly among small and medium-sized enterprises.
Short-term progress, they suggest, can come through practical coordination.
Supervisory bodies should publish joint guidance, share expertise, and recognise equivalent compliance efforts.
For instance, if a company completes a DPIA under the GDPR that already covers the relevant risks, it should not be required to repeat the process for a FRIA under the AI Act.
Similarly, Member States could align their sandbox procedures to ensure consistent conditions for testing and experimentation across borders.
In the medium term, modest legislative clarifications could help.
These might include refining definitions of high-risk systems, clarifying the interaction between cybersecurity obligations, and specifying the responsibilities of different actors in the AI supply chain.
The goal would not be to weaken regulation but to make it easier to understand and apply.
Over the longer term, the study recommends a more strategic review of the EU’s digital architecture.
It argues that Europe should consider consolidation and simplification of its legal instruments, aiming for coherence rather than volume.
The authors envision a framework where data, AI, cybersecurity, and digital market rules operate as complementary parts of a single ecosystem rather than as separate compartments.
This integrated model would better support agile compliance while preserving the Union’s core values of trust, rights, and safety.
The report concludes with a broader reflection on competitiveness.
The European Union’s AI investment levels remain lower than those of the United States and China.
The authors caution that complex regulation may further reduce the continent’s ability to attract and retain AI innovation.
Simplification, coordinated enforcement, and strategic alignment are therefore not merely administrative improvements; they are essential to ensuring that the EU can sustain a globally competitive AI industry.
A Framework in Need of Balance
The European Parliament’s study makes one theme clear: the AI Act embodies both the promise and the challenge of Europe’s digital governance.
It is unprecedented in global regulation, but its effectiveness will depend on how well it integrates with surrounding laws and institutions.
Each layer of the digital framework, from privacy and data access to cybersecurity and platform regulation, serves a vital purpose, but the combined system must remain workable, especially for the innovators and smaller enterprises that form the backbone of Europe’s technology sector.
Achieving that balance requires coordination, clarity, and a shared vision of digital policy.
The authors’ recommendations are measured and pragmatic.
They do not call for deregulation but for coherence.
They remind policymakers that trust and innovation are not mutually exclusive.
By streamlining its legal framework and ensuring consistent enforcement, the EU can maintain its commitment to fundamental rights while enabling a dynamic and competitive AI ecosystem.
In essence, the study offers a roadmap for governance that is both principled and practical. It recognises that the challenge is not the existence of rules but their integration.
If the EU succeeds in aligning its digital laws, the AI Act could stand not only as a regulatory milestone but also as the foundation of a genuinely coherent European digital future.
The questions raised by the AI Act’s complexity deserve sustained discussion. Readers are invited to share perspectives or industry experiences that can inform future editions.