Policy Update: Your Doctor Is Using AI, But Did You Sign Up for That?
The real risk of AI in healthcare is legal, not technical. Who is to blame when the AI algorithm gets it wrong?
Hospitals are using AI to read medical scans, guide diagnoses, and speed up clinical decisions. Behind the technology is a growing pile of legal issues no one seems ready to face. Consent is unclear. Accountability is disputed, and the rules overlap in ways that create more confusion than clarity. This newsletter looks at what happens when three major legal frameworks meet modern medicine, and why patients, doctors, and regulators are all feeling the pressure of a system that is moving faster than the law.
When the AI Algorithm Knows Your Body Better Than You Do
Across Europe, deep learning models are being integrated into hospital systems with a level of enthusiasm that radiologists might find either flattering or mildly alarming.
Diagnostic tools powered by artificial intelligence are now capable of interpreting complex medical imaging data such as X-rays, CT scans, and MRIs, often identifying clinically relevant patterns with impressive accuracy.
This is particularly relevant in the fields of oncology, cardiology, and pulmonology, where detection and prediction models are already being used to support early diagnosis and clinical decision-making.
Findings from a recent European Commission Joint Research Centre (JRC) report paint a very direct picture.
AI tools used for triage and prioritisation in radiology are already in deployment across several member states.
These systems analyse imaging results, flag urgent cases, and even suggest probable diagnoses.
In some countries, AI is being used to optimise patient pathways, assess risk in stroke cases, and provide decision support in breast cancer screening programmes.
The underlying data sources are extensive, involving structured datasets, imaging records, and anonymised patient files.
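To make that workflow concrete, here is a minimal, purely illustrative sketch of how a triage step of this kind might sit in a hospital pipeline. Every name in it (the `urgency_score` field, the threshold, the queue labels) is a hypothetical assumption for illustration, not a description of any deployed system or of anything in the JRC report.

```python
from dataclasses import dataclass

# Hypothetical threshold above which a study is escalated for priority reading.
# A real deployment would have to validate any such cut-off clinically.
URGENT_THRESHOLD = 0.85

@dataclass
class ImagingStudy:
    study_id: str
    modality: str          # e.g. "CT", "MRI", "X-ray"
    urgency_score: float   # model's estimated probability of an urgent finding

def triage(study: ImagingStudy) -> str:
    """Route a study to a reading queue based on the model's urgency score.

    The model only suggests a priority; a clinician still reads every study.
    """
    if study.urgency_score >= URGENT_THRESHOLD:
        return "priority_read"   # flagged for immediate human review
    return "routine_read"        # standard worklist, still human-reviewed

if __name__ == "__main__":
    study = ImagingStudy(study_id="CT-0001", modality="CT", urgency_score=0.91)
    print(triage(study))  # -> "priority_read"
```

Even in this toy version, the legal questions are visible: the threshold, the score, and the routing rule all shape patient outcomes, yet none of them appears in anything the patient consents to.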

The effectiveness of these models depends on continuous access to diverse data, which raises immediate legal concerns about data protection, secondary use of health data, and whether patients have any genuine choice about their data being used by the AI.
EU law is struggling to keep pace with the sophistication of these applications. Medical imaging data is considered sensitive personal data under the General Data Protection Regulation (GDPR), and any AI system processing it must meet strict compliance requirements.
Article 9 of the GDPR permits processing of such data under conditions such as explicit consent or substantial public interest, but how those principles are applied in AI-driven healthcare is far from straightforward.
Most patients are unaware that their data may be used to train diagnostic models that are neither transparent in their methodology nor subject to consistent human oversight.
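As a rough illustration of what an Article 9-aware data pipeline would have to do, the sketch below filters out records whose consent does not cover secondary use for model training. The field names and the boolean consent flags are hypothetical simplifications; a real system would record the specific lawful basis and its scope in far more detail.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class PatientRecord:
    record_id: str
    # Hypothetical consent flags. Consent to treatment alone does not
    # imply consent to secondary use of the data for model training.
    consented_to_care: bool
    consented_to_model_training: bool

def select_training_records(records: List[PatientRecord]) -> List[PatientRecord]:
    """Keep only records whose consent explicitly covers model training."""
    return [r for r in records if r.consented_to_model_training]

records = [
    PatientRecord("A1", consented_to_care=True, consented_to_model_training=True),
    PatientRecord("B2", consented_to_care=True, consented_to_model_training=False),
]
print([r.record_id for r in select_training_records(records)])  # -> ['A1']
```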
Furthermore, the Medical Devices Regulation (MDR) and the Artificial Intelligence Act (AI Act) are attempting to bring order to a chaotic system of approvals, certifications, and risk classifications.
Under the MDR, AI-based imaging software is likely to be classified as a high-risk device. This classification requires robust clinical evidence of safety and performance.
However, there is little clarity about how that performance should be measured when systems are continually learning and adapting.
Regulatory authorities are grappling with a scenario where the technology evolves faster than the documentation required to assess it.
The increased reliance on these systems creates significant accountability questions under medical law.
If a model incorrectly flags a tumour as benign and a patient is sent home, who is responsible for the misdiagnosis? Is it the developer of the AI, the hospital that implemented it, the clinician who relied on it, or the data protection officer who signed off on the initial privacy impact assessment?
Currently, there is no single regulatory mechanism that offers comprehensive answers to these scenarios.
The European Commission has recommended stronger testing standards, more transparent audit trails, and legal obligations for explainability.
The reality remains that most patients have no idea when an AI system has been used in their diagnosis or treatment plan.
This lack of transparency is a significant ethical and legal issue.
Trust in healthcare depends on openness, and legal frameworks must respond accordingly.
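One way to picture what a more transparent audit trail could look like in practice is a simple log entry recording what the model suggested, what the clinician decided, and whether the suggestion was accepted. This is a minimal sketch under assumed field names; it is not a prescribed format from the Commission, the MDR, or the AI Act.

```python
import json
from datetime import datetime, timezone

def log_ai_assisted_decision(patient_id: str, model_version: str,
                             model_output: str, clinician_id: str,
                             clinician_decision: str, accepted: bool) -> str:
    """Return a JSON audit entry for an AI-assisted clinical decision."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "patient_id": patient_id,
        "model_version": model_version,
        "model_output": model_output,
        "clinician_id": clinician_id,
        "clinician_decision": clinician_decision,
        "model_output_accepted": accepted,
    }
    return json.dumps(entry)

print(log_ai_assisted_decision("P-123", "lesion-detector-2.4",
                               "suspicious lesion, upper left lobe",
                               "DR-42", "order biopsy", accepted=True))
```

A record like this would also make it far easier to tell a patient, after the fact, whether and how an AI system influenced their diagnosis.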
Who Gets Sued When the AI Commits Medical Malpractice?
In the common scenario where AI systems assist in diagnosing medical conditions, legal responsibility has taken a backseat to technical innovation.
However, once the software provides an inaccurate diagnosis or overlooks a critical anomaly in a scan, the legal consequences are neither theoretical nor entertaining for the patient on the receiving end.
The issues are immediate, tangible, and as the European Commission’s Joint Research Centre rightly highlights, unresolved.
Several Member States are already deploying AI tools in diagnostic imaging, with systems analysing scans, prioritising patient cases, and even offering predictive suggestions to clinicians.
These systems are intended to support medical professionals, not replace their judgment, but when a model produces a flawed output and the human in the loop accepts it, legal clarity disappears rather rapidly.
The case becomes less about technological failure and more about who failed to act reasonably under the circumstances.
Medical malpractice law is built on the assumption of human error.
The introduction of software with decision-making capabilities requires a rethinking of how liability is allocated.
For instance, if an AI system used in breast cancer screening underestimates a tumour’s severity, the patient may pursue litigation for delayed treatment, but pinpointing the legally responsible party is far from straightforward.
The clinician might claim the tool influenced their interpretation.
The hospital may argue the system was CE-marked and thus presumed safe.
The manufacturer might say the clinician was supposed to exercise independent judgment.
Under the current EU framework, AI used in diagnostics is generally regulated under the Medical Devices Regulation (MDR), which focuses heavily on safety and performance requirements.
AI systems considered high risk will also fall under the scope of the AI Act.
While these laws introduce pre-market obligations and post-market surveillance duties, they do not create clear paths for injured patients seeking redress.
The patient cannot sue the software itself. Litigation will inevitably be directed at the parties closest to the clinical decision.
One key fact from the JRC report is that legal systems across the EU currently have no harmonised approach to determining accountability for harm caused by AI in healthcare.

Some countries lean towards strict liability, while others maintain fault-based models. Few have case law directly addressing misdiagnosis involving machine learning systems.
In practice, this results in uncertainty for both patients and practitioners. Insurance providers are already factoring these risks into coverage plans, but their capacity to absorb the legal ambiguities depends on whether courts begin to produce consistent judgments in this field.
The European Commission has proposed a Product Liability Directive revision and a new AI Liability Directive to address these gaps.
If adopted, these measures would offer some relief to patients by lowering the burden of proof and allowing fault to be inferred more easily in cases involving opaque AI systems.
However, the proposed directives do not displace national tort laws.
Instead, they layer additional procedural rules without resolving the fundamental issue of attribution of responsibility in a multidisciplinary clinical setting.
Ultimately, without clearer definitions of professional duty, technical accountability, and institutional liability, disputes involving AI-assisted diagnoses will continue to create friction in healthcare delivery.
Legal systems must evolve to handle the reality that patients harmed by an AI-influenced decision will still seek someone with a license, a title, or a corporate letterhead to answer for it.
That wraps up this newsletter. If you have thoughts, observations, or mildly panicked questions about AI, consent, or liability in medical imaging, we would love to hear them in the comments below. Your insights are welcome and may even make it into the next newsletter.
Ah yes, the problem of consent.
As with anything, but especially medical AI, outputs need to be checked and rechecked before any life-altering medical decision is made. If there were a medical lawsuit, having AI in the mix would definitely make an already complicated situation even more so.