Case Report: Meta Can Legally Use Your Instagram and Facebook Posts to Train Its AI (Consumer Advice Center NRW v. Meta)
A court in Europe just handed Meta a legal win, ruling that your public Facebook and Instagram posts can be used to train its next-generation AI.
Meta just got the green light from a German court to use public user posts to train its AI systems. This decision is a big deal, not just for tech companies, but for anyone sharing content on Facebook and Instagram. In this newsletter, we break down what the court ruled, why it matters for digital rights in Europe, and what it could mean for the future of AI and privacy.
🧠 AI Training and Your Facebook Posts? What This Case Was All About
Meta Platforms Ireland, the company behind Facebook and Instagram in Europe, has recently been at the centre of a legal case in Germany.
The Consumer Advice Center of North Rhine-Westphalia challenged Meta’s plan to use publicly available user data for training artificial intelligence systems. This includes information such as names, profile pictures, posts, and comments that users have made visible to the public on their accounts.
The consumer group raised objections on the basis of the EU General Data Protection Regulation (GDPR) and the Digital Markets Act (DMA).
The primary concern was that Meta’s updated privacy policy did not provide users with a clear, active choice to consent before their public data was used for AI training. They argued that this kind of data processing requires users to explicitly agree in advance. The group also claimed that the use of the data in this way could not be justified by Meta under the legal concept of "legitimate interest".
The Cologne Higher Regional Court reviewed the complaint and the legal documents provided by both parties.
The court found that Meta’s updated terms of service did not violate the GDPR or the DMA. It considered that public data, which users have deliberately made visible, can be used for AI training if specific legal requirements are met.
The judgment highlighted that Meta’s privacy policy update and its explanation of data processing were sufficiently transparent.
The court accepted Meta’s position that the use of public user data for AI training could be based on a legitimate interest, provided that appropriate safeguards and user rights were respected.
Importantly, the court noted that users had the ability to object to this data use and that Meta had provided a clear way to do so.
Meta had also paused its data processing for AI training in Germany pending the outcome of the case, demonstrating its responsiveness to the ongoing legal proceedings.
This ruling marks one of the first decisions in the EU that tests how data protection laws apply to large technology companies using public data to develop artificial intelligence. It also reflects how national courts are beginning to interpret the rules under the DMA.
⚖️ The Lawsuit: Who Sued Meta and Why
On 23 May 2025, the Cologne Higher Regional Court dismissed an urgent application by the Consumer Advice Center of North Rhine-Westphalia (Verbraucherzentrale NRW) against Meta Platforms Ireland Limited.
The consumer group sought to prevent Meta from using publicly available user data from Facebook and Instagram to train its artificial intelligence systems.
The case arose after Meta announced in April 2025 that, starting 27 May, it would use public content from adult users in the European Union for AI training purposes. This includes posts, profile information, and interactions that users have made publicly accessible. Meta stated that users would be notified of this data usage and offered the ability to opt out.
Verbraucherzentrale NRW filed for an injunction on 12 May, arguing that Meta's plan violated the General Data Protection Regulation (GDPR) and the Digital Markets Act (DMA).
They contended that Meta had not adequately informed users or obtained proper consent before using their content for AI training, and they challenged the legal basis Meta relied on for processing the data under both regulations.
The court, after a preliminary assessment, found that Meta's practices were lawful under the GDPR.
It concluded that Meta had a legitimate interest in processing publicly available data for developing AI systems, and that this interest could be balanced with users’ data protection rights.
The court emphasized that the data used was only content that users had already made public, and that Meta provided clear information through its privacy policy. The judges did not find a violation of transparency or consent requirements, as the affected data was not sensitive and had already been made accessible to the public by the users themselves.
Regarding the DMA, the court determined that Meta's use of public data for AI training did not breach the obligations of gatekeeper platforms under the act. No unfair self-preferencing or misuse of user data was identified. The DMA obligations were not seen as overriding Meta’s rights under the GDPR when processing public content for technological development.
The Consumer Advice Center expressed disappointment over the decision, calling the situation highly problematic and pointing to ongoing uncertainty about the legality of using data in this way.
📌 The court's decision allows Meta to proceed with its AI training practices within the European Union, using publicly available user data, provided users are informed and given the opportunity to opt out.
🇩🇪 The Cologne Ruling: Meta Did Not Break the Rules
In its ruling, the Cologne Higher Regional Court concluded that Meta had a legitimate interest in using public user content to improve and train its AI systems. The court assessed the company’s privacy policy updates and determined that the information provided to users was sufficiently clear.
Users were informed that their public content might be used for AI development. The court accepted Meta’s argument that the purpose of improving AI capabilities qualifies as a legitimate interest under the GDPR.
The court also found no breach of the DMA. It stated that Meta’s data usage did not involve self-preferencing or unfair treatment of other businesses, which are the key concerns under the DMA. The judges underlined that the company did not gain an undue advantage by using public user data, and the processing was not seen as abusive or excessive.
For users, the outcome means that as long as content is made public on Meta’s platforms, it can be used for the company’s AI training purposes. This includes posts and photos shared with a public visibility setting. Users still retain control over their data by adjusting their privacy settings to limit visibility.
📢 The ruling reinforces the idea that companies can use public data for AI development when they meet transparency requirements and operate within legal boundaries.
🧩 Why This Decision Matters Beyond Germany
The ruling by the Cologne Higher Regional Court on Meta’s use of public user data for artificial intelligence training reflects broader legal and regulatory trends in Europe and may shape how other courts and authorities interpret similar cases.
The decision is likely to influence discussions on the boundaries of digital rights, corporate responsibility, and data governance across the European Union.
📌 Here is why the outcome carries wider importance:
Legal clarity for AI developers:
The court confirmed that companies like Meta can rely on legitimate interest as a legal basis for processing publicly available data to develop AI, as long as users are informed and the data is not sensitive. This provides guidance for other companies developing AI technologies in the EU.

Interpretation of GDPR and DMA:
The court explicitly addressed how Meta’s data practices fit within both the General Data Protection Regulation (GDPR) and the Digital Markets Act (DMA). It found no breach of DMA rules concerning gatekeeper obligations, and it accepted that the GDPR allows the use of public data in certain contexts. This may help other digital platforms understand the balance between data innovation and user rights.

Support for responsible AI development:
By upholding the legitimacy of AI training using public data, the court indirectly supports continued innovation in artificial intelligence, provided that user transparency and safeguards are respected.

Impact on future litigation:
Consumer rights organisations across Europe are closely watching how courts respond to corporate data practices. The ruling in this case may affect how other national courts assess similar lawsuits against tech companies. It might also influence enforcement decisions by data protection authorities.
📖 The Cologne court's press release explains that the judges carefully reviewed the transparency of Meta’s data usage disclosures and found that the company provided users with sufficient information. The court also considered the proportionality and scope of data processing.
If you post something publicly on Facebook or Instagram, Meta can now use it to help teach its AI systems.
A court just approved this, saying it is fine because the content is already out there and Meta gave users adequate notice!
It does not include private messages between app users or hidden/private posts. Still, this ruling will very likely have ripple effects on how public social media content is reused across the internet.
Subscribe for more updates.