Higher Regional Court of Cologne backs Meta’s AI training: A landmark for innovation and data protection
8 December 2025

Public attention has focussed on Meta’s intention to use publicly shared content from European users of Meta products aged 18 or older to train its AI systems.

The use of large and diverse datasets that reflect Europe’s linguistic and cultural variety is intended to enable Meta’s AI to produce content that is mindful of cultural nuances and aligns with local values. To achieve this, Meta initially planned to start training its AI with data shared publicly by European users on Meta products (hereinafter called first party data) in June 2024. Following feedback from regulators, Meta voluntarily postponed the rollout. Since then, Meta has worked closely with the Irish Data Protection Commission (DPC) and other EU authorities. It further developed safeguards such as a right to object, and introduced technical de-identification measures to address concerns about user rights and data protection.

In April 2025, Meta announced that it would begin training its AI using public first party data on 27 May 2025, i.e. around one year later than initially planned. Shortly thereafter, the German consumer association Verbraucherzentrale NRW (VZNRW) filed for an injunction seeking to prevent Meta from using such data for AI training based on alleged violations of the General Data Protection Regulation (GDPR) and the Digital Markets Act (DMA).

The Higher Regional Court of Cologne convincingly dismissed the injunction in its entirety. In its 23 May 2025 decision (case number 15 UKl 2/25), it ruled that Meta’s approach complies with both the GDPR and the DMA. The Court’s reasoning aligns with the assessments of European data protection authorities, including the Irish DPC, and with the European Data Protection Board’s (EDPB) AI Opinion of December 2024.

Meta’s AI training based on legitimate interest is lawful

The Court confirmed that, based on the legal standards applicable in injunction proceedings, Meta’s use of public first party data for AI training is lawful under Article 6(1)(f) GDPR (“processing is necessary for the purposes of the legitimate interests pursued by the controller”). The Court applied the three-part test established by the European Court of Justice (CJEU): (i) there has to be a legitimate interest, (ii) the processing has to be necessary to achieve that interest, and (iii) on balance, the interests and fundamental rights and freedoms of the data subject must not override that legitimate interest. The Court found that Meta’s AI training meets all three conditions: the processing is necessary to pursue Meta’s legitimate economic interest in developing generative AI, and the rights of data subjects do not override that interest.

In its reasoning, the Court emphasised that the use of broad, representative datasets from the EU is essential for developing AI systems that are culturally and linguistically relevant. It considered this interest to be not only legitimate but also clearly and precisely articulated. The Court’s interpretation aligns with the position of the CJEU and the EDPB’s AI Opinion of December 2024, which confirmed that legitimate interest may serve as a valid legal basis for training AI models, provided that appropriate safeguards are implemented and the rights of data subjects are respected.

The Court also highlighted the importance of the safeguards implemented by Meta to mitigate the risk of data being processed without the data subject’s knowledge, explicitly deeming these safeguards appropriate and effective. The Court confirmed that users are provided with clear and accessible options to object to the use of their data or to adjust the visibility settings of their posts. Further, the Court specifically noted that there are no relevant technical hurdles to exercising the right to object, that the objection can be exercised easily by an average user, and that the six-week period following user notification is sufficient for making an informed decision.

No Article 9 GDPR violation

The Court rejected the VZNRW’s argument that the inclusion of special categories of personal data in AI training datasets violates Article 9(1) GDPR, which prohibits processing data revealing racial or ethnic origin, political opinions, religious or philosophical beliefs, health, or sexual orientation. It held that where users themselves publicly share such data (e.g. by posting it in a public profile or comment), this constitutes a clear, affirmative act of making the data accessible to the general public, thereby satisfying the exemption under Article 9(2)(e) GDPR (“processing relates to personal data which are manifestly made public by the data subject”).

In relation to special categories of personal data shared publicly by others, the Court relied on the principles established in the CJEU’s Google/Costeja ruling (C-131/12): In this decision, the CJEU clarified that data processing by search engines must be assessed in light of its impact on individual rights, especially when the data originates from public sources. The Court applied this reasoning to AI training datasets, concluding that Article 9 GDPR is not applicable in this context because the data processing is not targeted at specific individuals. According to the Court, the training of AI models is aimed at generating general patterns for probability-based outputs. In such cases, a data subject who wants their data excluded must specifically request its removal. The Court also highlighted the numerous protective measures implemented by Meta, including the removal of identifiable data (e.g. phone numbers and credit card numbers), the use of tokenised and unstructured datasets, and robust user controls such as the ability to object and adjust visibility settings. It further noted the low likelihood of individual identification due to the scale and structure of the training data.
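
By way of illustration only, a pre-training scrub of the kind the Court describes (stripping phone numbers and payment card numbers from public posts before they enter a training corpus) could look roughly like the following minimal sketch. The regex patterns and the deidentify helper are hypothetical simplifications for this article, not Meta’s actual pipeline:

```python
import re

# Hypothetical sketch of a regex-based de-identification pass: directly
# identifying data (phone numbers, payment card numbers) is replaced with
# placeholders before text enters a training corpus. Real de-identification
# pipelines are far more sophisticated; these patterns are assumptions.

PHONE_RE = re.compile(r"\+?\d[\d\s\-/()]{7,}\d")   # rough phone-number pattern (assumption)
CARD_RE = re.compile(r"\b(?:\d[ -]?){13,19}\b")    # rough payment-card pattern (assumption)

def deidentify(text: str) -> str:
    """Replace likely phone and card numbers with placeholder tokens."""
    text = CARD_RE.sub("[CARD]", text)    # scrub card-like digit runs first
    text = PHONE_RE.sub("[PHONE]", text)  # then scrub phone-like sequences
    return text

print(deidentify("Call me on +49 170 1234567, card 4111 1111 1111 1111."))
# -> Call me on [PHONE], card [CARD].
```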

Finally, the Court placed its interpretation within the broader framework of the EU Artificial Intelligence Act (AI Act). It stressed that an overly extensive interpretation of Article 9 GDPR would undermine the objectives of the AI Act, including the promotion of a human-centric approach to AI and the establishment of Europe as a global leader in trustworthy and ethical AI development. The Court pointed to Recital 105 of the EU AI Act, which acknowledges the need to train generative AI on large volumes of data, and concluded that the legislator did not consider such training to be fundamentally unlawful under the GDPR.

DMA claim rejected

The Court dismissed VZNRW’s claim under Article 5(2)(b) DMA, which provides that a “gatekeeper shall not […] combine personal data from the relevant core platform service with personal data from any further core platform services or from any other services provided by the gatekeeper or with personal data from third-party services”. It held that Meta’s use of data from different Meta products does not constitute a prohibited “combining” of data. The Court emphasised that the DMA provision in question prohibits the targeted linking of personal data from the same user across services, not the inclusion of de-identified data in an unstructured training dataset. In the Court’s view, Meta’s approach lacked the kind of user-specific data fusion that would trigger the prohibition under the DMA.

This interpretation aligns with the European Commission’s decision of April 2025 (C(2025) 2091), which also distinguished between structured, user-level data combinations and the use of aggregated or de-identified data for broader purposes such as AI training. The Court further clarified that the DMA does not impose a general ban on data aggregation. Rather, its purpose is to prevent anti-competitive personalisation practices – not to restrict the development of AI systems that rely on large-scale, non-targeted data inputs (i.e. data that is de-identified, not linked to specific individuals, and not structured around user profiles, but instead used in bulk to train AI systems without combining or targeting data from the same user across services).
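
To make the distinction concrete, here is a purely illustrative sketch; the schemas, field names and sample records are invented for this example. “Combining” in the sense of Article 5(2)(b) DMA would mean joining records from different services on a shared user identifier, whereas the training use described by the Court pools de-identified text in bulk with no such user-level key:

```python
# Invented example data: two hypothetical services with per-user records.
service_a = [{"user_id": 1, "likes": ["hiking"]}]
service_b = [{"user_id": 1, "groups": ["Cologne runners"]}]

# User-level "combining" in the sense of Article 5(2)(b) DMA: records from
# different services are joined on the same user identifier, building a
# richer profile of one individual.
profiles_b = {row["user_id"]: row for row in service_b}
combined = [{**row, **profiles_b.get(row["user_id"], {})} for row in service_a]
# combined == [{"user_id": 1, "likes": ["hiking"], "groups": ["Cologne runners"]}]

# By contrast, the training use the Court describes pools de-identified
# public text in bulk; no key links posts from the same user across services.
training_corpus = [
    "Great hike near Cologne today!",  # de-identified public post, service A
    "Anyone running on Sunday?",       # de-identified public post, service B
]
```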

Why this ruling matters: Legal clarity for AI in Europe

The decision of the Higher Regional Court of Cologne marks a pivotal moment in the legal treatment of AI training practices in the EU. It provides much-needed legal certainty by confirming that the use of publicly shared data for AI training (when accompanied by appropriate safeguards) is compatible with both the GDPR and the DMA.

The decision navigates the tension between data protection and AI training. It underscores the necessity of balancing these interests to ensure both robust data protection and technological advancement. Crucially, the Court affirms that companies can rely on legitimate interests according to Article 6(1)(f) GDPR as a valid legal basis for processing public first party user data for AI training. If a company seeks to process such personal data for AI training purposes and has implemented appropriate safeguards (including the user’s right to object), the company’s legitimate interests may take precedence. This legal clarity is particularly vital in the context of AI training, where large-scale data use is indispensable for fostering responsible technological development within the EU.

In the context of Article 9 GDPR, the Court takes a significant step by drawing on the principles established in the CJEU’s Google/Costeja ruling (C-131/12). The Court’s reasoning is compelling: the numerous protective measures implemented by Meta and the low likelihood of individual identification, given the scale and structure of the training datasets, significantly mitigate the impact on data protection rights. On this basis, prohibiting the data processing on the strength of an overly expansive interpretation of Article 9 GDPR would be disproportionate and would undermine the European Union’s ambition to lead in trustworthy and ethical AI development.

Many companies include personal data in AI training datasets, often without giving individuals a possibility to object to such training. A blanket ban on processing such data would make AI training in Europe virtually impossible and would, in turn, significantly weaken Europe as a technology hub. In sum, the decision sets a strong precedent for a balanced interpretation of data protection law: one that supports innovation while safeguarding fundamental rights.

Outlook

The Court’s decision arrives at a critical moment for AI governance in Europe. It offers a clear roadmap for companies seeking to develop AI responsibly within the EU, and reduces the legal uncertainty that has long surrounded the use of public data for training purposes. By engaging deeply with both the GDPR and the DMA, the Court has helped define a regulatory environment that is both rights-respecting and innovation-friendly. The decision is likely to influence how future claims are assessed and how regulators interpret the balance between data protection and technological advancement. Given the novel nature of the issues at hand and the fact that they are ultimately governed by EU law, the judgment and its reasoning are likely to be relevant beyond Germany, providing a valuable reference point for courts and authorities in other Member States where the same issues may arise. For legal professionals and regulators alike, the ruling represents an essential step toward a more coherent and forward-looking AI framework in the EU.

The European Commission’s recently published Digital Omnibus and Data Union Strategy also mark a step forward in providing legal clarity for AI model training in Europe. Recognising the indispensable role of large, high-quality datasets in building competitive AI models, the Commission has taken concrete measures to simplify and harmonise the regulatory landscape. This is particularly relevant for AI training, where the use of personal data, often at scale, is essential for technological advancement. For example, the Commission has clarified that bias detection and correction in AI models constitute a substantial public interest, justifying the processing of special categories of personal data under Article 9(2)(g) GDPR and the new Article 4a AI Act. This legal basis is now extended to providers and deployers of all AI systems and models, not just high-risk ones, provided appropriate safeguards are in place. In sum, recent developments confirm that Europe’s commitment to legal clarity and technological advancement is creating an AI landscape where responsible progress is not just possible, but actively supported by harmonised regulation and judicial guidance.

Team

Martin C. Mekat, Partner (Frankfurt am Main, Munich)
Lutz Riede, Partner (Vienna, Düsseldorf)
Alice Möller-Roth, Principal Associate (Munich)
Philipp Semmelmayer, Associate (Munich)