The Higher Regional Court of Cologne has dismissed an application by the Consumer Protection Organization of North Rhine-Westphalia (Verbraucherzentrale NRW) that sought to prevent Meta from using public posts from Facebook and Instagram users for AI training purposes.
In its May 23 ruling, the court concluded that Meta is 'pursuing a legitimate end by using the data to train artificial intelligence systems' and that feeding user data into AI training systems is permissible 'even without the consent of those affected.' The court found that Meta's interest in processing the data outweighed the interests of data subjects, in part because the company had implemented effective measures to mitigate the interference with users' rights.
Meta plans to begin using public content from adult EU users across its platforms starting May 27, 2025. The company has provided users with the option to opt out and has stated that content from users under 18 will not be used for training purposes. The Irish Data Protection Commission, Meta's lead supervisory authority in the EU, previously gave a positive assessment of the company's plans after Meta addressed regulators' concerns through improved transparency notices and easier-to-use objection forms.
However, not all regulatory bodies are aligned. The Hamburg Data Protection Commissioner has initiated urgent proceedings against Meta, seeking to prohibit the company from using the data of German data subjects for AI training for at least three months. The European privacy advocacy group NOYB, led by Max Schrems, has also been critical of Meta's approach, arguing that the company should use an opt-in rather than an opt-out model for data collection.
This German ruling stands in contrast to ongoing legal challenges Meta faces in the United States, where U.S. District Judge Vince Chhabria has expressed skepticism about Meta's fair-use defense for using copyrighted materials to train its Llama AI model. In that case, authors including Junot Díaz and Sarah Silverman allege that Meta used pirated copies of their books without permission, and Chhabria has warned that AI systems could potentially 'obliterate' the market for original creative works.
The contrasting legal outcomes highlight the evolving and complex regulatory landscape surrounding AI training data across different jurisdictions, as courts and regulators grapple with balancing technological innovation against privacy concerns and intellectual property rights.