Meta Platforms, the parent company of Facebook and Instagram, has announced that it will begin using public posts and user interactions with its AI services to train its artificial intelligence models in the European Union. This decision follows the company’s delayed roll-out of Meta AI in Europe due to regulatory concerns surrounding data privacy and protection.
According to Meta, only public content shared by adults, such as posts and comments, along with queries and inputs to its AI tools, will be used for training. Crucially, private messages and data from users under 18 will remain excluded. The company emphasized that transparency and user choice are key elements of this initiative.
To align with the EU’s strict privacy regulations, Meta will issue notifications across Facebook, Instagram, and WhatsApp, informing users about the data usage. These alerts will include links to opt-out forms, allowing individuals to object to having their content used for AI training.
The move comes after Meta paused its initial AI model launch in Europe in June 2024. This followed concerns raised by Ireland’s Data Protection Commission (DPC), which advised the company to delay the initiative. Meta also faced strong criticism from privacy advocacy group NOYB, which called on European regulators to intervene and prevent the use of social media content for AI training.
While Meta launched its AI tools in the United States in 2023 without major setbacks, its European expansion has been more complex, highlighting the region’s robust data protection framework. The European Commission has not yet commented on Meta’s latest decision.
As AI continues to reshape digital interactions, Meta’s efforts in Europe underscore the tension between innovation and individual privacy rights, raising fresh debates about consent, transparency, and ethical data usage in the era of artificial intelligence.