Meta Platforms has postponed the launch of its Meta AI models in Europe after the Irish privacy regulator directed it to delay its plan to use data from Facebook and Instagram users.
The US-based social media company announced the decision on Friday.
This move comes after a series of complaints and a call to action by advocacy group NOYB, urging data protection authorities in multiple European countries—including Austria, Belgium, France, Germany, Greece, Italy, Ireland, the Netherlands, Norway, Poland, and Spain—to intervene.
Reuters reports indicate that the primary concern is Meta’s intent to use personal data to train its artificial intelligence (AI) models without obtaining user consent, despite the company’s assertion that it would rely on publicly available and licensed online information.
On Friday, Meta revealed that the Irish Data Protection Commission (DPC) had requested a delay in training its large language models (LLMs) with public content shared by adult users on Facebook and Instagram.
The company described the Irish request as a setback for European innovation and competition in AI development.
The DPC, in response, welcomed Meta’s decision to pause the launch, noting that it followed intensive engagement with the regulator. The delay also gives Meta time to address similar concerns raised by Britain’s Information Commissioner’s Office (ICO).
The ICO expressed approval of Meta’s decision and stated it would continue to monitor major developers of generative AI, including Meta, to ensure that the information rights of UK users are protected.