Italy’s data protection authority, Garante, has fined OpenAI €15 million (approximately $15.58 million) for privacy violations linked to its generative AI platform, ChatGPT.
The investigation concluded that OpenAI had processed users’ personal data to train its AI models without an adequate legal basis, thereby violating European privacy laws. Additionally, Garante criticized the company for failing to provide sufficient transparency about how it collected and used user data.
Another significant issue identified by the regulator was the lack of a robust age verification system. This deficiency exposed children under 13 to potentially inappropriate AI-generated content, raising concerns about safeguarding vulnerable users.
As part of its ruling, Garante has ordered OpenAI to launch a six-month public awareness campaign across Italian media. The campaign aims to inform the public about how ChatGPT operates, particularly regarding its use of personal data.
OpenAI, however, has described the fine as “disproportionate” and intends to appeal the decision.
This is not the first time Italy has taken action against OpenAI. In 2023, ChatGPT was temporarily banned in the country over alleged violations of the European Union’s General Data Protection Regulation (GDPR). The platform was reinstated after OpenAI introduced measures allowing users to opt out of data processing for AI training.