OpenAI fined €15 million for using personal data to train ChatGPT without consent
Since its launch just over two years ago, OpenAI’s ChatGPT has attracted hundreds of millions of users, with some 300 million now engaging with the chatbot weekly. But as its popularity has grown, so have concerns about how personal data is used. An investigation by Italy’s privacy watchdog found that OpenAI trained ChatGPT on users’ personal data without proper consent, sparking significant ethical debate over how AI companies handle the information they collect.
Italy’s data protection authority on Friday announced a €15 million ($15.58 million) fine for OpenAI, concluding an inquiry into the company’s use of personal data to train ChatGPT. According to the regulator, OpenAI lacked a valid legal basis for processing that data and failed to provide adequate transparency about its practices, Reuters reported.
This isn’t the first time OpenAI has faced scrutiny in Italy. Nearly a year ago, the same authority raised similar concerns, accusing the Microsoft-backed AI startup of breaching privacy laws. OpenAI was given 30 days to respond, and while some adjustments were made, questions lingered about the platform’s compliance.
OpenAI Calls the Fine “Disproportionate” and Plans to Appeal
OpenAI has criticized the fine as “disproportionate” and announced plans to appeal. The company also pointed out that the penalty far exceeds the revenue it generated in Italy during the relevant period, arguing that the decision could hinder the country’s progress in AI adoption.
The investigation also highlighted gaps in safeguarding younger users. OpenAI was found to lack proper age verification measures, potentially exposing children under 13 to unsuitable AI-generated content. As part of the resolution, OpenAI is required to run a six-month media campaign in Italy to educate the public about its data collection practices.
A History of Scrutiny in Italy
Italy’s Garante, the country’s data protection authority and one of the European Union’s most proactive regulators on AI compliance, has taken a firm stance on data privacy. Last year, it temporarily banned ChatGPT over alleged breaches of EU privacy rules, allowing the service to return only after OpenAI addressed several concerns, including options for users to refuse consent for the use of their data, according to a report from Reuters.
“They’ve since recognized our industry-leading approach to protecting privacy in AI, yet this fine is nearly twenty times the revenue we made in Italy during the relevant period,” OpenAI said, adding that the Garante’s approach “undermines Italy’s AI ambitions.”
Despite OpenAI’s cooperative efforts, the regulator factored in the severity of the violations when determining the fine. Under the EU’s General Data Protection Regulation (GDPR), companies that violate privacy rules can face penalties of up to €20 million or 4% of their global annual turnover, whichever is higher.
While OpenAI has argued that its privacy measures are industry-leading, this case underscores the growing challenges AI companies face as they navigate complex data regulations and heightened scrutiny from regulators.