Italy bans OpenAI’s ChatGPT over privacy concerns
So it begins! Earlier this week, Elon Musk and other tech titans called for a pause on training AI systems more powerful than GPT-4, citing “risks to society.” In an open letter signed by Musk, Apple co-founder Steve Wozniak, and more than a thousand other tech leaders and researchers, the group called for a six-month pause on the development of systems more powerful than OpenAI’s newly launched GPT-4.
Now, Italy is banning the use of OpenAI’s ChatGPT. On Friday, Italy’s data protection authority announced a temporary ban on the chatbot over alleged privacy violations, saying the ban will remain in effect until OpenAI complies with the European Union’s privacy laws.
In a statement posted on its website, the Italian National Authority for Personal Data Protection said ChatGPT violated the EU’s General Data Protection Regulation (GDPR) in several ways, including unlawfully processing people’s data and failing to prevent minors from accessing the AI chatbot.
“No way for ChatGPT to continue processing data in breach of privacy laws. The Italian SA imposed an immediate temporary limitation on the processing of Italian users’ data by OpenAI, the US-based company developing and managing the platform. An inquiry into the facts of the case was initiated as well. A data breach affecting ChatGPT users’ conversations and information on payments by subscribers to the service had been reported on 20 March. ChatGPT is the best known among relational AI platforms that are capable to emulate and elaborate human conversations.”
OpenAI has 20 days to respond to the order or face fines, the Italian privacy regulator warned.
“OpenAI, which does not have an office in the Union but has designated a representative in the European Economic Area, must communicate within 20 days the measures undertaken in implementation of what is requested by the Guarantor, under penalty of a fine of up to 20 million euros or up to 4% of the annual global turnover.”
The ban comes just a day after the nonprofit research group Center for AI and Digital Policy (CAIDP) filed a complaint with the Federal Trade Commission (FTC) accusing OpenAI of violating Section 5 of the FTC Act, which prohibits unfair and deceptive business practices, as well as the agency’s guidance for AI products. The group said OpenAI’s GPT-4 “is biased, deceptive, and a risk to privacy and public safety” and asked the FTC to halt new OpenAI GPT releases.
ChatGPT has attracted millions of users within just a few months of its launch. Just two weeks ago, OpenAI launched the latest version of its primary large language model, GPT-4, which the company claimed can outperform 90% of humans on the SAT. OpenAI said that, unlike its predecessors, GPT-4 is a large multimodal model that can solve difficult problems with greater accuracy, and called it the company’s most advanced system to date, producing safer and more useful responses.