Ethics group says OpenAI GPT-4 “is biased, deceptive, and a risk to privacy and public safety,” asks FTC to stop new OpenAI GPT releases
On Tuesday, Elon Musk joined other technology leaders in calling for a pause on training AI systems more capable than GPT-4, citing “risks to society.” In an open letter signed by Musk and Apple co-founder Steve Wozniak, the signatories call for a six-month pause in the development of systems more powerful than OpenAI’s newly launched GPT-4.
Just two days later, OpenAI faces new criticism from the nonprofit research group Center for AI and Digital Policy (CAIDP). In a complaint filed with the Federal Trade Commission (FTC) on Thursday, the advocacy group accused OpenAI of violating Section 5 of the FTC Act, which prohibits unfair and deceptive business practices, as well as the agency’s guidance for AI products.
The group calls GPT-4 “biased, deceptive, and a risk to privacy and public safety,” arguing in its complaint that the large language model fails to meet the FTC’s standard for AI to be “transparent, explainable, fair, and empirically sound while fostering accountability.”
“The FTC has a clear responsibility to investigate and prohibit unfair and deceptive trade practices. We believe that the FTC should look closely at OpenAI and GPT-4,” Marc Rotenberg, president of CAIDP and a veteran privacy advocate, said in a statement on the group’s website.
Rotenberg was also one of the more than 1,000 tech leaders who signed Tuesday’s letter urging a pause in AI experiments.
The group urges the agency “to open an investigation into OpenAI, enjoin further commercial releases of GPT-4, and ensure the establishment of necessary guardrails to protect consumers, businesses, and the commercial marketplace.”
ChatGPT has grown to millions of users within just a few months of its launch. Two weeks ago, OpenAI released the latest version of its flagship large language model, GPT-4, which the company claimed can score higher than 90% of human test takers on the SAT. OpenAI said that, unlike its predecessors, GPT-4 is a large multimodal model that can solve difficult problems with greater accuracy, calling it the company’s most advanced system to date, one that produces safer and more useful responses.