OpenAI co-founder Sam Altman confirms ‘ChatGPT has shortcomings around bias’
“We know that ChatGPT has shortcomings around bias, and are working to improve it.”
Yesterday, we wrote about OpenAI after some ChatGPT users reported that the organization co-founded by Elon Musk and Sam Altman had turned the chatbot into a political tool and was using it to promote transgenderism. “The damage done to the credibility of AI by ChatGPT engineers building in political bias is irreparable,” one ChatGPT user complained. “AI says to affirm transgender children,” another ChatGPT user said.
Just a few hours after our story, OpenAI co-founder Sam Altman took to social media to address the issue, along with the hate directed toward OpenAI engineers. In a Twitter thread, Altman admitted that ChatGPT currently has some flaws around bias, which is not uncommon for any new technology, and added that the company is working to improve the chatbot. But he didn’t take kindly to the attacks on OpenAI engineers, calling them appalling.
“We know that ChatGPT has shortcomings around bias, and are working to improve it. but directing hate at individual OAI employees because of this is appalling. hit me all you want, but attacking other people here doesn’t help the field advance, and the people doing it know that.”
— Sam Altman (@sama) February 1, 2023
Altman also said that OpenAI is “working to improve the default settings to be more neutral, and also to empower users to get our systems to behave in accordance with their individual preferences within broad bounds. this is harder than it sounds and will take us some time to get right.”
everyone on the openai team is exceptional and cares deeply. i am very grateful for all of their contributions.❤️
— Sam Altman (@sama) February 1, 2023
OpenAI is not the first tech company to face the AI bias problem. Back in 2021, two Google engineers resigned over the firing of Timnit Gebru, an Eritrean-born computer scientist and AI ethics researcher who works on algorithmic bias and data mining.
In 2020, Gebru co-authored a paper arguing that Google and other big tech companies are “institutionally racist.” The paper focused on issues similar to those now plaguing ChatGPT: how AI language models carry a structural bias against women and people belonging to ethnic minorities.
We first covered OpenAI about three years ago, after Microsoft invested $1 billion in the organization. As part of the multi-year agreement, the two companies agreed to jointly develop supercomputing technologies, and OpenAI agreed to run its services exclusively on Microsoft’s cloud.
Through the partnership, the two companies aim to accelerate breakthroughs in AI and power OpenAI’s efforts to create artificial general intelligence (AGI), with Microsoft and OpenAI jointly building new Azure AI supercomputing technologies. The resulting enhancements to the Azure platform are also meant to help developers build the next generation of AI applications.
When asked recently whether Microsoft viewed ChatGPT technology as experimental or strategic, Microsoft President Brad Smith told Reuters that AI has progressed faster than many predicted.
“We’re going to see advances in 2023 that people two years ago would have expected in 2033. It’s going to be extremely important not just for Microsoft’s future, but for everyone’s future,” he said in an interview this week.
OpenAI was founded in late 2015 by Elon Musk and Sam Altman as a non-profit conducting research in artificial intelligence (AI), with the goal of promoting and developing friendly AI in such a way as to benefit humanity as a whole. OpenAI said it aims to “freely collaborate” with other institutions and researchers by making its patents and research open to the public. Both founders were motivated in part by concerns about existential risk from artificial general intelligence.
Overall, we believe Altman handled the criticism of ChatGPT’s bias well. His willingness to acknowledge the bias in ChatGPT’s outputs is a step in the right direction.