Google suspends image generation in Gemini AI due to inaccuracies in historical image depictions
Google’s latest venture into artificial intelligence, its Gemini AI tool, has grabbed headlines. However, just days after unveiling its much-anticipated image generation feature, the tech giant is already facing backlash from angry users. In response to mounting criticism, Alphabet’s Google took to social media platform X to announce a temporary halt to Gemini’s ability to generate images of people.
The controversy arose as users noticed a troubling pattern: while Gemini readily produced images of black and Asian individuals upon request, it seemed reluctant to do the same for white individuals. This apparent bias raised concerns about both representation and accuracy in AI-generated content.
In a swift move to address these growing concerns, Google announced the suspension of Gemini’s people-centric image generation capabilities, a decision it framed as part of its commitment to representation and historical accuracy in AI-generated content. Criticism had centered on the tool’s portrayal of historical figures, including the US Founding Fathers and Nazi-era German soldiers, whose depictions deviated from the historical record and highlighted the complexities inherent in AI-driven image generation.
“We’re already working to address recent issues with Gemini’s image generation feature. While we do this, we’re going to pause the image generation of people and will re-release an improved version soon,” Google conveyed via its statement on X.
The company also acknowledged the need for enhanced accuracy in historical depictions generated by Gemini. This acknowledgment underscores Google’s ongoing efforts to ensure that its AI technologies provide nuanced and responsible representations across all contexts.
https://twitter.com/FAH36912/status/1760602443912486923
Gemini didn’t stop there. The tool also labeled a black person as an example of a 1943 German soldier.
https://twitter.com/qorgidaddy/status/1760101193907360002
Gemini also generated images depicting non-white people as Vikings. “Come on,” X user Frank J. Fleming (@IMAO_) wrote on February 21, 2024.
But is it already too late? The controversy surrounding Gemini has brought its new AI technology under closer scrutiny. Initially hailed for its potential to diversify digital imagery, the tool encountered criticism for its misrepresentation of historical figures and events. Observers noted deviations from historical accuracy in the AI-generated images, sparking a broader conversation about the influence of artificial intelligence on our understanding of history and identity.
“It’s embarrassingly hard to get Google Gemini to acknowledge that white people exist,” an X user by the name of Deedy said in a post.
— Deedy (@deedydas) February 20, 2024
Another X user remarked: “I don’t understand how such a highly-anticipated AI product could be rolled out with such comical flaws.”
Following widespread criticism of the apparent racial bias in Gemini’s image generation feature, Jack Krawczyk, Senior Director of Product Management at Gemini Experiences, addressed the concerns raised by social media users. Responding to Krawczyk’s posts, Elon Musk also chimed in. “What a racist douchenozzle!” he wrote on X on February 22, 2024.