Microsoft AI tool creates violent, explicit images and ignores copyrights, engineer warns
A warning raised inside Microsoft has once again brought the ethical use of artificial intelligence (AI) into the spotlight. According to an exclusive report by CNBC, a Microsoft engineer raised red flags in December 2023 about Copilot Designer, the company’s AI-driven image-generation tool. Launched earlier that year, Copilot Designer enables users to generate images from text prompts, much like OpenAI’s DALL-E.
Shane Jones, an AI engineer at Microsoft, stumbled upon unsettling discoveries during internal testing. He found that the tool could produce potentially harmful content, including violent and sexually explicit imagery, upon being fed specific keywords. Moreover, concerns arose regarding potential copyright violations, with the tool generating images featuring characters from Disney properties such as Elsa and Mickey Mouse.
Late one December night, Jones was taken aback by the images appearing on his screen. He had been experimenting with Copilot Designer, the AI image generator powered by OpenAI’s technology, since its launch in March 2023. Similar to OpenAI’s DALL-E, users input text prompts to conjure up pictures, fostering boundless creativity.
For the past month, Jones had been diligently testing the product for vulnerabilities, a process known as red-teaming. During this period, he observed the tool generating images that contravened Microsoft’s well-established principles of responsible AI.
Acknowledging the gravity of the situation, Microsoft reportedly initiated an investigation to address Jones’ concerns. This incident underscores the persistent challenges and ethical dilemmas associated with the development and conscientious use of artificial intelligence, particularly in realms like image generation, CNBC reported.
The AI service depicted demons and monsters alongside themes related to abortion rights, teenagers wielding assault rifles, sexualized portrayals of women in violent scenarios, and underage drinking and drug use. CNBC was able to replicate these scenes this week using the Copilot tool, originally named Bing Image Creator.
“It was truly an eye-opening moment,” Jones disclosed in an interview with CNBC. “It marked the realization that this model is not as safe as we thought.”
With six years of tenure at Microsoft, Jones currently serves as a principal software engineering manager at the company’s headquarters in Redmond, Washington. While he does not officially work on Copilot, Jones, as a red teamer, is part of a cohort of employees and external contributors who voluntarily scrutinize the company’s AI technology in their spare time to identify potential issues.
Jones was sufficiently alarmed by his discoveries to begin reporting his findings internally in December. Although Microsoft acknowledged his concerns, it declined to remove the product from the market. Instead, the company referred Jones to OpenAI, and when no response was forthcoming, he took to LinkedIn with an open letter urging the startup’s board to conduct an investigation and temporarily suspend DALL-E 3, the latest iteration of the AI model.
“In the past three months, I’ve consistently urged Microsoft to withdraw Copilot Designer from public usage until stronger safeguards are established,” Jones wrote in a subsequent letter to Federal Trade Commission Chair Lina Khan. He emphasized that Microsoft has “rejected” that recommendation. Consequently, he is now advocating for disclosures to be added to the product and for its rating in Google’s Android app store to be changed to indicate that it is suitable only for mature audiences.
“Once again, they have failed to enact these modifications and persist in promoting the product as suitable for ‘Anyone. Anywhere. Any Device,'” he lamented. Jones pointed out that the risk “was recognized by both Microsoft and OpenAI prior to the AI model’s public launch last October.”
“We are committed to addressing any and all concerns employees have in accordance with our company policies, and appreciate employee efforts in studying and testing our latest technology to further enhance its safety,” a Microsoft spokesperson told CNBC. “When it comes to safety bypasses or concerns that could have a potential impact on our services or our partners, we have established robust internal reporting channels to properly investigate and remediate any issues, which we encourage employees to utilize so we can appropriately validate and test their concerns.”