Meta to start labeling AI-generated images across platforms in transparency push
In a move toward greater transparency, Meta announced on Tuesday that it will begin identifying and labeling images created by other companies’ artificial intelligence services. The plan, outlined by Meta’s top policy executive, relies on invisible markers built into the image files and will roll out over the coming months.
In a blog post, Meta’s president of global affairs Nick Clegg said the labels will be applied to content shared on Meta’s platforms, including Facebook, Instagram, and Threads, to alert users that these images, which often resemble real photos, are actually digitally generated.
“As the difference between human and synthetic content gets blurred, people want to know where the boundary lies. People are often coming across AI-generated content for the first time and our users have told us they appreciate transparency around this new technology. So it’s important that we help people know when photorealistic content they’re seeing has been created using AI. We do that by applying ‘Imagined with AI’ labels to photorealistic images created using our Meta AI feature, but we want to be able to do this with content created with other companies’ tools too,” Clegg said.
Meta already applies labels to content generated using its own AI tools, and now it plans to extend this to images created using services from other companies such as OpenAI, Microsoft, Adobe, Midjourney, Shutterstock, and Alphabet’s Google.
Clegg added that Meta has been collaborating with industry partners to establish common technical standards that signal when content has been generated using AI.

“That’s why we’ve been working with industry partners to align on common technical standards that signal when a piece of content has been created using AI. Being able to detect these signals will make it possible for us to label AI-generated images that users post to Facebook, Instagram and Threads,” he said.
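Meta has not named the standards or published its detection pipeline, but the widely adopted industry mechanisms are metadata-based: the IPTC “digital source type” vocabulary and the C2PA provenance specification both embed a machine-readable marker in the image file itself. As a rough, purely illustrative sketch (assuming the marker is the standard IPTC URI; real systems would also parse C2PA manifests and check invisible watermarks, which this does not attempt), a minimal check might look like this:

```python
# Hypothetical sketch: scan an image file for the IPTC
# "trainedAlgorithmicMedia" digital source type URI, the standard
# metadata value used to mark AI-generated imagery. This is NOT
# Meta's actual detection logic, which has not been published.

# The real IPTC NewsCodes URI for AI-generated media.
AI_SOURCE_TYPE = b"http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"

def has_iptc_ai_marker(path: str) -> bool:
    """Return True if the file's raw bytes contain the IPTC
    AI-generated source-type URI (a crude, format-agnostic check
    that works because the URI is stored as plain text in XMP)."""
    with open(path, "rb") as f:
        data = f.read()
    return AI_SOURCE_TYPE in data

if __name__ == "__main__":
    import sys
    for image_path in sys.argv[1:]:
        status = "AI marker found" if has_iptc_ai_marker(image_path) else "no marker"
        print(f"{image_path}: {status}")
```

A byte-level scan like this is deliberately simplistic; it illustrates why the approach depends on generators actually embedding the marker, and why stripped or re-encoded images can evade it.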
Meta said it is currently developing this capability and will begin applying the labels in all supported languages on each app within the next few months. The company plans to continue the approach through the next year, particularly around significant global elections, a period it expects will yield insight into how people create and share AI-generated content, what level of transparency users prefer, and how the technologies progress. Meta said those findings will inform industry best practices and guide its future strategies.
The announcement offers an early look at the emerging system of standards technology companies are developing to address the potential risks of generative AI, which can produce fake yet convincing content in response to simple prompts.
This approach draws from a framework established over the past decade by some of the same companies to coordinate the removal of prohibited content across various platforms, including content depicting violence and exploitation.
In an interview with Reuters, Clegg expressed confidence in the ability of companies to reliably label AI-generated images, although he acknowledged that marking audio and video content posed greater challenges and was still in development.
“Even though the technology is not yet fully mature, particularly when it comes to audio and video, the hope is that we can create a sense of momentum and incentive for the rest of the industry to follow,” Clegg said.
In the meantime, Meta plans to require individuals to label their own altered audio and video content, with penalties for non-compliance. However, the specifics of these penalties were not disclosed.
Clegg said there is no viable mechanism for labeling text generated by AI tools like ChatGPT, remarking that “that ship has sailed.”
It remains unclear whether Meta will apply the labels to generative AI content shared on WhatsApp, its encrypted messaging service.
Meta’s independent oversight board recently criticized the company’s policy on misleadingly doctored videos, advocating for labeling instead of removal. Clegg acknowledged the validity of these critiques, indicating that Meta’s current policy is inadequate given the proliferation of synthetic and hybrid content.
He also cited the new labeling initiative as evidence that Meta is already moving in the direction the board recommended.
Meanwhile, Meta is not the first company to try to identify AI-generated content. Early last year, OpenAI, the maker of the popular ChatGPT, launched a free detection tool called the AI classifier to help educators, journalists, and researchers detect AI-generated text. The tool was shut down on July 20, 2023, due to its low accuracy rate.
“As of July 20, 2023, the AI classifier is no longer available due to its low rate of accuracy. We are working to incorporate feedback and are currently researching more effective provenance techniques for text, and have made a commitment to develop and deploy mechanisms that enable users to understand if audio or visual content is AI-generated,” OpenAI wrote in an update to the blog post that originally introduced the AI classifier.