Search engines will be adding ‘content warnings’ to AI-generated content, analysts predict
ChatGPT, the popular AI chatbot developed by OpenAI, took the world by storm when it launched, garnering over a million users within five days. The phenomenon has not only captivated a global audience but also inspired millions of people to generate more content than ever before.
However, beyond the glitter surrounding ChatGPT, a darker side of AI chatbots has come to light: the bots have been exploited to generate spam. A disconcerting report published by NewsGuard in May found that 49 websites had used AI tools like ChatGPT to churn out a slew of AI-generated news stories and blog posts.
Fast forward a few months, and the good times may be coming to an end for AI-generated content. A predictions report released on Tuesday by analyst firm CCS Insight suggests that search engines like Google are gearing up to crack down on AI-generated content. CCS Insight predicts that search engines will soon add “content warnings to alert users that material they are viewing from a certain web publisher is AI-generated” rather than created by human authors.
AI Content Warnings
In its annual compilation of key forecasts for the technology industry in 2024 and beyond, CCS Insight put forth a number of predictions about the future of AI, a technology that has drawn intense attention for both its potential and its challenges. As part of its one-hour online broadcast sessions scheduled for October 10 to 12, CCS Insight shared one of its predictions:
“A wave of AI-generated web articles with minimal scrutiny prompts a search engine to add content health warnings to its results. The proliferation of generative AI creates a flood of AI-written spam articles. A major search engine is forced to start offering content warnings on individual search results that it believes may have been AI-generated.”
In addition, on a call ahead of the predictions report’s release, Ben Wood, chief analyst at CCS Insight, told CNBC: “The bottom line is, right now, everyone’s talking generative AI, Google, Amazon, Qualcomm, Meta. We are big advocates for AI, we think that it’s going to have a huge impact on the economy, we think it’s going to have big impacts on society at large, we think it’s great for productivity.”
However, Wood also cautioned: “But the hype around generative AI in 2023 has just been so immense, that we think it’s overhyped, and there’s lots of obstacles that need to get through to bring it to market.”
AI Content Watermarking
CCS Insight also suggested that these developments would prompt an internet search company to introduce labels, akin to “watermarking,” to denote content generated by AI. This approach mirrors the strategy employed by social media platforms when they added information labels to posts about COVID-19 to combat the spread of misinformation about the virus.
The CCS Insight prediction doesn’t seem far-fetched. As you may recall, in the middle of this year major AI players proposed the idea of AI content watermarking. We covered the story in June after OpenAI, Google, and Meta voluntarily pledged at the White House to improve AI safety by implementing watermarking on AI-generated content. The move was part of a broader initiative to bolster accountability and trust in the use of AI-generated materials.
As part of the collective effort, the seven major AI tech companies committed to implementing a system for “watermarking” all types of AI-generated content, including text, images, audio, and video. The watermarking process embeds a technical marker in the content, allowing users to easily identify when AI technology has been used.
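The companies have not published a single shared scheme, and real text watermarks are typically statistical signals woven into a model’s word choices rather than literal hidden characters. Still, the basic idea of an embedded, machine-readable marker can be sketched with a deliberately simple toy example (hypothetical, and not any vendor’s actual method) that hides a short tag in invisible zero-width Unicode characters:

```python
# Toy illustration of content watermarking: append an invisible,
# machine-readable tag to a piece of text. NOT a real vendor scheme.
ZW0 = "\u200b"  # zero-width space      -> encodes bit 0
ZW1 = "\u200c"  # zero-width non-joiner -> encodes bit 1

def embed_watermark(text: str, tag: str) -> str:
    """Append the tag's bits as invisible zero-width characters."""
    bits = "".join(f"{ord(ch):08b}" for ch in tag)
    payload = "".join(ZW1 if b == "1" else ZW0 for b in bits)
    return text + payload

def extract_watermark(text: str) -> str:
    """Recover the hidden tag, or return '' if no marker is present."""
    bits = "".join("1" if ch == ZW1 else "0"
                   for ch in text if ch in (ZW0, ZW1))
    return "".join(chr(int(bits[i:i + 8], 2))
                   for i in range(0, len(bits), 8))

marked = embed_watermark("This article was machine-written.", "AI")
assert marked.startswith("This article")    # visible text unchanged
assert extract_watermark(marked) == "AI"    # hidden tag recoverable
```

The visible text is unchanged, but any tool that knows the convention can detect the marker, which is the property a search engine would need in order to label results. A scheme this naive is trivially stripped by copy-paste sanitizing, which is why production proposals favor statistical watermarks that survive light editing.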
Meanwhile, NewsGuard explained that the motivation behind this AI content generation was to fill online content farms, hoping to attract a trickle of advertising revenue through the occasional clicks of web users. Experts warned that the low costs associated with producing automated content incentivize the proliferation of these sites. According to NewsGuard, the sites “appear to be almost entirely written by artificial intelligence software.”
“The websites, which often fail to disclose ownership or control, produce a high volume of content related to a variety of topics, including politics, health, entertainment, finance, and technology. Some publish hundreds of articles a day. Some of the content advances false narratives. Nearly all of the content features bland language and repetitive phrases, hallmarks of artificial intelligence,” NewsGuard reported.
In one particular case, NewsGuard reported that it engaged in a series of email exchanges with someone who claimed to be the owner of Famadillo.com, a website known for posting a multitude of AI-generated product reviews credited to “admin.” The individual, who identified themselves as Maria Spanadoris, denied that the website extensively relied on AI for content generation.
Spanadoris, who declined a phone interview with NewsGuard, stated: “We did an expert [sic] to use AI to edit old articles that nobody read anymore [sic] just to see how it works.” She provided no further details.