CNET secretly used AI to write entire articles for months; “We didn’t do it in secret, we did it quietly”
“We didn’t do it in secret, we did it quietly.” That is what CNET editor-in-chief Connie Guglielmo told her team following a public outcry after Futurism reported that the popular news outlet had been quietly using AI to publish entire articles without making the AI authorship immediately clear to its audience. So the next time you read a story on your favorite blog, you might want to double-check whether the piece was actually written by a real human.
The controversy started early this week when Futurism published a piece titled “CNET is quietly publishing entire articles generated by AI.” The story detailed how CNET had been quietly using AI to write a new wave of financial explainer articles, apparently starting around November of last year. According to Futurism, CNET’s AI-written articles were first spotted by online marketer Gael Breton in a Twitter post on Wednesday.
“Looks like @CNET (DR 92 tech site) just did their coming out about using AI content for SEO articles,” Breton tweeted.
The articles were published under the unassuming moniker of “CNET Money Staff” and covered topics like “Should You Break an Early CD for a Better Rate?” and “What Is Zelle and How Does It Work?” Unless they’ve been taken down, you may still be able to find these articles on Google or the Wayback Machine.
The bylines of the articles did not tell the whole story about their authorship, and the average reader visiting the CNET website would likely have no idea that what they were reading was AI-generated. It’s only when you click on “CNET Money Staff” that the actual authorship is revealed:
“This article was generated using automation technology,” reads a dropdown description, “and thoroughly edited and fact-checked by an editor on our editorial staff.”
According to Futurism, CNET has put out about 73 AI-generated articles since the program began more than two months ago. It didn’t take long for the Futurism story to spread on social media, and many observers, including journalists, were not happy.
“This is just the beginning,” tweeted Washington Post reporter Nathan Grayson in response to Futurism’s story, “and aggregation plus explanation performed by AI will doubtless result in lower-quality work and fewer jobs. A loss for everyone.”
“I think about shit like this a lot as somebody laid off from a copyediting job because some people think AI tools can do the work for you,” wrote another writer, adding, “I can’t believe I’m saying this but I hope Google steps in.” Kotaku writer Luke Plunkett also called the CNET program “awful.”
“They only care about cranking out content and don’t care about quality/editing as long as the content ranks in [search engine optimization],” one former CNET employee told Futurism. “They use AI to rewrite the intros every two weeks or so because Google likes updated content.”
Following the public outcry over its report, Futurism said it discovered that the AI was making many extremely basic errors, despite CNET’s promise that all the AI-generated articles were being diligently fact-checked by a human editor. The publication added that “CNET responded by issuing an extensive correction and slapping a warning label on the rest of the content — as well, oddly, as adding a disclaimer to many human-written articles about AI topics.”
Meanwhile, CNET now says it’s pausing its use of AI to generate content. According to The Verge, CNET leadership told staff in a one-hour call on Friday that it was pausing all AI-generated content for now. Top executives at Red Ventures, the parent company of CNET and other websites, also shared more details about the company’s AI tool.
“We didn’t do it in secret,” CNET editor-in-chief Connie Guglielmo told the group. “We did it quietly.”
“Some writers — I won’t call them reporters — have conflated these two things and had caused confusion and have somehow said that using a tool to insert numbers into interest rate or stock price stories is somehow part of some, I don’t know, devious enterprise,” Guglielmo said. “I’m sure that’s news to The Wall Street Journal, Bloomberg, The New York Times, Forbes, and everyone else who does that and has been doing it for a very, very long time.”
The Verge also corroborated the Futurism report, with a source telling the publication that “CNET has indeed been using AI tools for far longer than it has publicly admitted.” For example, the source said that one such AI system has already been in use for eighteen months, while the oldest marked AI content on CNET is only about two months old.
The story didn’t stop there. The Verge also shared the account of a former CNET senior editor who, according to the publication, sent a farewell email to hundreds of colleagues on her last day at CNET. In the email, which was also obtained by Futurism, the editor criticized CNET’s use of AI and made yet another claim about the outlet’s alleged use of “AI in materials that weren’t marked as being bot-written.”
Those materials, she said, came in the form of email newsletters, and according to Futurism, like the articles it reviewed, these newsletters contained serious factual errors.
“The advice was generated by scraping from CNET’s previously published cybersecurity and privacy coverage,” she wrote, “but after being rephrased by RV’s proprietary AI tool, that language was used to generate factually inaccurate information about cybersecurity and privacy tools, along with advice that would cause direct harm to readers.”
The email ended with some advice: “do not take as fact any editor’s rebuttal that all quality journalism outlets participate in unethical practices.”
“Ethical standards (and robust, in-house debates on them) are alive and well in the best journalism outlets in the world,” she continued. “I’ve had the privilege of seeing this first-hand at many of them, including CNET.”
Meanwhile, the use of AI to write content is nothing new. Other big news outlets, like the AP and the LA Times, also use AI to write some of their content. However, unlike CNET, they explicitly label the author as a bot or “bookend an article with a clear declaration of the AI authorship.”
The Associated Press, for instance, has been using AI since 2015 to automatically write thousands of its earnings reports, and has even proudly proclaimed itself “one of the first news organizations to leverage artificial intelligence.”