Hackers target users with ChatGPT malware: ChatGPT scams are now the new crypto scams, Meta says
ChatGPT has soared in popularity in recent months. Within two months of its launch, ChatGPT reached 100 million monthly active users in January, making it the fastest-growing consumer application in history.
But the success of the OpenAI chatbot has also attracted the attention of hackers. As the popularity of ChatGPT and other generative AI tools increases, so does hackers' interest in exploiting that popularity to target people.
In a new report, Facebook parent company Meta said it has seen a surge in malware disguised as ChatGPT and similar AI software. The company said it discovered malware distributors exploiting the public's interest in ChatGPT to entice users into downloading malicious applications and browser extensions. Meta compared the phenomenon to cryptocurrency scams.
In a blog post, Meta said, “Our security researchers track and take action against hundreds of threat actors around the world. This year alone, we’ve detected and disrupted nearly ten new malware strains, including those posing as ChatGPT browser extensions and productivity tools, the latest iterations of malware known in the security community as Ducktail, and previously unreported malware families including one we call NodeStealer.”
According to the report, since March the company has identified approximately 10 malware families and over 1,000 malicious links marketed as tools featuring the widely used AI-powered chatbot. Meta also revealed that some of the malware delivered functional ChatGPT capabilities alongside abusive files.
“As part of our most recent work to protect people and businesses from malicious targeting using ChatGPT as a lure, since March 2023 we’ve blocked and shared with our industry peers more than 1,000 malicious links from being shared across our technologies and reported a number of browser extensions and mobile apps to our peer companies. With each threat investigation, we’ve continued to strengthen how we detect and block these types of malware threats at scale,” Meta wrote.
During a press briefing on the report, Meta Chief Information Security Officer Guy Rosen said that for malicious actors, "ChatGPT is the new crypto." Rosen and other Meta executives said the company is preparing its defenses for potential abuses associated with generative AI technologies such as ChatGPT, which can quickly produce human-like writing, music, and art.
When asked whether generative AI was already being employed in information operations, the executives said it was still in the early stages, but Rosen anticipated that "bad actors" would use the technologies to "try to speed up and perhaps scale up" their activities.
In one recent campaign, for example, Meta said it foiled hackers who leveraged people's interest in OpenAI's ChatGPT to lure individuals into installing malware. After detection by Meta's security teams and industry counterparts, however, the perpetrators swiftly pivoted to other lures, posing as Google Bard, TikTok marketing tools, pirated software and movies, and Windows utilities.