Microsoft and OpenAI say hackers are using ChatGPT to improve cyberattacks and hone their craft
Microsoft and OpenAI disclosed today that state-sponsored hackers are using advanced language models such as ChatGPT to enhance their cyberattacks. The joint research revealed instances of Russian, North Korean, Iranian, and Chinese-backed groups using these tools for target research, script refinement, and the development of social engineering techniques.
This groundbreaking research, detailed on both companies’ websites, exposes how hackers affiliated with foreign governments are incorporating generative artificial intelligence into their attacks. Microsoft specifically highlighted the use of OpenAI’s technology by five hacking groups associated with China, Russia, North Korea, and Iran.
Contrary to concerns in the tech industry about AI generating exotic attacks, hackers are employing it for more mundane tasks like drafting emails, translating documents, and debugging code, according to the companies.
OpenAI, for its part, described disrupting the five state-affiliated actors in a statement: “In partnership with Microsoft Threat Intelligence, we have disrupted five state-affiliated actors that sought to use AI services in support of malicious cyber activities. We also outline our approach to detect and disrupt such actors in order to promote information sharing and transparency regarding their activities.”
In a blog post, Microsoft said: “Cybercrime groups, nation-state threat actors, and other adversaries are exploring and testing different AI technologies as they emerge, in an attempt to understand potential value to their operations and the security controls they may need to circumvent.” Tom Burt, who oversees Microsoft’s efforts to combat major cyberattacks, added, “They’re just using it like everyone else is, to try to be more productive in what they’re doing.”
The Strontium group, linked to Russian military intelligence, was found using large language models (LLMs) to research satellite communication protocols, radar imaging technologies, and related technical parameters. The group also used LLMs for basic scripting tasks such as file manipulation and data selection.
“Is it providing something new and novel that is accelerating an adversary, beyond what a better search engine might? I haven’t seen any evidence of that,” said Bob Rotsted, who heads cybersecurity threat intelligence for OpenAI.
Microsoft, having invested $13 billion in OpenAI, maintains a close partnership with the startup. The two shared threat intelligence to detail how hacking groups tied to China, Russia, North Korea, and Iran used OpenAI’s technology. Neither company disclosed which specific OpenAI products were involved, but OpenAI said it had restricted the groups’ access once they were discovered.
Since the release of ChatGPT in November 2022, concerns have been raised about adversaries weaponizing powerful AI tools, but Rotsted’s assessment suggests the reality so far is more understated. Even so, despite OpenAI’s efforts to limit account sign-ups, sophisticated actors can still potentially evade detection through various techniques.