AI worms: what your cybersecurity team needs to know

An eye-opening experiment introducing Morris II, a proof-of-concept worm, shows that corporate cybersecurity teams must stay vigilant against AI-powered versions of classic attack methods


The appearance of the first computer worms was a watershed in the history of cybersecurity. Unlike traditional viruses, they could replicate themselves, spreading their digital larvae across networks without human assistance. From the primordial worms of the internet’s formative years, such as Morris in 1988, to the ransomware cryptoworm WannaCry nearly three decades later, this sneaky genus of malware has left a trail of destruction in its wake.

Innovations in wormery often appear in tandem with new technologies. And so it has happened with the dawn of democratised AI. Named after its ground-breaking forebear, Morris II is a new worm that uses generative AI to clone itself.

A wormhole appears

A recent experiment by researchers from Intuit, Cornell Tech and the Technion-Israel Institute of Technology used Morris II to deliver so-called poison prompts that break the defences of GenAI-powered email assistants. Emails stuffed with these prompts caused the assistants to comply with the commands they contained.

The prompts compelled the assistants to send spam to other recipients and exfiltrate personal data from their targets. The prompts then copied themselves into messages sent to other AI assistant clients, which mounted similar attacks.
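To see why that replication step matters, consider a deliberately simplified sketch. The short Python simulation below is not the researchers’ code and makes no model, email or network calls; names such as PAYLOAD and toy_assistant_reply are invented for illustration. It simply shows how a marker string, once echoed by each “assistant” into its own reply, keeps travelling from inbox to inbox without any further human involvement, which is the property that makes a worm a worm.

```python
# Toy simulation of the replication mechanic described above.
# Nothing here touches a real model, mailbox or network: the "payload"
# is just a marker string, and each "assistant" is a plain function
# that, like a compromised GenAI assistant, carries the marker forward
# when it drafts a reply. All names are illustrative, not taken from
# the Morris II research.

PAYLOAD = "<<replicating-instruction>>"  # stands in for a poison prompt


def toy_assistant_reply(incoming: str) -> str:
    """Draft a reply; if the incoming mail carries the payload, echo it."""
    reply = "Thanks for your message."
    if PAYLOAD in incoming:
        # A compromised assistant has been coaxed into copying the
        # instruction into its own output, so the next inbox receives it too.
        reply += " " + PAYLOAD
    return reply


def simulate(hops: int) -> None:
    """Pass one seeded email along a chain of simulated assistants."""
    message = "Quarterly figures attached. " + PAYLOAD  # the seed email
    for hop in range(1, hops + 1):
        message = toy_assistant_reply(message)
        print(f"hop {hop}: payload present = {PAYLOAD in message}")


if __name__ == "__main__":
    simulate(4)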

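```

Run as written, the toy prints that the marker is still present after every hop: a crude stand-in for the self-propagation the researchers demonstrated against real assistants, and a reminder that the human attacker only has to act once.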
The researchers hope that their proof-of-concept worm will serve as a warning that might prevent the appearance of similar species in the wild. They have alerted the developers of the three GenAI models they successfully targeted, who are now working to patch the flaws exposed by Morris II.

This experiment highlights the potential of AI systems to automate attacks without human input. But one of the researchers, Dr Ben Nassi, suggests it’s too soon to accurately estimate the threat posed by GenAI-powered attack methods. 

“I believe we’ll find out in a few years, based on how the industry reacts,” he says.

AI as an attack accelerator 

Criminals are already wielding other AI-aided weapons. In February, for instance, an employee at the Hong Kong branch of an unnamed multinational approved a fraudulent £20m payment after following instructions issued by deepfake imitations of their managers on a video call.

Fraudsters are also using GenAI to supercharge their social engineering attempts, using tools such as ChatGPT to create more bespoke, targeted and grammatically correct phishing emails. 

Max Heinemeyer, chief product officer at cybersecurity firm Darktrace, believes that the use of AI to develop existing attack methods and scale them up will continue, but he adds that GenAI is still too erratic to be relied upon by criminals. 

Picture a scenario where hackers gain access to an email server and hijack email threads by posing as a recipient or a sender. They then attach a convincingly disguised PDF file containing malware. 

Hackers are actually doing this sort of thing already, but now imagine how much more effective they could be if, using a large language model (LLM), they were to automate bespoke, convincing responses in each email thread. 

“These would be indistinguishable from normal communications,” Heinemeyer says. 

Moreover, we wouldn’t have to wait for the emergence of AI worms for such attacks to start happening. 

LLMs in the digital underworld

Although most cybercriminal gangs are still focused on ransomware-based extortion, because it remains reliable and profitable, some are investigating the potential of LLM-powered attacks. 

Etay Maor is chief security strategist at infosec company Cato Networks, where he also runs the firm’s threat investigation lab. Its staff often lurk in the digital-underworld hangouts that sit at the cutting edge of cybercrime.

“We’ve seen that cybercrime groups are looking to recruit data scientists and specialists in machine learning,” Maor reports. “In private channels, they’ve mentioned creating their own malicious LLMs.” 

Cybercriminals are prioritising the lower-hanging fruit for now, but they’re definitely looking into scaling up

His team members have read discussions on Russian hacking forums about which LLMs are best for phishing and which are more suited for coding. Most of those posting on these forums are about four years away from having models that would be of much use to cybercriminals. For now, they’re largely using them to write phishing emails in languages they don’t know.

While Maor hasn’t yet seen self-governing, self-replicating malware that criminals can just “fire and forget”, he warns that they “are trying to get there. They’re prioritising the lower-hanging fruit for now, but they’re definitely looking into scaling up.”

Worms: when tech catches up to theory 

While lecturing in the late 1940s, the pioneering mathematician John von Neumann posed a thought experiment about self-replicating technology. What would it take, he wondered, to create a machine that could reproduce and evolve like a living organism?

Published posthumously in 1966, von Neumann’s Theory of Self-Reproducing Automata proved hugely influential in the development of complex systems, but it would still take more than two decades for the technology to start catching up with the theory, with the emergence of the first computer worms. 

It would also require a lot of R&D work to create an aggressive, autonomous AI worm that works in a repeatable way. If cybercriminals are content with their current hacking armoury, they probably lack the incentive to dedicate the necessary time, effort and resources. Furthermore, Heinemeyer notes, anyone letting loose such a beast would be targeted by every law enforcement agency in the world, which is what happened when the WannaCry and NotPetya cryptoworms were unleashed. 

Malware of this type would therefore be more likely to originate from state-sponsored groups waging international cyber warfare. 

“I’m sure that nation-state actors could cook AI worms up in a lab behind closed doors. They might have done so already – I think all the ingredients are in place,” he says. “But, if you pull the trigger on this kind of weapon, you can do it only once. Once it’s out in the wild, people will immunise themselves against it by creating counter technologies.”

Why FUD shouldn’t shape your response

Early proofs of concept such as Morris II, indicating the devastating potential of more advanced weapons to come, highlight the importance of looking ahead. Intelligent malicious worms would seem a logical next step, especially given the increasing sophistication and availability of AI tooling and the growing professionalisation of the cybercriminal underworld.

Businesses must therefore keep track of the emergence of new attack models – and, perhaps even more crucially, adopt a more proactive approach. 

Heinemeyer argues that corporate cybersecurity teams should prioritise reducing the attack surface, returning to the “people, processes and technology” framework to prepare for the unexpected. 

“I think it would do us good as an industry to not just focus on that Whac-A-Mole game and start shifting more activity towards anticipating attacks before they happen,” he says.

Dr Jason Nurse, reader in cybersecurity at the University of Kent, suggests that organisations should proceed cautiously with their own AI implementations. 

“AI has immense potential but, like any other technology, it needs the appropriate review and assessment as it relates to cyber risk,” he says, recommending the US National Security Agency’s recent guidance on secure AI (see panel below) as “a good place to start. It centres on thinking about the deployment environment, continuously protecting the AI system and securing AI operations and maintenance.”

Our descent into a William Gibson-esque dystopia where autonomous worms stalk their victims in cyberspace is unlikely, but such AI-powered malware could surface sooner than you’d think. A friendly common or garden worm will tend to bury its head in the sand, but that doesn’t mean that we should.

The US National Security Agency’s guidance on secure AI usage
