Researchers create AI "worms" able to spread between systems — stealing private data as they go

Malware worm
(Image credit: Shutterstock)

A team of researchers have created a self-replicating computer worm that wriggles through the web to target Gemini Pro, ChatGPT 4.0, and LLaVA AI powered apps.

The researchers developed the worm to demonstrate the risks and vulnerabilities of AI enabled applications, particularly how the links between generative-AI systems can help to spread malware.

In their report, the researchers, Stav Cohen from the Israel Institute of Technology, Ben Nassi of Cornell Tech, and Ron Bitton from Intuit, came up with the name ‘Morris II’ using inspiration from the original worm that wreaked havoc on the internet in 1988.

wZero-click Worms unleashed on AI

The worm was developed with three key goals in mind. The first is to ensure that the worm could recreate itself. By using adversarial self-replicating prompts that trigger the AI applications to output the original prompt itself, the AI will automatically replicate the worm each time it uses the prompt.

The second goal was to deliver a payload or perform a malicious activity, in this case the worm was programmed to do one of a number of actions. From stealing sensitive information, to crafting emails that were insulting and crass in order to sow toxicity and distribute propaganda.

Finally, the worm needed to be able to jump between hosts and AI applications in order to spread itself through the AI ecosystem. The first method targets AI-assisted email applications using retrieval-augmented generation (RAG) by sending a poisoned email that is then stored within the target's database. When the recipient tries to reply to the email, the AI assistant automatically generates a reply using the poisoned data and then propagates the self-replicating prompt through the ecosystem.

The second method requires an input to be executed by the generative-AI model, which then creates an output that prompts the AI to disseminate the worm to new hosts. When the next host is infected, it then immediately propagates the worm beyond itself.

In testing performed by the researchers, the worm was able to steal social security numbers and credit card details. 

The researchers sent their paper to Google and OpenAI to raise awareness about the potential dangers of these worms, and while Google did not comment, an OpenAI spokesperson told Wired that, “They appear to have found a way to exploit prompt-injection type vulnerabilities by relying on user input that hasn’t been checked or filtered.”

Worms such as these highlight the need for greater research, testing and regulation when it comes to rolling out generative-AI applications.

More from TechRadar Pro

TOPICS
Benedict Collins
Staff Writer (Security)

Benedict has been writing about security issues for over 7 years, first focusing on geopolitics and international relations while at the University of Buckingham. During this time he studied BA Politics with Journalism, for which he received a second-class honours (upper division), then continuing his studies at a postgraduate level, achieving a distinction in MA Security, Intelligence and Diplomacy. Upon joining TechRadar Pro as a Staff Writer, Benedict transitioned his focus towards cybersecurity, exploring state-sponsored threat actors, malware, social engineering, and national security. Benedict is also an expert on B2B security products, including firewalls, antivirus, endpoint security, and password management.

Read more
AI tools.
Not even fairy tales are safe - researchers weaponise bedtime stories to jailbreak AI chatbots and create malware
Ai tech, businessman show virtual graphic Global Internet connect Chatgpt Chat with AI, Artificial Intelligence.
AI agents can be hijacked to write and send phishing attacks
A stylized depiction of a padlocked WiFi symbol sitting in the centre of an interlocking vault.
Sounding the alarm on AI-powered cybersecurity threats in 2025
DeepSeek
Experts warn DeepSeek is 11 times more dangerous than other AI chatbots
DDoS attack
ChatGPT security flaw could open the gate for devastating cyberattack, expert warns
A hand reaching out to touch a futuristic rendering of an AI processor.
Google says Gemini is being misused to launch major cyberattacks
Latest in Pro
Isometric demonstrating multi-factor authentication using a mobile device.
NCSC gets influencers to sing the praises of 2FA
Sam Altman and OpenAI
OpenAI is upping its bug bounty rewards as security worries rise
Context Windows
Why are AI context windows important?
BERT
What is BERT, and why should we care?
A person holding out their hand with a digital AI symbol.
AI is booming — but are businesses seeing real impact?
A stylized depiction of a padlocked WiFi symbol sitting in the centre of an interlocking vault.
Dangerous new CoffeeLoader malware executes on your GPU to get past security tools
Latest in News
cheap Nintendo Switch game deals sales
Nintendo didn't anticipate that Mario Kart 8 Deluxe was 'going to be the juggernaut' for the Nintendo Switch when it was ported to the console, according to former employees
Three angles of the Apple MacBook Air 15-inch M4 laptop above a desk
Apple MacBook Air 15-inch (M4) review roundup – should you buy Apple's new lightweight laptop?
Witchbrook
Witchbrook, the life-sim I've been waiting years for, finally has a release window and it's sooner than you think
Amazon Echo Smart Speaker
Amazon is experimenting with renaming Echo speakers to Alexa speakers, and it's about time
Shigeru Miyamoto presents Nintendo Today app
Nintendo Today smartphone app is out now on iOS and Android devices – and here's what it does
iPhone 13 mini
The iPhone mini won't be returning, according to rumors – and you think that's a mistake