Researchers create AI "worms" able to spread between systems — stealing private data as they go
Generative AI can be hijacked by self-replicating worms
A team of researchers has created a self-replicating computer worm that wriggles through the web to target apps powered by Gemini Pro, ChatGPT 4.0, and LLaVA.
The researchers developed the worm to demonstrate the risks and vulnerabilities of AI-enabled applications, particularly how the links between generative-AI systems can help spread malware.
In their report, the researchers (Stav Cohen of the Israel Institute of Technology, Ben Nassi of Cornell Tech, and Ron Bitton of Intuit) named the worm 'Morris II' after the original Morris worm that wreaked havoc on the internet in 1988.
Zero-click worms unleashed on AI
The worm was developed with three key goals in mind. The first was self-replication: by using adversarial self-replicating prompts that trigger an AI application to output the original prompt itself, the AI automatically replicates the worm each time it processes the prompt.
The second goal was to deliver a payload or perform a malicious activity. In this case, the worm was programmed to carry out one of several actions, from stealing sensitive information to crafting insulting and crass emails designed to sow toxicity and distribute propaganda.
Finally, the worm needed to be able to jump between hosts and AI applications in order to spread itself through the AI ecosystem. The first method targets AI-assisted email applications that use retrieval-augmented generation (RAG), by sending a poisoned email that is then stored in the target's database. When the recipient replies to the email, the AI assistant automatically generates a response using the poisoned data, propagating the self-replicating prompt through the ecosystem.
The second method relies on an input that, when processed by the generative-AI model, produces an output instructing the AI to disseminate the worm to new hosts. Each newly infected host then immediately propagates the worm onward.
In testing performed by the researchers, the worm was able to steal social security numbers and credit card details.
The researchers sent their paper to Google and OpenAI to raise awareness about the potential dangers of these worms, and while Google did not comment, an OpenAI spokesperson told Wired that, “They appear to have found a way to exploit prompt-injection type vulnerabilities by relying on user input that hasn’t been checked or filtered.”
Worms such as these highlight the need for greater research, testing and regulation when it comes to rolling out generative-AI applications.
Benedict has been writing about security issues for over 7 years, first focusing on geopolitics and international relations while at the University of Buckingham. During this time he studied BA Politics with Journalism, for which he received a second-class honours (upper division), then continuing his studies at a postgraduate level, achieving a distinction in MA Security, Intelligence and Diplomacy. Upon joining TechRadar Pro as a Staff Writer, Benedict transitioned his focus towards cybersecurity, exploring state-sponsored threat actors, malware, social engineering, and national security. Benedict is also an expert on B2B security products, including firewalls, antivirus, endpoint security, and password management.