Not even fairy tales are safe - researchers weaponise bedtime stories to jailbreak AI chatbots and create malware
Researchers bypassed guardrails with the new ‘Immersive World’ technique

- Security researchers have developed a new technique to jailbreak AI chatbots
- The technique required no prior malware coding knowledge
- This involved creating a fake scenario to convince the model to craft an attack
Despite having no previous experience in malware coding, threat intelligence researchers at Cato CTRL have warned they were able to jailbreak multiple LLMs, including OpenAI's GPT-4o, DeepSeek-R1, DeepSeek-V3, and Microsoft Copilot, using a rather fantastical technique.
The team developed ‘Immersive World’, which uses “narrative engineering to bypass LLM security controls” by creating a “detailed fictional world” that normalizes restricted operations, and used it to develop a “fully effective” Chrome infostealer. Chrome is the most popular browser in the world, with over 3 billion users, underlining the scale of the risk this attack poses.
Infostealer malware is on the rise, rapidly becoming one of the most dangerous tools in a cybercriminal's arsenal - and this attack shows the barrier to entry has been significantly lowered, as attackers now need no prior experience in writing malicious code.
AI for attackers
LLMs have “fundamentally altered the cybersecurity landscape”, the report claims, and research has shown that AI-powered cyber threats are becoming a much more serious concern for security teams and businesses, allowing criminals to craft more sophisticated attacks with less experience and at greater frequency.
Chatbots have many guardrails and safety policies, but because AI models are designed to be as helpful and compliant as possible, researchers have been able to jailbreak them with relative ease - including persuading AI agents to write and send phishing attacks.
“We believe the rise of the zero-knowledge threat actor poses high risk to organizations because the barrier to creating malware is now substantially lowered with GenAI tools,” said Vitaly Simonovich, threat intelligence researcher at Cato Networks.
“Infostealers play a significant role in credential theft by enabling threat actors to breach enterprises. Our new LLM jailbreak technique, which we’ve uncovered and called Immersive World, showcases the dangerous potential of creating an infostealer with ease.”