AI agents can be hijacked to write and send phishing attacks
Research shows AI agents can easily be coaxed into carrying out attacks

- AI agents could be used to build and send phishing attacks
- Symantec researchers were able to prompt Operator into sending a malicious email
- These tools are only likely to get more powerful
Cybercriminals have been using AI to help them in cyberattacks for some time, but the introduction of "Agents", such as OpenAI’s Operator, now means criminals have a lot less work to do themselves, experts have claimed.
Previously, AI tools had been seen helping attackers launch sophisticated threats faster and more frequently than would otherwise have been possible, and they lowered the bar for criminals, allowing even relatively low-skilled cybercriminals to build successful attacks.
Now, researchers from Symantec have been able to use Operator to identify a target, find their email address, create a PowerShell script aimed at gathering system information, and send it to the victim using a “convincing lure.”
Agents leveraged
In a demonstration, researchers explained their first attempts failed, with Operator refusing to proceed “as it involves sending unsolicited emails and potentially sensitive information. This could violate privacy and security policies.”
With a few tweaks to the prompt, though, the agent created an attack impersonating an IT support worker and sent out the malicious email. This presents a serious risk for security teams, with research consistently showing that human error is the primary cause of over two-thirds of data breaches.
It “may not be long” before agents become far more powerful, the report speculates: “It is easy to imagine a scenario where an attacker could simply instruct one to ‘breach Acme Corp’ and the agent will determine the optimal steps before carrying them out.”
“This could include writing and compiling executables, setting up command-and-control infrastructure, and maintaining active, multi-day persistence on the targeted network. Such functionality would massively reduce the barriers to entry for attackers.”
AI agents are designed to be like virtual assistants, helping users book appointments, schedule meetings, and write emails. OpenAI takes "these kinds of reports seriously," a spokesperson told TechRadar Pro.
"Our usage policies prohibit using OpenAI services or products to facilitate or engage in illicit activity, including attempts to defraud, scam or intentionally deceive or mislead others, and we have proactive safety mitigations and strict rate limits in place to mitigate harmful usage. Operator is still a research preview and we are constantly refining and improving."