State-sponsored hackers are having a blast with LLMs — Microsoft and OpenAI warn new tactics could cause more damage than ever before


Hackers are increasingly turning to LLMs and AI tools to refine the tactics, techniques and procedures (TTPs) they use in their campaigns, new reports have warned.

A new research paper released by Microsoft in collaboration with OpenAI has revealed how threat actors are using the latest technical innovations to keep defenders on their toes.

Microsoft and OpenAI have detected and disrupted attacks from Russian, North Korean, Iranian and Chinese state-backed threat actors who have been using LLMs to refine their hacking playbooks.

AI refines hackers' edge

State-backed hackers have been abusing LLMs' built-in language capabilities to better target foreign adversaries and to appear more legitimate when conducting social engineering campaigns. This language processing helps them establish seemingly legitimate professional relationships with their victims.

The report also notes that hackers have been observed using LLMs for intelligence gathering, collecting information on the industries their victims work in and the locations where they live, as well as on their personal relationships.

In one example, Microsoft and OpenAI observed Forest Blizzard, a group linked to Russian GRU Unit 26165, using LLMs to gather highly specific information on how satellites operate and communicate. The group has also been observed using AI to refine its scripting abilities, most likely to automate or increase the efficiency of its technical operations.

North Korea-linked group Emerald Sleet has been observed using LLMs to learn how to exploit publicly reported critical software vulnerabilities, to generate content for spearphishing campaigns, and to identify organizations that gather intelligence on North Korea's nuclear and defense capabilities.

In each of these cases, Microsoft and OpenAI identified and disabled the accounts used by the threat actors, with Microsoft stating, “AI technologies will continue to evolve and be studied by various threat actors.

“Microsoft will continue to track threat actors and malicious activity misusing LLMs, and work with OpenAI and other partners to share intelligence, improve protections for customers and aid the broader security community.”

Benedict Collins
Staff Writer (Security)

Benedict has been writing about security issues for over seven years, first focusing on geopolitics and international relations while at the University of Buckingham. During this time he studied BA Politics with Journalism, for which he received second-class honours (upper division), before continuing his studies at postgraduate level and achieving a distinction in MA Security, Intelligence and Diplomacy. Upon joining TechRadar Pro as a Staff Writer, Benedict shifted his focus to cybersecurity, covering state-sponsored threat actors, malware, social engineering, and national security. Benedict is also an expert on B2B security products, including firewalls, antivirus, endpoint security, and password management.