How GenAI could give threat actors a disarming advantage


Humans are fundamentally social creatures, and language lies at the heart of how we socialize and communicate. It is the basis of understanding and therefore of coexistence. Whether we realize it or not, most of us speak two "languages": the standard language of officialdom and business, and the dialect spoken in the region where we grew up. Hearing or reading the latter can disarm us, making us feel closer to the person writing or speaking it.

The challenge with generative AI (GenAI) is that it gives threat actors with little grasp of such linguistic subtleties the ability to get inside our heads. It could further bolster their efforts to socially engineer victims, and conduct convincing fraud and disinformation campaigns.

Richard Werner

European Business Consultant at Trend Micro.

The language of cybercrime

Reading the dialect of our birthplace or childhood can have a strange psychological effect on many of us: it creates a sense of empathy with the person writing it. Even when we know the text has been artificially generated by GenAI, it can have a similar impact.

However, there are unfortunately also opportunities here for threat actors. Take phishing. It still ranks as one of the top threat vectors for cyber-attacks, representing nearly a quarter of all ransomware compromises in Q4 2023. Fundamentally, it relies on social engineering: the ability of the fraudster to manipulate their victim into doing their bidding. They might do so by using official logos and sender domains. But language also plays a key role.

This is where GenAI could give opportunistic threat actors a leg-up. Writing phishing missives in a dialect the recipient instantly recognizes could raise trust levels and trick the victim into believing what they are being told. This is unlikely to work in an enterprise setting, where standard business language is the norm, but it could be used in scams targeting consumers. GenAI is already predicted to supercharge phishing by generating grammatically perfect content in multiple languages. Why not multiple dialects too?

The same logic could see scammers use GenAI to gain the trust of their victims in romance and other confidence fraud types. The use of dialects could play a critical role in overcoming our increasingly skeptical attitude to people we meet online. It’s a cybercrime that already cost victims $734m in 2022, according to the FBI. But the bad guys are always looking for innovative ways to increase their haul.

Building bombs and faking news

Another threat looms large this year: misinformation/disinformation. Together, they were recently ranked by the World Economic Forum (WEF) as the number one global risk of the next two years. With around a quarter of the world’s population heading to the polls in 2024, there are growing concerns that nefarious actors will try to swing results towards their favored candidates, or undermine confidence in the entire democratic process. And while more seasoned internet users are becoming increasingly dubious about the news they read online, dialect could once again be a trump card for threat actors.

First, it is not widely used. That means we may pay more attention to content written in a specific dialect. We might read a social media post written in dialect, even if just for the joy of being able to decipher what it means. If it’s our own dialect, we might feel instantly closer to the person – or machine – that posted it. Politicians and cybersecurity experts may warn us about election interference from foreigners. But what could be less “foreign” than an account posting in a local or regional dialect close to home?

Finally, consider how dialects may allow threat actors to "jailbreak" GenAI systems. Researchers at Brown University in the US used rarely spoken languages such as Gaelic to do exactly this to ChatGPT. The OpenAI chatbot has safety guardrails designed into it, such as refusing to give a user instructions on how to build a bomb. However, when the researchers posed unethical requests in rare languages, they were able to access the forbidden information. According to media reports, OpenAI is aware of the risk and is already taking steps to mitigate it. But we must remember that although GenAI seems "intelligent", it can sometimes have the naivety of a four-year-old.

Time to educate

So what's the solution? Certainly, AI developers must build better protections against abuse of GenAI's dialect-generating capabilities. But users may also need to improve their understanding of potential threats and ramp up their skepticism of what they read and watch online. Companies should include dialect in their anti-phishing and anti-fraud training programs, and governments and industry bodies may want to run wider public awareness campaigns. As GenAI is increasingly used for malicious purposes, imperfect language may in time even become a marker of credibility in written communication, precisely because flawless prose is now so easy to generate.

That isn’t where we are right now. But as cybersecurity professionals, we have to acknowledge that it could be soon.


This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro


