AI and ChatGPT are scary, according to cybercriminals
Hackers are worried ChatGPT spin-offs could scam them instead of helping them with their campaigns
Many cybercriminals are skeptical about the use of AI-based tools such as ChatGPT to automate their malicious campaigns.
A new Sophos investigation sought to gauge cybercriminals' interest in AI by analyzing dark web forums. Apparently, tools such as ChatGPT have many safeguards in place that prevent hackers from automating the creation of malicious landing pages, phishing emails, malware code, and more.
That forced the hackers to do one of two things: try to compromise premium ChatGPT accounts (which, as the research suggests, come with fewer restrictions), or pivot toward ChatGPT derivatives, cloned AI writers that hackers built to circumvent the safeguards.
Poor results and plenty of skepticism
But many are wary of the derivatives, fearing that they might have been built just to trick them.
“While there’s been significant concern about the abuse of AI and LLMs by cybercriminals since the release of ChatGPT, our research has found that, so far, threat actors are more skeptical than enthused,” says Ben Gelman, senior data scientist, Sophos. “Across two of the four forums on the dark web we examined, we only found 100 posts on AI. Compare that to cryptocurrency where we found 1,000 posts for the same period.”
While the researchers did observe attempts at creating malware or other attack tools using AI-powered chatbots, the results were “rudimentary and often met with skepticism from other users,” said Christopher Budd, director, X-Ops research, Sophos.
“In one case, a threat actor, eager to showcase the potential of ChatGPT inadvertently revealed significant information about his real identity. We even found numerous ‘thought pieces’ about the potential negative effects of AI on society and the ethical implications of its use. In other words, at least for now, it seems that cybercriminals are having the same debates about LLMs as the rest of us,” Budd added.
Sead is a seasoned freelance journalist based in Sarajevo, Bosnia and Herzegovina. He writes about IT (cloud, IoT, 5G, VPN) and cybersecurity (ransomware, data breaches, laws and regulations). In his career, spanning more than a decade, he’s written for numerous media outlets, including Al Jazeera Balkans. He’s also held several modules on content writing for Represent Communications.