OpenAI says it shuts down multiple campaigns using its systems for cybercrime
Hackers are using ChatGPT to try to influence elections around the world
OpenAI, the company behind the famed ChatGPT generative artificial intelligence (AI) tool, says it has recently blocked multiple malicious campaigns abusing its services.
In a report, the company said it blocked more than 20 operations and deceptive networks around the world in 2024 so far.
These operations varied in nature, size, and targets. Sometimes the crooks used ChatGPT to debug malware, and sometimes to generate content: website articles, fake biographies for social media accounts, fake profile pictures, and more.
Disrupting the disruptors
While this sounds sinister and dangerous, OpenAI says the threat actors failed to gain any significant traction with these campaigns:
"Threat actors continue to evolve and experiment with our models, but we have not seen evidence of this leading to meaningful breakthroughs in their ability to create substantially new malware or build viral audiences," it said.
But 2024 is an election year - not just in the United States, but elsewhere around the world - and OpenAI has seen ChatGPT abused by threat actors trying to influence pre-election campaigns. It mentioned multiple groups, including one called “Zero Zeno.” This Israel-based commercial company “briefly” generated social media comments about elections in India - a campaign that was disrupted “less than 24 hours after it began.”
The company added that in June 2024, just before the elections for the European Parliament, it disrupted an operation dubbed “A2Z” that focused on Azerbaijan and its neighbors. Other notable mentions included comments generated about the European Parliament elections in France, and about politics in Italy, Poland, Germany, and the US.
Fortunately, none of these campaigns gained significant traction, and once OpenAI banned them, they stopped entirely:
“The majority of social media posts that we identified as being generated from our models received few or no likes, shares, or comments, although we identified some occasions when real people replied to its posts,” OpenAI concluded. “After we blocked its access to our models, this operation’s social media accounts that we had identified stopped posting throughout the election periods in the EU, UK and France.”
Sead is a seasoned freelance journalist based in Sarajevo, Bosnia and Herzegovina. He writes about IT (cloud, IoT, 5G, VPN) and cybersecurity (ransomware, data breaches, laws and regulations). In his career, spanning more than a decade, he’s written for numerous media outlets, including Al Jazeera Balkans. He’s also held several modules on content writing for Represent Communications.