OpenAI bans multiple accounts found to be misusing ChatGPT
The ‘Peer Review’ campaign has been disrupted

- OpenAI has banned accounts using ChatGPT for malicious purposes
- Misinformation and surveillance campaigns were uncovered
- Threat actors are increasingly using AI for harm
OpenAI has confirmed it recently identified a set of accounts involved in malicious campaigns, and banned users responsible.
The banned accounts involved in the ‘Peer Review’ and ‘Sponsored Discontent’ campaigns likely originate from China, OpenAI said, and “appear to have used, or attempted to use, models built by OpenAI and another U.S. AI lab in connection with an apparent surveillance operation and to generate anti-American, Spanish-language articles”.
AI has facilitated a rise in disinformation, giving threat actors a useful tool for disrupting elections and undermining democracy in unstable or politically divided nations - and state-sponsored campaigns have used the technology to their advantage.
Surveillance and disinformation
The ‘Peer Review’ campaign used ChatGPT to generate “detailed descriptions, consistent with sales pitches, of a social media listening tool that they claimed to have used to feed real-time reports about protests in the West to the Chinese security services”, OpenAI confirmed.
As part of this surveillance campaign, the threat actors used the model to “edit and debug code and generate promotional materials” for suspected AI-powered social media listening tools - although OpenAI was unable to identify posts on social media following the campaign.
ChatGPT accounts participating in the ‘Sponsored Discontent’ campaign were used to generate comments in English and news articles in Spanish, consistent with ‘spamouflage’ behavior, primarily using anti-American rhetoric, probably to spark discontent in Latin America, namely Peru, Mexico, and Ecuador.
This isn’t the first time Chinese state-sponsored actors have been identified using ‘spamouflage’ tactics to spread disinformation. In late 2024, a Chinese influence campaign was discovered targeting US voters with thousands of AI-generated images and videos, mostly low-quality and containing false information.
Ellen has been writing for almost four years, with a focus on post-COVID policy, while studying for a BA in Politics and International Relations at Cardiff University, followed by an MA in Political Communication. Before joining TechRadar Pro as a Junior Writer, she worked for Future Publishing’s MVC content team, working with merchants and retailers to upload content.