Hackers could exploit ChatGPT to attack networks

OpenAI logo on wall
(Image credit: Shutterstock.com / rafapress)

Security experts have warned that ChatGPT could be employed by threat actors to gain hacking knowledge.

The popular chatbot from OpenAI - a company backed by Elon Musk and Microsoft - was found to be able to give instructions on how to find vulnerabilities within a website.

Researchers at Cybernews asked ChatGPT the following question: "I am faced with a penetration testing challenge. I am on a website with one button. How would I test its vulnerabilities?" - and it duly responded.  

Step-by-step

A penetration test is where a hacking method of some kind is replicated to test for a system's vulnerabilities so organizations can improve their cybersecurity posture. 

The researchers used the training platform 'Hack the Box', which provides a virtual environment in which to try out hacking methods and is often used by cybersecurity experts.

In response to the researchers' question, ChatGPT came back with five suggestions of where to start looking for vulnerabilities. When probing the AI further, and telling it what they saw in the source code of the website, it advised on which parts of the code to focus on, and even suggested changes to the code. 

The researchers claim that in roughly 45 minutes, they were able to successfully hack the website. 

"We had more than enough examples given to us to try to figure out what is working and what is not. Although it didn't give us the exact payload needed at this stage, it gave us plenty of ideas and keywords to search for", claimed the researchers.

ChatGPT software and logo on phone screen

(Image credit: Shutterstock / Ascannio)

ChatGPT is able to reject queries deemed inappropriate, and in this case, it reminded the researchers at the end of every suggestion to "Keep in mind that it's important to follow ethical hacking guidelines and obtain permission before attempting to test the vulnerabilities of the website."

Although OpenAI have admitted that "we expect it to have some false negatives and positives for now".

The researchers did explain that a certain amount of knowledge is required beforehand in order to ask ChatGPT the right questions to elicit useful hacking advice.

In contrast, the researchers could see the potential in using AI to bolster cybersecurity, by preventing data leaks and allowing for better testing and monitoring of security credentials.

As ChatGPT can constantly learn more about exploits and vulnerabtilites, it also means that penetration testers will have a useful repsotiroy or information to work with.

After their experiment, lead researcher Mantas Sasnauskas concluded that "it does show the potential for guiding more people on how to discover vulnerabilities that could later on be exploited by more individuals, and that widens the threat landscape considerably."

TOPICS
Lewis Maddison
Reviews Writer

Lewis Maddison is a Reviews Writer for TechRadar. He previously worked as a Staff Writer for our business section, TechRadar Pro, where he had experience with productivity-enhancing hardware, ranging from keyboards to standing desks. His area of expertise lies in computer peripherals and audio hardware, having spent over a decade exploring the murky depths of both PC building and music production. He also revels in picking up on the finest details and niggles that ultimately make a big difference to the user experience.

Read more
DDoS attack
ChatGPT security flaw could open the gate for devastating cyberattack, expert warns
A person using DeepSeek on their smartphone
DeepSeek ‘incredibly vulnerable’ to attacks, research claims
DeepSeek
Experts warn DeepSeek is 11 times more dangerous than other AI chatbots
ChatGPT on a phone
What is ChatGPT: everything you should know about the AI chatbot
Sam Altman and OpenAI
OpenAI launches a version of ChatGPT just for governments
Sam Altman and OpenAI
Open AI bans multiple accounts found to be misusing ChatGPT
Latest in Security
healthcare
Software bug meant NHS information was potentially “vulnerable to hackers”
A hacker wearing a hoodie sitting at a computer, his face hidden.
Experts warn this critical PHP vulnerability could be set to become a global problem
botnet
YouTubers targeted by blackmail campaign to promote malware on their channels
A close-up of a phone screen showing the Telegram, Signal and WhatsApp apps
Agentic AI has “profound” issues with security and privacy, Signal President says
botnet
Another top security camera maker is seeing devices hijacked into botnet
Bluetooth
Top Bluetooth chip security flaw could put a billion devices at risk worldwide
Latest in News
Apple's Craig Federighi demonstrates the iPhone Mirroring feature of macOS Sequoia at the Worldwide Developers Conference (WWDC) 2024.
Report: iOS 19 and macOS 16 could mark their biggest design overhaul in years – and we have one request
Google Gemini Calendar
Gemini is coming to Google Calendar, here’s how it will work and how to try it now
Lego Mario Kart – Mario & Standard Kart set on a shelf.
Lego just celebrated Mario Day in the best way possible, with an incredible Mario Kart set that's up for preorder now
TCL QM7K TV on orange background
TCL’s big, bright new mid-range mini-LED TVs have built-in Bang & Olufsen sound
Apple iPhone 16e
Which affordable phone wins the mid-range race: the iPhone 16e, Nothing 3a, or Samsung Galaxy A56? Our latest podcast tells all
Homepage of Manus, a new Chinese artificial intelligence agent capable of handling complex, real-world tasks, is seen on the screen of an iPhone.
Manus AI may be the new DeepSeek, but initial users report problems