One of Google's "big AI" projects uncovered some serious security threats seemingly all on its own

  • Project Zero and DeepMind "big AI" uncovers security vulnerabilities
  • Big Sleep finds a SQLite stack buffer underflow flaw before official release
  • AI could revolutionize software development by discovering critical flaws

A collaborative “big AI” project between Google Project Zero and Google DeepMind has discovered a critical vulnerability in a piece of software before public release.

The Big Sleep AI agent was set to work analyzing the SQLite open source database engine, where it discovered a stack buffer underflow flaw which was subsequently patched the same day.

This discovery potentially marks the first time an AI agent has uncovered a memory-safety flaw in a widely used real-world application.

Fuzzed software out-fuzzed by AI

Big Sleep found the stack buffer underflow vulnerability in SQLite, software that had already been ‘fuzzed’ many times over.

Fuzzing is an automated software testing method that can discover potential flaws or vulnerabilities such as memory safety issues that are typically exploited by attackers. However, it is not a foolproof method of vulnerability hunting, and a fuzzed vulnerability that is found and patched could also exist as a variant elsewhere in the software and go undiscovered.

The methodology used by Google in this instance was to provide a previously patched vulnerability as a starting point for the Big Sleep agent, and then set it loose hunting for similar vulnerabilities elsewhere in the software.

During that hunt, Big Sleep encountered the flaw, reproduced it in a test case, gradually narrowed the potential causes down to a single issue, and generated an accurate summary of the vulnerability.

Google Project Zero points out that the bug had not previously been spotted by traditional fuzzing because the fuzzing harness was not configured to exercise the relevant extensions. Even when fuzzing was re-run with a matching configuration, the vulnerability still went undiscovered after 150 CPU-hours of fuzzing.

“We hope that in the future this effort will lead to a significant advantage to defenders - with the potential not only to find crashing testcases, but also to provide high-quality root-cause analysis, triaging and fixing issues could be much cheaper and more effective in the future,” the Big Sleep team said. “We aim to continue sharing our research in this space, keeping the gap between the public state-of-the-art and private state-of-the-art as small as possible.”

The full testing methodology and vulnerability discovery details can be found here.

Benedict Collins
Staff Writer (Security)

Benedict has been writing about security issues for over 7 years, first focusing on geopolitics and international relations while at the University of Buckingham. During this time he studied BA Politics with Journalism, for which he received a second-class honours (upper division), then continuing his studies at a postgraduate level, achieving a distinction in MA Security, Intelligence and Diplomacy. Upon joining TechRadar Pro as a Staff Writer, Benedict transitioned his focus towards cybersecurity, exploring state-sponsored threat actors, malware, social engineering, and national security. Benedict is also an expert on B2B security products, including firewalls, antivirus, endpoint security, and password management.
