OpenAI is upping its bug bounty rewards as security worries rise


  • OpenAI is increasing its bug bounty payouts
  • Spotting high-impact vulnerabilities could net researchers $100k
  • The move comes as more AI agents and systems are developed

OpenAI is hoping to encourage security researchers to hunt for vulnerabilities in its systems by increasing its bug bounty rewards.

The AI giant has revealed it is raising the maximum payout of its Security Bug Bounty Program from $20,000 to $100,000, widening the scope of its Cybersecurity Grant Program, and developing new tools to protect AI agents from malicious threats.

This follows recent warnings that AI agents can be hijacked to write and send phishing attacks, and the company says the move reflects its “commitment to rewarding meaningful, high-impact security research that helps us protect users and maintain trust in our systems.”

Disrupting threats

Since the Cybersecurity Grant Program launched in 2023, OpenAI has reviewed thousands of applications and funded 28 research initiatives, giving the firm valuable insights into areas such as autonomous cybersecurity defenses, prompt injection, and secure code generation.

OpenAI says it continually monitors malicious actors looking to exploit its systems, identifying and disrupting targeted campaigns.

“We don’t just defend ourselves,” the company said, “we share tradecraft with other AI labs to strengthen our collective defenses. By sharing these emerging risks and collaborating across industry and government, we help ensure AI technologies are developed and deployed securely.”

OpenAI is not the only company to boost its rewards program: in 2024, Google announced a fivefold increase in its bug bounty rewards, arguing that more secure products make bugs harder to find, which is reflected in the higher payouts.

With more advanced models and agents, and more users and developers, there are inevitably more points of vulnerability that could be exploited, making the relationship between researchers and software developers more important than ever.

“We are engaging researchers and practitioners throughout the cybersecurity community,” OpenAI confirmed.

“This allows us to leverage the latest thinking and share our findings with those working toward a more secure digital world. To train our models, we partner with experts across academic, government, and commercial labs to benchmark skills gaps and obtain structured examples of advanced reasoning across cybersecurity domains.”

Via CyberNews

Ellen Jennings-Trace
Staff Writer

Ellen has been writing for almost four years, with a focus on post-COVID policy, while studying for a BA in Politics and International Relations at Cardiff University, followed by an MA in Political Communication. Before joining TechRadar Pro as a Junior Writer, she worked on Future Publishing’s MVC content team, collaborating with merchants and retailers to upload content.
