'AI Godfather' sounds the alarm on autonomous AI


  • 'AI godfather' Yoshua Bengio warns that the AI race prioritizes speed over safety
  • This risks unpredictable and dangerous consequences
  • He urges global cooperation to enforce AI regulations before autonomous systems become difficult to control

'AI godfather' Yoshua Bengio helped create the foundations of the neural networks running all kinds of AI tools today, from chatbots mimicking cartoon characters to scientific research assistants. Now, he has an urgent warning for AI developers, as he explained in a Sky News interview. The race to develop ever-more-powerful AI systems is escalating at a pace that, in his view, is far too reckless.

And it’s not just about which company builds the best chatbot or who gets the most funding. Bengio believes that the rapid, unregulated push toward advanced AI could have catastrophic consequences if safety isn’t treated as a top priority.

Bengio described watching developers race against each other, get sloppy, and take dangerous shortcuts. Though speed can make the difference between breaking ground on a new kind of product worth billions and playing catch-up to a rival, that trade-off may not be worth it to society.

That pressure has only intensified for AI developers with the rise of Chinese AI firms like DeepSeek, whose advanced chatbot capabilities have caught the attention of Western companies and governments alike. Instead of slowing down and carefully considering the risks, major tech firms are accelerating their AI development in an all-out sprint for superiority. Bengio worries this will lead to rushed deployments, inadequate safety measures, and systems that behave in ways we don’t yet fully understand.

Bengio explained that he has been warning about the need for stronger AI oversight, but recent events have made his message feel even more urgent. The current moment is a "turning point," where we either implement meaningful regulations and safety protocols or risk letting AI development spiral into something unpredictable.

After all, more and more AI systems don’t just process information but can make autonomous decisions. These AI agents are capable of acting on their own rather than simply responding to user inputs. They're exactly what Bengio sees as the most dangerous path forward. With enough computing power, an AI that can strategize, adapt, and take independent actions could quickly become difficult to control should humans want to take back the reins.

AI takeover

The problem isn’t just theoretical. Already, AI models are making financial trades, managing logistics, and even writing and deploying software with minimal human oversight. Bengio warns that we’re only a few steps away from much more complex, potentially unpredictable AI behavior. If a system like this is deployed without strict safeguards, the consequences could range from annoying hiccups in service to full-on security and economic crises.

Bengio isn’t calling for a halt to AI development. He made clear that he's an optimist about AI's abilities when used responsibly for things like medical and environmental research. He just sees a need for a priority shift toward more thoughtful and deliberate work on AI technology. His unique perspective may carry some weight when he calls for AI developers to put ethics and safety ahead of competing with rival companies. That's why he participates in policy discussions at events like the upcoming International AI Safety Summit in Paris.

He also thinks regulation needs to be bolstered by companies willing to take responsibility for their systems. They need to invest as much in safety research as they do in performance improvements, he claims, though that balance is hard to imagine appearing in today's AI melee. In an industry where speed equals dominance, no company wants to be the first to hit the brakes.

The global cooperation Bengio pitches might not appear immediately, but as the AI arms race continues, warnings from Bengio and others in similar positions of prestige grow more urgent. He hopes the industry will recognize the risks now rather than when a crisis forces the matter. The question is whether the world is ready to listen before it’s too late.

Eric Hal Schwartz
Contributor

Eric Hal Schwartz is a freelance writer for TechRadar with more than 15 years of experience covering the intersection of the world and technology. For the last five years, he served as head writer for Voicebot.ai and was on the leading edge of reporting on generative AI and large language models. He's since become an expert on the products of generative AI models, such as OpenAI’s ChatGPT, Anthropic’s Claude, Google Gemini, and every other synthetic media tool. His experience runs the gamut of media, including print, digital, broadcast, and live events. Now, he's continuing to tell the stories people want and need to hear about the rapidly evolving AI space and its impact on their lives. Eric is based in New York City.
