Meta's AI chief is right to call AI fearmongering 'BS' but not for the reason he thinks

Yann LeCun (Image credit: Getty Images)

AI is the latest technology monster scaring people about the future. Legitimate concerns around things like ethical training, environmental impact, and scams using AI morph into nightmares of Skynet and the Matrix all too easily. The prospect of AI becoming sentient and overthrowing humanity is frequently raised, but, as Meta's AI chief Yann LeCun told The Wall Street Journal, the idea is "complete B.S." LeCun described AI as less intelligent than a cat and incapable of plotting or even desiring anything at all, let alone the downfall of our species.

LeCun is right that AI is not going to scheme its way into murdering humanity, but that doesn't mean there's nothing to worry about. I'm far more concerned about people relying on AI to be smarter than it is. AI is just another technology, which means it's neither good nor evil on its own. But the law of unintended consequences suggests that relying on AI for important, life-altering decisions isn't a good idea.

Think of the disasters and near-disasters caused by trusting technology over human judgment. Rapid-fire stock trading by machines far faster than any human has caused more than one near-meltdown of part of the economy. A much more literal meltdown almost occurred when a Soviet missile detection system glitched and reported inbound nuclear warheads. In that case, only a brave human at the controls prevented global Armageddon.

Now imagine that AI as we know it today keeps trading on the stock market because humans have given it more comprehensive control. Then imagine an AI accepting a faulty missile alert and being allowed to launch missiles without human input.

AI Apocalypse Averted

Yes, it sounds far-fetched that people would put a technology famous for hallucinating facts in charge of nuclear weapons, but it's not that much of a stretch from what already happens. The AI voice from customer service may have decided whether you get a refund before you ever get a chance to explain why you deserve one, and there's no human listening who can change its mind.

AI will only do what we train it to do, and it uses human-provided data to do so. That means it reflects both our best and worst qualities; which facet comes through depends on the circumstances. However, handing over too much decision-making to AI is a mistake at any level. AI can be a big help, but it shouldn't decide whether someone gets hired or whether an insurance policy pays for an operation. What we should worry about is humans misusing AI, accidentally or otherwise, and letting it replace human judgment.

Microsoft's branding of its AI assistants as Copilots is great because it evokes someone there to help you achieve your goals, but who doesn't set them or take any more initiative than you allow. LeCun is correct that AI isn't any smarter than a cat, but a cat with the ability to push you, or all of humanity, off a metaphorical counter is not something we should encourage.


Eric Hal Schwartz
Contributor

Eric Hal Schwartz is a freelance writer for TechRadar with more than 15 years of experience covering the intersection of the world and technology. For the last five years, he served as head writer for Voicebot.ai and was on the leading edge of reporting on generative AI and large language models. He's since become an expert on the products of generative AI models, such as OpenAI’s ChatGPT, Anthropic’s Claude, Google Gemini, and every other synthetic media tool. His experience runs the gamut of media, including print, digital, broadcast, and live events. Now, he's continuing to tell the stories people want and need to hear about the rapidly evolving AI space and its impact on their lives. Eric is based in New York City.
