Elon Musk wants to pause AI training - but that's a terrible idea


The AI genie is out of the bottle, riding a horse that has long since left the barn, and I wish good luck to anyone attempting to pause AI development and training for six minutes, let alone six months.

"Pause Giant AI Experiments: An Open Letter" is a call to action (or inaction) signed, without apparent irony, by Elon Musk (who co-founded OpenAI with Sam Altman in 2015 before walking away), among others. The core ask is simple: "We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4." In other words, a growing number of experts are terrified by what they've seen of OpenAI's GPT-4 (and perhaps of earlier and competing AIs like Google Bard and Bing AI, which uses a customized version of OpenAI's technology).

I get it. While the letter makes no claim of sentience, the signatories describe these AI systems as having "human-competitive" intelligence. There's no doubt in my mind that any of these chatbots can outpace humans when it comes to data retrieval and even the crafting of factual-sounding content. On the other hand, we've all seen how wildly imperfect even the best of these AIs can be. They still have a habit of presenting false information as truth.

If anything, these AIs need more training right now and not less. 

What I mean is that even if we pause large language model (LLM) training for six months, no one will stop using Bing, Bard, and ChatGPT. Thanks to features like the new ChatGPT plugins, these systems will be woven into many of the services and apps we use every day. Their ability to understand us, respond in kind, and get their damn facts straight will be more important than ever.

A race we can't pause

Musk and company seem to fear out-of-control AIs, when the reality is that we'll all be controlling them every day. The question is, what do we do with the information they give us?

The letter should have warned against blindly trusting AIs, regardless of their current and future computational power, rather than calling for a pause in training.

Let's say, for a moment, that Google, OpenAI, and various partners all agree to pause this training. Should we assume that everyone around the world will do the same? Should we expect China or Russia to pause their own training efforts?

Of course not. This is officially an arms race, and stepping out even temporarily could be disastrous for the position of the US and its largely Western global partners.

It's not that I entirely disagree with the letter. This bit makes sense: "implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts."

But the letter assumes that none of this has been happening. OpenAI was founded for that express purpose.

Do I fear AI? Not really. Do I think AI should be developed and managed in a responsible way? Of course.

Do I think pausing AI training will help in this effort? Not one little bit.

BTW, I asked ChatGPT (running on the older, less capable GPT-3.5) for its opinion. Unsurprisingly, it took the high road.

Lance Ulanoff
Editor At Large

A 38-year industry veteran and award-winning journalist, Lance has covered technology since PCs were the size of suitcases and "on line" meant "waiting." He's a former Lifewire Editor-in-Chief, Mashable Editor-in-Chief, and, before that, Editor-in-Chief of PCMag.com and Senior Vice President of Content for Ziff Davis, Inc. He also wrote a popular weekly tech column for Medium called The Upgrade.

Lance Ulanoff makes frequent appearances on national, international, and local news programs including Live with Kelly and Mark, the Today Show, Good Morning America, CNBC, CNN, and the BBC.