This was the OMG moment I realized it may already be too late to regulate AI

Sam Altman testifies before Congress
(Image credit: C-Span)

It's time to start worrying about, and regulating, AI in equal measure.

I'll tell you what convinced me. No, it wasn't the low-key but determined calls from OpenAI CEO Sam Altman, during Tuesday's (May 16) US Senate hearing on AI, for a regulatory body and global oversight. It wasn't even the more worrisome rhetoric from almost-alarmist Professor Gary Marcus. Yes, all that had an impact, and I want to dig into it. What really got me, though, were the first few minutes of what turned into a three-hour hearing.

That was when Senator Richard Blumenthal opened the hearing with a prerecorded speech that outlined the dangers of black-box, unregulated AI, and diminishing trust, ending with "This is not the future we want."

It was a good way to kick off the hearing – Altman's first – setting as it did an inquisitive and concerned tone. Except the voice and words were not Senator Blumenthal's. He explained:

"If you were listening from home you might’ve thought that voice was mine and the words from me, but in fact that voice was not mine, the words were not mine, and the audio was an AI voice cloning software trained on my floor speeches. The remarks were written by ChatGPT when it was asked how I would open this hearing."

While I noticed Senator Josh Hawley visibly smirking next to Blumenthal, I felt a chill go down my spine. I mean, I know generative AI is capable of all this, and yet, I'm not sure it had ever been presented in such stark terms, and on such a lofty and public stage.

It was, to be honest, an OMG moment – and that's putting it politely.

Risk vs reward

The rest of the hearing was far less revelatory. Senators, for once, appeared to have done their homework, talking intelligently about models, training, and content rights. They dove into how easily these chatbots can manipulate people, and how they often lapse into hallucinatory responses.

As for Altman, he made it clear that he was not there to beat back criticism of OpenAI and GPT-4.

"We think that regulatory intervention by governments will be critical to mitigating the risks of increasingly powerful models," said Altman, who supports not just a regulatory body, and the licensing of models of a certain degree of power, but some kind of global overnight, although he accepts that will be harder to achieve.

Altman displayed a surprising amount of empathy, telling the senators, "We understand the people are anxious about how it could change the way we live. We are, too."

Many of the assembled Senators expressed a desire to avoid the mistakes made with social media: acting too late, and assuming that antiquated policies (like Section 230) were up to the task of navigating the modern social media landscape. 

This is laudable, but after hearing Senator Blumenthal's ChatGPT recording, I wondered if maybe we're already too late. Although I would say the senator's quick admission that the speech was not his is in line with Altman's recommendation that AI-generated content and images always make their origins clear.

Senators also rightly compared AI's rapid emergence to the invention of the printing press, but also, and not unreasonably, worried that it might be more like the creation of the atomic bomb.

Everyone wants this

There were also recommendations for AI 'Nutrition Labels' that would explain exactly what went into training an AI, and which would help us understand what a generative AI produces and why.

Obviously, Altman also did his best to explain that AI, and the work OpenAI does, can be a force for good. Yes, there will be job losses, but he insists there will also be a lot of job creation. Altman outlined the safeguards his company builds into development, including testing GPT-4 for six months before it was released publicly.

He added, "The benefits of the tools we’ve deployed so far vastly outweigh the risks."

But with the next US Presidential election just a year away, and the growing realization that vast numbers of people can be fooled by the content chatbots gleefully spit out, the clock is ticking.

Despite the almost unanimous agreement that we need AI regulation now, the prospects for Congress authorizing and funding an AI regulatory body anytime soon are slim. Regulation of any kind moves at its own glacial pace, and rarely seems designed even to catch up with, let alone get ahead of, risks. We have self-driving car technology on the road right now, for instance, but few nationwide rules for managing it.

Senator Blumenthal didn't just kick off the hearing with a bit of AI showmanship, he made a perhaps unintentional point: it's already too late to get ahead of this generative AI freight train. The question is, can we climb aboard, walk to the engineer's cabin and take control before it comes off the rails?

Lance Ulanoff
Editor At Large

A 38-year industry veteran and award-winning journalist, Lance has covered technology since PCs were the size of suitcases and “on line” meant “waiting.” He’s a former Lifewire Editor-in-Chief, Mashable Editor-in-Chief, and, before that, Editor in Chief of PCMag.com and Senior Vice President of Content for Ziff Davis, Inc. He also wrote a popular, weekly tech column for Medium called The Upgrade.

Lance Ulanoff makes frequent appearances on national, international, and local news programs including Live with Kelly and Mark, the Today Show, Good Morning America, CNBC, CNN, and the BBC. 
