OpenAI has a new scale for measuring how smart their AI models are becoming – which is not as comforting as it should be


OpenAI has developed an internal scale for tracking its large language models' progress toward artificial general intelligence (AGI), according to a report from Bloomberg.

AGI generally means AI with human-like intelligence and is considered the broad goal for AI developers. OpenAI's charter describes AGI as "highly autonomous systems that outperform humans at most economically valuable work." That's a point far beyond current AI capabilities. The new scale aims to provide a structured framework for tracking progress and setting benchmarks in that pursuit.

The scale introduced by OpenAI breaks the path to AGI into five levels, or milestones:

- Level 1: Today's chatbots, including ChatGPT and its rivals.
- Level 2: An AI system capable of matching a human with a PhD at solving basic problems. OpenAI claims to be on the brink of reaching this level, which may be a reference to GPT-5; OpenAI CEO Sam Altman has said GPT-5 will be a "significant leap forward."
- Level 3: An AI agent capable of handling tasks for you without you being there.
- Level 4: An AI that can actually invent new ideas and concepts.
- Level 5: An AI able to take over tasks not just for an individual but for entire organizations.

After Level 2, the levels become increasingly complex.

Level Up

The tiered approach makes sense for OpenAI, or really for any AI developer. A comprehensive framework not only helps OpenAI track its progress internally but could also become a shared standard for evaluating other AI models.

Still, achieving AGI is not going to happen overnight. Previous comments by Altman and others at OpenAI suggest it could arrive in as little as five years, but timelines vary significantly among experts. The computing power required, along with the financial and technological challenges involved, is substantial.

That's on top of the ethics and safety questions AGI raises. There's some very real concern about what AI at that level would mean for society, and OpenAI's recent moves may not reassure anyone. In May, the company dissolved its safety team following the departure of its leader, OpenAI co-founder Ilya Sutskever. High-profile researcher Jan Leike also quit, citing concerns that OpenAI's safety culture was being ignored. Nonetheless, by offering a structured framework, OpenAI aims to set concrete benchmarks for its models and those of its competitors, and perhaps help all of us prepare for what's coming.

Eric Hal Schwartz
Contributor

Eric Hal Schwartz is a freelance writer for TechRadar with more than 15 years of experience covering the intersection of the world and technology. For the last five years, he served as head writer for Voicebot.ai, where he was on the leading edge of reporting on generative AI and large language models. He has since become an expert on generative AI products such as OpenAI's ChatGPT, Anthropic's Claude, Google Gemini, and other synthetic media tools. His experience runs the gamut of media, including print, digital, broadcast, and live events. Now, he's continuing to tell the stories people want and need to hear about the rapidly evolving AI space and its impact on their lives. Eric is based in New York City.
