Meta is on the brink of releasing AI models that it claims will have "human-level cognition" - hinting at new models capable of more than simple conversations


We could be on the cusp of a whole new realm of AI large language models and chatbots thanks to Meta’s Llama 3 and OpenAI’s GPT-5, as both companies emphasize the hard work going into making these bots more human. 

At an event earlier this week, Meta reiterated that Llama 3 will be rolling out to the public in the coming weeks, with Meta’s president of global affairs Nick Clegg stating: “Within the next month, actually less, hopefully in a very short period, we hope to start rolling out our new suite of next-generation foundation models, Llama 3.”

Meta’s large language models are publicly available, allowing developers and researchers free and open access to the tech to create their bots or conduct research on various aspects of artificial intelligence. The models are trained on a plethora of text-based information, and Llama 3 promises much more impressive capabilities than the current model. 

No official release date for Meta’s Llama 3 or OpenAI’s GPT-5 has been announced just yet, but we can safely assume the models will make an appearance in the coming weeks.

Smarten Up 

Joelle Pineau, the vice president of AI research at Meta, noted that “We are hard at work in figuring out how to get these models not just to talk, but actually to reason, to plan . . . to have memory.” OpenAI’s chief operating officer Brad Lightcap told the Financial Times in an interview that the next GPT version would show progress in solving difficult queries with reasoning.

So, it seems the next big push with these AI bots will be introducing the human element of reasoning and, for lack of a better term, ‘thinking’. Lightcap also said “We’re going to start to see AI that can take on more complex tasks in a more sophisticated way,” adding “We’re just starting to scratch the surface on the ability that these models have to reason.”

As tech companies like OpenAI and Meta continue working on more sophisticated and ‘lifelike’ human interfaces, it is both exciting and somewhat unnerving to think about a chatbot that can ‘think’ with reason and memory. Tools like Midjourney and Sora have demonstrated just how good AI can be in terms of quality output, and Google Gemini and ChatGPT are great examples of how helpful text-based bots can be in everyday life.

With so many ethical and moral concerns still unaddressed in the tools available right now, I dread to think what kind of nefarious things could be done with more human AI models. Plus, you must admit it’s all starting to feel a little bit like the start of a sci-fi horror story.

Muskaan Saxena
Computing Staff Writer