China outstrips GPT-3 with even more ambitious AI language model


A Chinese AI institute has unveiled a new natural language processing (NLP) model that is even larger than the record-setting models created by both Google and OpenAI.

The WuDao 2.0 model was created by the Beijing Academy of Artificial Intelligence (BAAI) and developed with the help of over 100 scientists from multiple organizations. What sets this pre-trained AI model apart is its sheer scale: it uses 1.75tn parameters to simulate conversations, understand pictures, write poems and even create recipes.

Parameters are the values a machine learning model learns during training. As training progresses, these values are repeatedly adjusted, allowing the algorithm to get better at producing the correct outcome over time. Once a model has been trained on a specific data set, such as samples of human speech, what it has learned can then be applied to other, similar problems.
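
To make that idea concrete, here is a toy sketch of parameters improving during training. The two-parameter model, data and learning rate below are invented for illustration; a model like WuDao 2.0 adjusts trillions of such values, but in the same basic way.

```python
# A toy linear model y = w*x + b with just two parameters, w and b.
# Training repeatedly nudges both values toward whatever minimizes
# the error on the training data.

# Hypothetical training data: inputs and the outputs we want.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.0, 5.0, 7.0, 9.0]  # the "correct outcome" here is y = 2x + 1

w, b = 0.0, 0.0   # parameters start out uninformative
lr = 0.01         # learning rate, chosen arbitrarily for this sketch

for step in range(2000):
    # Gradients of the mean squared error with respect to w and b.
    grad_w = sum(2 * ((w * x + b) - y) * x for x, y in zip(xs, ys)) / len(xs)
    grad_b = sum(2 * ((w * x + b) - y) for x, y in zip(xs, ys)) / len(xs)
    # Each update improves the parameters slightly.
    w -= lr * grad_w
    b -= lr * grad_b

print(f"learned parameters: w={w:.2f}, b={b:.2f}")  # approaches w=2, b=1
```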

Models with a higher number of parameters are generally more capable, but training them requires a greater investment of time, money and computing power.

WuDao 2.0 model

Back in January of this year, Google's Switch Transformer set a new record for AI language models with 1.6tn parameters, roughly nine times the 175bn parameters found in OpenAI's GPT-3 model released last year. Now, however, with the release of its WuDao 2.0 model, BAAI has broken the records set by both Google and OpenAI.
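
For a sense of scale, the parameter counts cited above compare as follows (the figures are those reported in this article; the comparison itself is simple arithmetic):

```python
# Parameter counts cited in this article.
gpt3 = 175e9       # OpenAI GPT-3 (2020)
switch = 1.6e12    # Google Switch Transformer (January 2021)
wudao2 = 1.75e12   # BAAI WuDao 2.0

print(f"Switch Transformer vs GPT-3: {switch / gpt3:.1f}x")    # ~9.1x
print(f"WuDao 2.0 vs Switch:         {wudao2 / switch:.2f}x")  # ~1.09x
print(f"WuDao 2.0 vs GPT-3:          {wudao2 / gpt3:.1f}x")    # ~10.0x
```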

WuDao 2.0 is able to understand both Chinese and English, having been trained on 1.2TB of text in each language and 4.9TB of images and text overall. So far the project has 22 partners in China, including Xiaomi, Meituan and Kuaishou.

Chinese AI researcher Blake Yan told the South China Morning Post how these large AI language models can apply the knowledge they already have to new tasks, saying:

“These sophisticated models, trained on gigantic data sets, only require a small amount of new data when used for a specific feature because they can transfer knowledge already learned into new tasks, just like human beings. Large-scale pre-trained models are one of today’s best shortcuts to artificial general intelligence.”
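
As an illustration of the transfer learning Yan describes, here is a minimal sketch of fine-tuning a publicly available pre-trained model on a tiny task-specific dataset, using the Hugging Face transformers library. WuDao 2.0 itself is not distributed through this interface; the model name, two-example dataset and labels below are stand-ins for illustration only.

```python
# Minimal fine-tuning sketch using the Hugging Face transformers library.
# 'bert-base-chinese' and the toy dataset are illustrative stand-ins;
# WuDao 2.0 is not available through this API.
import torch
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("bert-base-chinese")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-chinese", num_labels=2)

# "Only a small amount of new data": a handful of labeled examples.
texts = ["这部电影很好看", "这部电影很无聊"]  # "great film" / "boring film"
labels = [1, 0]
enc = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

class TinyDataset(torch.utils.data.Dataset):
    def __len__(self):
        return len(labels)
    def __getitem__(self, i):
        return {"input_ids": enc["input_ids"][i],
                "attention_mask": enc["attention_mask"][i],
                "labels": torch.tensor(labels[i])}

# The pre-trained parameters already encode general language knowledge;
# fine-tuning only nudges them toward the new task.
trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=1),
    train_dataset=TinyDataset(),
)
trainer.train()
```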

Artificial general intelligence refers to the hypothetical ability of a machine to learn any task in the way that a human can. It is the end goal of training these large AI language models, and with the release of WuDao 2.0, it appears we're one step closer to achieving it.

Via South China Morning Post

Anthony Spadafora

After working with the TechRadar Pro team for the last several years, Anthony is now the security and networking editor at Tom’s Guide where he covers everything from data breaches and ransomware gangs to the best way to cover your whole home or business with Wi-Fi. When not writing, you can find him tinkering with PCs and game consoles, managing cables and upgrading his smart home.