OpenAI, a company backed by Elon Musk, has decided not to release an AI system that can generate news stories and fiction on the grounds that it could be dangerous in the wrong hands.
OpenAI is a non-profit company that aims to find a way to safely bring about artificial general intelligence. It normally releases its research to the public, but its latest AI model, known as GPT-2, is reportedly so convincing that it has too much potential for misuse, such as generating huge volumes of misleading news stories.
GPT-2 takes a sample of text (anything from a few words to several paragraphs) and predicts the sentences that follow in a similar style, with surprisingly plausible results.
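To get a feel for that prompt-and-continue workflow, here's a minimal sketch in Python. It uses the far smaller GPT-2 checkpoint that was later made publicly available through Hugging Face's transformers library, not the withheld full model, and the prompt text is just an arbitrary example.

```python
# Minimal sketch: prompt-based text generation with the small public
# GPT-2 checkpoint (not OpenAI's withheld full model).
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# Arbitrary example prompt; the model continues it in a similar style.
prompt = "Scientists announced today that"
result = generator(prompt, max_length=60, num_return_sequences=1)
print(result[0]["generated_text"])
```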
The system was trained on a dataset of around eight million web pages (some 40GB of text), gathered by trawling outbound links shared on Reddit – several times the size of the datasets used by previous state-of-the-art systems.
The sheer volume of data gives the system a much better grasp of written language and makes it more general-purpose than other systems. The Guardian reports that it can pass simple reading comprehension tests, and translate and summarize text – often better than systems built specifically for those purposes.
Telling tall tales
To demonstrate why it's keeping GPT-2 under wraps, OpenAI created a tweaked version of the system that can generate an infinite stream of positive or negative product reviews.
Because its training data is unfiltered, GPT-2 could also absorb biases, learning from news stories written with an agenda and reproducing that slant in its own output.
OpenAI says that, as systems like GPT-2 become commonplace, "The public at large will need to become more skeptical of text they find online, just as the 'deep fakes' phenomenon calls for more skepticism about images."
"We need to perform experimentation to find out what they can and can’t do,” said Jack Clark, OpenAI's head of policy. “If you can’t anticipate all the abilities of a model, you have to prod it to see what it can do. There are many more people than us who are better at thinking what it can do maliciously.”