Researchers are using Wikipedia to teach robots that you can’t eat tables
Who says you shouldn’t use Wikipedia as a source?
Verb-object pairings come pretty naturally to humans. You instinctively know that your phone can be picked up, put down, turned on, smashed, even loved. But you can’t swim it, dance it, or stir it.
Robots don’t have the same instincts, so they often lack an understanding of what you can and can’t do to an object, which means they can struggle to work out how to interact with it.
This is a problem that a team at Brigham Young University has been tackling, using an unconventional resource: Wikipedia.
“When machine learning researchers turn robots or artificially intelligent agents loose in unstructured environments, they try all kinds of crazy stuff,” said co-author Ben Murdoch.
“The common-sense understanding of what you can do with objects is utterly missing, and we end up with robots who will spend thousands of hours trying to eat the table.”
The team developed a method for teaching artificially intelligent ‘agents’ which actions can be applied to an object by cross-referencing verb-object pairings found in the text of the publicly editable encyclopedia.
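The article doesn’t detail the team’s exact pipeline, but the core idea can be sketched in a few lines: parse encyclopedia-style sentences, count which verbs take which nouns as direct objects, and treat frequent pairings as plausible actions for that object. The snippet below is a minimal illustration of that idea using spaCy; the toy corpus and function names are our own, not the researchers’ code.

```python
# Illustrative sketch only -- not the BYU team's published method.
# Mine verb-object pairings from encyclopedia-style text so an agent can
# score which actions plausibly apply to an object.
from collections import Counter, defaultdict

import spacy  # pip install spacy && python -m spacy download en_core_web_sm

nlp = spacy.load("en_core_web_sm")

def count_verb_object_pairs(sentences):
    """Count (object noun -> verb) pairings found via dependency parsing."""
    counts = defaultdict(Counter)
    for doc in nlp.pipe(sentences):
        for token in doc:
            if token.dep_ == "dobj" and token.head.pos_ == "VERB":
                counts[token.lemma_.lower()][token.head.lemma_.lower()] += 1
    return counts

def plausible_verbs(counts, noun, top_n=5):
    """Return the verbs most often seen acting on `noun` in the corpus."""
    return [verb for verb, _ in counts[noun].most_common(top_n)]

# Toy corpus standing in for Wikipedia sentences.
corpus = [
    "She picked up the phone and turned it on.",
    "He smashed the phone against the wall.",
    "The chef stirred the soup slowly.",
]
counts = count_verb_object_pairs(corpus)
print(plausible_verbs(counts, "phone"))  # e.g. ['pick', 'smash']
print(plausible_verbs(counts, "table"))  # [] -- no evidence you can eat a table
```

On a real Wikipedia-sized corpus the counts become dense enough that rare or nonsensical pairings (“eat the table”) score far below common ones (“clean the table”), which is the kind of common-sense filter the researchers describe.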
Bring me the horizon
To test how well the agents had learned, they were put through a series of text-based adventure games, in which the game describes a situation and the agent has to respond with a suitable action phrase. According to the BYU newsletter, the Wikipedia solution “improved the computer’s performance on 12 out of 16 games”.
The practical applications of this are far-reaching, with lead author Nancy Fulda envisioning that a care robot with the ability to interact with the world intelligently “has incredible potential to do good, to help people”.
So if a robot is told to “get my glasses”, it would understand not only that the glasses were the object required, but also that glasses can be lifted, carried, and passed, actions that would otherwise have to be programmed individually.
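To make that concrete, here is a hypothetical sketch of how a fetch command might consult a learned affordance table rather than hand-coded rules; the object names, the table, and the function are invented for illustration and are not drawn from the team’s work.

```python
# Hypothetical illustration: consult learned affordances instead of
# hard-coded rules when planning a fetch. The table below is invented.
AFFORDANCES = {
    "glasses": {"lift", "carry", "pass"},
    "table": {"move", "clean"},
}

def plan_fetch(obj):
    """Return the action sequence for 'get my <obj>' if the object affords it."""
    required = ["lift", "carry", "pass"]  # steps implied by "get my <obj>"
    known = AFFORDANCES.get(obj, set())
    if not set(required) <= known:
        raise ValueError(f"No evidence that a {obj} can be fetched")
    return [f"{step} {obj}" for step in required]

print(plan_fetch("glasses"))  # ['lift glasses', 'carry glasses', 'pass glasses']
```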
The team recently presented its work at the International Joint Conference on Artificial Intelligence, and acknowledges that there’s still a lot of work to be done before it reaches the end goal of a fully functioning android with these capabilities. As we hear of any further developments, we’ll let you know.
- Want more cool news about people training robots? Check out: Engineers teach robots to understand emotion through touch
Source: BYU