Are AI toys ethical?
How smart is too smart?
Artificial intelligence (AI) can be found everywhere these days, from the smartphones in our pockets to the microwaves in our kitchens. So, it only follows that AI-powered children’s toys are reaching new heights of popularity.
However, when it comes to kids' toys, how smart is too smart?
With the smart toy market expected to reach $54 billion by 2024 (around £42bn / AU$76bn), manufacturers are increasingly developing and releasing toys that can connect to the internet and learn as children interact with them.
As there currently isn’t any official ‘mark of quality’ for AI devices (although the Foundation for Responsible Robotics is making headway in this area), it’s no wonder that people are concerned about the safety and ethics of giving robots to children as playthings. Could they be hacked over an insecure WiFi network? Could our children even be negatively influenced by wayward robots?
What is a smart toy?
A smart toy is a toy that has a degree of artificial intelligence, meaning it can learn, adjust the way it interacts with the user, react to external stimuli and behave according to pre-programmed patterns. The level of intelligence these toys have varies widely, with some able to use voice recognition, touch sensors and smartphone apps to interact with users.
One of the earliest toys with ambitions to incorporate the principles of AI was the Furby, which on its launch in 1998 became the first commercially successful domestic robot sold to consumers. Children receiving one of the in-demand toys would open the box to find that the Furby spoke only in ‘Furbish’, a synthetic language made up of short phrases and sounds.
Intended to replicate the process of learning a language, the Furby would ‘learn’ more English phrases over time, with the ability to learn certain words and phrases more quickly in response to positive reinforcement from the child. Petting the Furby while it said a particular phrase would encourage it to repeat it again in the future.
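For readers curious how this kind of positive-reinforcement mechanic could be modelled, here’s a minimal, purely illustrative Python sketch. The phrases, thresholds and ‘familiarity’ scores are invented for the example and aren’t taken from the Furby’s actual firmware; it simply shows the idea of a phrase switching from Furbish to English once it has been reinforced often enough.

```python
import random

# Purely illustrative sketch of a Furby-style reinforcement mechanic.
# The phrases, thresholds and 'familiarity' scores below are invented
# for this example; they are not taken from the real toy's firmware.

class ToyPhrase:
    def __init__(self, furbish: str, english: str, threshold: int = 3):
        self.furbish = furbish        # what the toy says at first
        self.english = english        # what it 'learns' to say later
        self.familiarity = 0          # grows with positive reinforcement
        self.threshold = threshold    # petting events needed to switch

    def speak(self) -> str:
        # Once reinforced often enough, English replaces the Furbish phrase
        return self.english if self.familiarity >= self.threshold else self.furbish

def play(phrases: list, petted: bool) -> str:
    """One interaction: the toy says a random phrase; petting it
    while it speaks reinforces that particular phrase."""
    phrase = random.choice(phrases)
    spoken = phrase.speak()
    if petted:
        phrase.familiarity += 1
    return spoken

phrases = [ToyPhrase("doo-dah", "hello"), ToyPhrase("kah boo", "hug me")]
for _ in range(10):
    print(play(phrases, petted=True))
```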
A ‘threat to national security’
Urban legends surrounding the capabilities of the Furby soon flourished, to the point that some intelligence agencies even banned them from their premises in the fear that they would listen in on top secret discussions and later repeat what they had heard.
This, of course, was a myth, as the Furby never had the ability to repeat the phrases its microphones picked up. Still, their startling surge in popularity, combined with society’s apprehension about machine learning at the height of Y2K panic, has led to a mistrust of robots aimed at children that persists to this day.
So, are we right not to fully trust AI toys? Or should we embrace the new technology wholeheartedly and leave 20th-century fears in the last millennium?
What are the benefits of AI toys?
There’s a huge variety of smart toys and child-friendly AI gadgets on the market at the moment, including the cutesy Anki Cozmo robot, the trainable robo-pup Aibo, and even Amazon’s Echo Dot Kids Edition. Frankly, there’s never been a better time to delve into the world of connected toys.
Part of their popularity (aside from the whole “wow, look how cool this is” aspect) is the fact that playing with them can actually be really beneficial to children’s development, particularly in terms of introducing them to electronic devices in an increasingly connected environment. After all, interacting with computers is a skill many children will use throughout their education and into adulthood.
Aside from preparing children for a high-tech future by allowing them to code and program, smart toys can also help them to learn about social interaction, which is particularly useful for children on the autism spectrum who may find it challenging to interact with strangers.
Despite these benefits, there are some who are wary of the increased popularity of AI-powered toys, and not without good reason.
Security issues
If you Google ‘smart toy safety’, the results can appear pretty concerning. The volume of articles about how disconcertingly easy it is to hack into toys that use Bluetooth or WiFi connections could put off any parent. But why would anyone want to hijack a kid’s toy?
Well, for one, toys that play recorded messages through internal speakers, like CloudPets, could be hijacked into issuing voice commands that prompt nearby smart speakers, such as the Amazon Echo, to order products online, as demonstrated by Which? in an investigation into toy safety.
Perhaps even more worrying is the prospect that connected toys with cameras and microphones, like Hello Barbie, could be hijacked and turned into surveillance devices.
In one test, US security researcher Matt Jakubowski was able to override the security feature that restricted the doll to listening only while a button was pushed, meaning Hello Barbie could be made to constantly monitor its environment.
From there, it’s no big leap for hackers to take over the home’s WiFi network, eavesdrop on intimate conversations, collect personal data, and even make Hello Barbie say anything they want to the user – most likely, a child.
Imagination is key
Even if you’re happy with the security of the AI toys you’re buying for your children, their prevalence does throw up some ethical questions. A large part of a child’s development lies in their ability to engage in creative play, where their imagination is the most important plaything they own.
This is why many child development experts prefer ‘open-ended’ toys, like building blocks and dolls, to rigidly programmed AI toys. In the hands of a child, even a cardboard box can be a fairy castle, a pirate ship or a rocket to the moon.
It may be that smart toys aren’t particularly economical for the parents buying them either, as suggested by child development expert Stevanne Auerbach in her book "Smart Play - Smart Toys."
She proposes that different toys have different ‘play values’: toys that don’t encourage exploration and imagination have lower play values, and children are less likely to return to them. So, if a smart toy has very limited applications, it’s unlikely to be picked up again and again.
Kids under pressure
It’s even possible that AI toys, specifically humanoid robots, can have an effect on the way children behave, particularly in terms of compliance and following instructions.
To test this theory, researchers at Bielefeld University replicated the ‘Asch paradigm’, a series of tests developed by social psychologist Solomon Asch to determine how people are affected by peer pressure in a group.
In the experiment, a group of subjects are shown a black line, and are given a choice of three other lines of different lengths. They’re then asked individually which of the three lines (1, 2, or 3) matches the original.
Actors placed in the group are instructed to pick the wrong line, despite the correct answer being obvious, which, Asch found, led to a number of participants being influenced by peer pressure to select the wrong answer.
In the new study, the actors were replaced by three robots – specifically, SoftBank’s cutesy humanoid droid Nao. When the researchers compared the results of children and adults, they found that the children were far more likely than the adults to be influenced by the robots and select the incorrect answer.
Although the researchers didn’t know why children were more susceptible than adults to being influenced by the robots, the finding does raise concerns about the ethics of giving children artificially intelligent toys: what if a maliciously hacked toy told them to purchase something online, give up their address, or even harm themselves or someone else?
How can we make AI toys safe?
We spoke to the lead researcher behind the Bielefeld University study, Anna-Lisa Vollmer, about what parents, toy companies, and even the government can do to ensure the safety of children when playing with connected toys.
She believes that AI toys should be regulated in the same way as non-tech toys, saying, “There are certain mechanisms in place, [for example] legal frameworks, standards, and independent testing institutes that ensure toys are safe.”
Vollmer also thinks that parents must take responsibility to “foresee potential dangers”, and should make an effort to be as well-informed as possible, especially as WiFi-connected toys are constantly changing and adapting as they become ‘smarter’.
According to Vollmer, there are a number of questions parents should ask themselves before buying smart toys for their children, particularly around data protection and security - including whether the company that manufactures the toy can be “trusted to upload only ethical content” and “what kind of data is stored while my child plays?”
However, the onus isn’t solely on parents to safeguard their children. As Vollmer says, “The best AI/robot companies can do to safeguard children, is to be good. The question here should be, how can legal frameworks be developed that can ensure that this field/sector can prosper and thrive?”
With AI toys continuing to soar in popularity, it’s clear that lawmakers, manufacturers, and consumers need to work together to ensure that children can reap the benefits of an early introduction to technology while staying safe.