Beyond voice control: the future of the 'zero UI'
Welcome to the hands-free, face-first future of computing
There are no apps, there's nothing to click, and the remote control that's always three seconds behind has been tossed away for good. Welcome to the zero user interface, or zero UI, where there's just you, your personal digital assistant and the sound of your own voice.
The appearance of Amazon's digital assistant Alexa on a whole slew of gadgets, from the eponymous Echo and Echo Dot to all manner of devices unveiled at January's CES, even 'homebots', has put voice at the forefront of the consumer electronics industry. However, the voice revolution stretches far deeper than just being able to ask your light switch what time it is.
Expanding voice tech
It may be kicking off with the Amazon Echo and Echo Dot, Google Home, and whatever Apple launches (or doesn't) later this year, but the zero UI is set not only to expand voice tech into homebots, chatbots and voice biometrics, but also to embrace face recognition tech, gesture control and haptic feedback. So much so that we could soon be looking back and laughing about the time we used to talk to Alexa, Google Assistant and Siri.
Speech science
How often does Alexa mishear you? Speech recognition software typically faces a trade-off: used in real time it can only listen out for set phrases, whereas given time to post-process a recording it can transcribe speech far more accurately.
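To make that trade-off concrete, here's a minimal sketch using the third-party Python SpeechRecognition library (our own illustration; it's not what Alexa or Speechmatics actually runs): live listening forces short snippets for responsiveness, while transcribing a saved recording gives the engine full context and no time pressure.

```python
import speech_recognition as sr  # pip install SpeechRecognition (mic input also needs PyAudio)

r = sr.Recognizer()

# Real-time mode: grab a short snippet so the assistant feels responsive,
# at the cost of only catching brief phrases at a time
with sr.Microphone() as source:
    r.adjust_for_ambient_noise(source)
    snippet = r.listen(source, phrase_time_limit=3)
try:
    print("Live guess:", r.recognize_google(snippet))
except sr.UnknownValueError:
    print("Didn't catch that")  # the 'Alexa misheard you' case

# Post-processing mode: transcribe a whole recording ('meeting.wav' is a
# placeholder file name), where the engine can take as long as it needs
with sr.AudioFile("meeting.wav") as source:
    audio = r.record(source)
print("Considered transcript:", r.recognize_google(audio))
```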
"We are seeing a shift in the tech industry as we move away from touchpad technology towards speech as the main form of communication," says Dr Hermann Hauser, co-founder of Amadeus Capital Partners, an investor in Speechmatics, a real-time speech recognition system that claims to understand many languages with high accuracy. However, voice tech still has major drawbacks.
Multi-tasking … and text?
We all multi-task, but Alexa doesn't. "If you could have multiple 'conversations' going on at the same time, in different stages, it would be a much better experience," says Thomas Staven, global head of pre-sales and corporate product management at Unit4, which has developed a digital assistant called Wanda.
Staven thinks that's exactly where digital assistants are headed, but also that the zero UI needs to be as flexible as possible to suit whatever environment people find themselves in.
"Sometimes voice is perfect – like in the privacy of your home, or in the car – but in a crowded office you cannot use voice as interaction," he adds. "Another ‘UI’ like text should be available – we need to offer flexibility here."
That's exactly what another company is working on – and it goes way beyond adding text.
Introducing ultrasound
What if you could communicate with Alexa, or a personal robot, using your hands?
"Voice is very powerful, and will be one of the primary interaction methods in the future, but there are a lot of things it will never be good at, like choosing between things, " says Tom Carter, CTO at Ultrahaptics, which has developed a technology that offers mid-air touch using ultrasound.
"Soundwaves are just pressure waves moving through the air, and at specific points you get very high pressure, and low or normal pressure everywhere else," explains Carter. "At the high-pressure spots there's enough force generated to displace the surface of your skin by gently pushing on it."
Haptic control
This goes way beyond the haptic buzz you get from the Apple Watch when a message comes in. Using an array of speakers, the soundwaves can be manipulated to change the type of vibration your hands feel, so all manner of clicks, dials, shapes and textures can be created.
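Part of the trick is that skin can't feel a steady 40kHz carrier at all; systems like this typically vary the carrier's strength at a low rate the fingertips can sense, and changing that modulation changes the sensation. A simplified sketch of the idea (the frequencies and sample rate are illustrative):

```python
import numpy as np

SAMPLE_RATE = 192_000   # high enough to represent the ultrasonic carrier
CARRIER_HZ = 40_000     # inaudible, and imperceptible to skin on its own
MOD_HZ = 200            # a low rate in the band fingertips feel strongly

t = np.arange(0, 0.05, 1 / SAMPLE_RATE)                 # 50ms of signal
envelope = 0.5 * (1 + np.sin(2 * np.pi * MOD_HZ * t))   # slow 0..1 pulse
drive = envelope * np.sin(2 * np.pi * CARRIER_HZ * t)   # modulated carrier

# Swapping the sine envelope for a square or ramp changes the perceived
# 'texture' - the basis of the clicks and dials described above
```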
"We can sculpt the acoustic field," says Carter, who is currently developing the technology for a car manufacturer. "As you're driving along, you can hold your hand out and get a projection of the volume dial on your hand – it finds you and stays stuck to your hand," he says. "If you bring back touch, it's much more like operating a real user interface, but it's flexible and invisible."
Ultrahaptics' ultrasound technology integrates with existing gesture-tracking tech, including Xbox Kinect, Leap Motion and Intel's RealSense camera.
A holographic Alexa?
Such advanced gesture technology could also be used to create haptic feedback interfaces for home appliances, such as ovens, but the goal is nothing short of the smart home at the speed of sound.
"With AR, 3D displays and 3D holograms coming out now, you can imagine a future where you have a 3D Alexa standing on the sideboard," says carter. "And when there's something that's too tricky for talking it can pop up a holographic interface above the speaker and you can reach out and, using technology, touch out and feel it for finer interactions."
Ultrahaptics' technology is also likely to be used in location-based VR, in cinemas, arcades and theme parks. Carter adds: "We can make you feel the crackle in your fingertips as you send force-lightning out of your hands, or as you cast a magic spell."