What does AI in a phone really mean?
Machine learning is used in a lot of ways in our phones.
Artificial intelligence (AI) is one of the most important recent developments in mobile phones. You’ll hear the term all the time if you follow tech closely enough.
But it’s rarely mentioned in relation to what is most recognizably AI, namely digital assistants like Google Assistant and Amazon Alexa.
Why not? Google and Amazon want these assistants to seem breezy and approachable. There are a few too many stories of AI stealing our jobs and preparing for world domination for it to be beneficial to their image.
But where else do we find phone AI, or claims of it?
Dedicated AI hardware
Several new and recent phones have hardware optimized for AI. These chips are usually called a neural engine or neural processing unit.
They are designed for the fast processing of rapidly changing image data, which would eat up more bandwidth and power on a conventional processor. You’ll find such a unit in the Huawei Mate 20 Pro’s Kirin 980 chipset and the iPhone XS’s A12 Bionic chip.
Qualcomm also added AI optimization to its Snapdragon 845 chipset, used in numerous high-end 2018 phones. These tweaks are particularly useful for camera-based AI, which tends to intersect with things like augmented reality and face recognition.
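To give a feel for how this plays out in software: on Android, frameworks like TensorFlow Lite let an app hand a neural network to whatever accelerator the chipset exposes, falling back to the CPU when none is available. A minimal sketch, with the model and delegate library names as placeholders rather than real files:

```python
import tensorflow as tf

# Placeholder file name for illustration only.
MODEL = "scene_classifier.tflite"

try:
    # Route inference to a hardware delegate (NPU/DSP) if one is exposed.
    delegate = tf.lite.experimental.load_delegate("libai_accelerator.so")
    interpreter = tf.lite.Interpreter(model_path=MODEL,
                                      experimental_delegates=[delegate])
except (ValueError, OSError):
    # No accelerator available: run the same model on the CPU instead.
    interpreter = tf.lite.Interpreter(model_path=MODEL)

interpreter.allocate_tensors()  # ready to feed camera frames for inference
```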
Camera scene and object recognition
Huawei was the first phone company to build the key appeal of one of its phones around AI, with the Huawei Mate 10. This used the Kirin 970 chipset, which introduced Huawei’s neural processing unit to the public.
Camera app scene recognition was the clearest application of its AI. The Mate 10 could identify 13 scene types, including dog or cat pictures, sunsets, images of text, blue sky photos and snow scenes.
Dedicated cameras have had comparable Intelligent Auto modes, capable of knowing what they’re looking at, for years, and Sony Xperia phones made a fuss about similar software without the AI tagline years before.
However, this take genuinely uses AI: it recognizes objects in the scene and tailors the extra image processing to what it finds.
What you end up with is a turbo-charged image designed to be ready for mountains of social media likes. ‘AI’ is used to make a next-generation version of existing software seem more exciting.
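Huawei’s 13-category network isn’t public, but the broad recipe is standard image classification. Here’s a rough sketch using an off-the-shelf pretrained model, with the preset mapping invented purely for illustration:

```python
import numpy as np
import tensorflow as tf

# An off-the-shelf ImageNet classifier stands in for the phone's own model.
model = tf.keras.applications.MobileNetV2(weights="imagenet")

def pick_camera_preset(image_path):
    """Classify the framed scene, then choose a processing preset for it."""
    img = tf.keras.preprocessing.image.load_img(image_path, target_size=(224, 224))
    x = tf.keras.applications.mobilenet_v2.preprocess_input(
        tf.keras.preprocessing.image.img_to_array(img)[np.newaxis])
    top = tf.keras.applications.mobilenet_v2.decode_predictions(
        model.predict(x), top=1)[0][0]
    _, label, score = top
    # Invented mapping from recognized subject to a camera tuning preset.
    presets = {"golden_retriever": "pet", "seashore": "landscape"}
    return label, presets.get(label, "default"), score
```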
AI-assisted night shooting
Huawei came up with a much more interesting use for AI in the Huawei P20 Pro. It’s a night shooting mode that emulates the effect of a long exposure while letting you hold the phone in your hands. No tripod required.
You can see how it works as you shoot. The P20 Pro, and the newer Mate 20 Pro, take a whole series of shots at different exposure levels, then merge the results for the best low-light handheld images you’ve seen from a phone.
The AI part is used to stitch the images together, compensating for slight differences between shots caused by natural handshake and by objects moving in the scene. There’s just one problem: images tend to take 5-6 seconds to capture, which is a pretty long time compared to standard shots.
Its results do mark a significant step forward in the flexibility of phone cameras, though.
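The same align-then-merge idea can be tried on a desktop with OpenCV’s exposure fusion tools. This is the general technique, not Huawei’s own pipeline:

```python
import cv2

def merge_night_burst(paths):
    """Align a handheld burst of exposures, then fuse them into one frame."""
    frames = [cv2.imread(p) for p in paths]
    # Compensate for slight handshake by aligning the frames to each other.
    cv2.createAlignMTB().process(frames, frames)
    # Exposure fusion keeps the best-exposed detail from every shot.
    fused = cv2.createMergeMertens().process(frames)  # float32, values 0..1
    return (fused * 255).clip(0, 255).astype("uint8")

# e.g. merge_night_burst(["under.jpg", "mid.jpg", "over.jpg"])
```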
Apple uses a similar multi-frame method for all shots on its latest phones, with the neural engine inside adding a layer of smarts when deciding how the final shot should look.
Google’s Super Res Zoom
Google’s various labs develop some of the most interesting uses for artificial intelligence. Not all bleed into phones, but the Google Pixel 3 XL does demonstrate some particularly clever camera smarts.
The phone has a single rear camera but uses software to make its zoomed images comparable in quality to those taken with a 2x camera. It’s called Super Res Zoom.
If you zoom in and rest the phone against something solid to keep it perfectly still, you can see how it works. The Pixel 3 XL’s optical stabilization motor deliberately moves the lens in a very slight circular arc, to let it take multiple shots from ever-so-slightly different positions.
The aim is to get shots that are offset to the tune of one sensor pixel. Thanks to the pattern of the Bayer array, the color filter that sits above the sensor and splits light into different colors, each offset shot captures a given point in the scene through a different colored filter, letting the camera gather real color data it would otherwise have to interpolate.
This kind of sensor shifting is not actually new, but the ability to use it ‘automatically’ when shooting handheld is. As such, it’s a cousin of Huawei’s night mode: AI lets established techniques work in far less controlled conditions.
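The core ‘shift-and-add’ idea is simple enough to demonstrate with NumPy. This toy assumes the sub-pixel offsets are already known exactly; the hard part on the Pixel is estimating them from handshake and coping with scene motion:

```python
import numpy as np

def shift_and_add(frames, offsets, scale=2):
    """Merge low-res frames (offsets in low-res pixels) onto a finer grid."""
    h, w = frames[0].shape
    acc = np.zeros((h * scale, w * scale))
    hits = np.zeros_like(acc)
    for frame, (dy, dx) in zip(frames, offsets):
        oy = int(round(dy * scale)) % scale  # offset in fine-grid pixels
        ox = int(round(dx * scale)) % scale
        acc[oy::scale, ox::scale] += frame
        hits[oy::scale, ox::scale] += 1
    return acc / np.maximum(hits, 1)  # unsampled cells stay 0 in this toy

# Simulate four half-pixel-offset captures of a 'true' high-res scene...
truth = np.random.default_rng(0).random((64, 64))
offsets = [(0, 0), (0, 0.5), (0.5, 0), (0.5, 0.5)]
frames = [truth[int(dy * 2)::2, int(dx * 2)::2] for dy, dx in offsets]
# ...and recover it: the four offset frames tile the fine grid completely.
print(np.abs(shift_and_add(frames, offsets) - truth).max())  # ~0.0
```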
Smart selfie blurs and augmented reality
Advanced AI object recognition is also used to take prettier portraits and let a phone take background blur images with just one camera sensor. Most blur modes rely on two cameras. The second is used to create a depth map of a scene, using the same fundamentals as our eyes.
Cameras set apart slightly have a different perspective of a scene, and these differences let them separate near objects from far-away ones. With a single camera, we don’t get this effect and therefore need better software smarts.
AI is used to recognize the border of someone’s face and, even trickier, judge where their hairdo ends and the background begins in an image. Huawei and Google have both used this feature in some of their higher-end phones.
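Once a network has produced that person mask, the blur itself is ordinary image compositing. A minimal sketch, assuming the mask is already given; in a real phone it comes from the segmentation model:

```python
import cv2
import numpy as np

def portrait_blur(image, person_mask, ksize=41):
    """Keep the masked subject sharp; replace the background with a blur."""
    background = cv2.GaussianBlur(image, (ksize, ksize), 0)
    alpha = (person_mask.astype(np.float32) / 255.0)[..., None]  # HxWx1, 0..1
    # Alpha-composite: subject pixels from the original, rest from the blur.
    return (image * alpha + background * (1.0 - alpha)).astype(np.uint8)
```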
Google told us how it gets this to work in 2017, with the Google Pixel 2. As well as using machine learning informed by more than a million images to recognize people, it also harvests depth information by comparing the views of the two halves of the single camera lens.
It can do this because of the Pixel 2’s Dual Pixel autofocus, which uses an array of microlenses that fit just above the sensor.
That this can create meaningful depth from these tiny differences in the view of a scene shows the power of Google’s AI software.
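The depth cue itself can be illustrated with a toy: slide one view across the other and see which shift lines up best. Real dual-pixel views differ by fractions of a pixel, so Google’s software has to work at far finer precision than this:

```python
import numpy as np

def best_shift(left, right, max_shift=4):
    """Find the horizontal shift (in pixels) that best aligns two views."""
    crop = slice(max_shift, -max_shift)  # ignore columns that wrap around
    errors = [np.mean((left[:, crop] - np.roll(right, s, axis=1)[:, crop]) ** 2)
              for s in range(-max_shift, max_shift + 1)]
    return int(np.argmin(errors)) - max_shift

view = np.random.default_rng(1).random((24, 48))
print(best_shift(view, np.roll(view, 2, axis=1)))  # -2: offset recovered
```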
Google Duplex: real conversations, by fake people
Google also developed the most interesting, and unnerving, use for AI we’ve seen, in Google Duplex. This feature is part of Google Assistant, and lets it make calls on your behalf, to real people.
It can try to book a table at a restaurant, or an appointment at a hair salon. Google showed off the feature at the I/O 2018 conference. And it was so creepily effective that the backlash caused Google to switch tactics and make Duplex tell the person on the other end it isn’t a real person.
Duplex emulates the pauses, “umm”s and “ahh”s of real people, and like Google Assistant, can deal with accents and half-formed sentences. It has been in testing over the summer of 2018, and will reportedly make its public debut in November on Pixel 3 devices.
Google Assistant, Siri and Alexa
Voice-driven services such as Google Assistant, Siri and Amazon Alexa are the most convincing applications of AI in phones. But you won’t see many mentions of the term AI from Amazon or Google.
Amazon calls Alexa “a cloud-based voice service”. On the front page of its website, Google does not describe what Assistant is at all.
They want us to use these digital assistants while thinking about how they work and what they are as little as possible. These services’ voice recognition and speech synthesis are impressive, but this brand of AI feeds off data. And data is most pertinent when talking about Google Assistant.
It can read your emails, and it knows everything you search for on Google, the apps you run and your calendar appointments.
Siri is the purest of the digital assistants in AI terms, as it does not rely on data in the same way. That this has also led to Siri being regarded as the least intelligent and least useful of the assistants shows how far AI still has to go.
Apple has sensibly bridged the gap in iOS 12, which adds a feature called Shortcuts. These are user-programmable macros that let you attach actions to a phrase you specify.
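The idea is easy to picture in code. Here’s a toy sketch of the macro concept; the phrases and actions are invented, and real Shortcuts hook into iOS apps through SiriKit rather than anything like this:

```python
# Stand-in actions; on a phone these would be app intents.
def send_message(to, text): print(f"Message to {to}: {text}")
def start_playlist(name): print(f"Playing playlist: {name}")

# The user binds an exact phrase to a fixed sequence of actions.
shortcuts = {
    "heading home": [
        lambda: send_message("partner", "On my way"),
        lambda: start_playlist("Commute"),
    ],
}

def run_shortcut(phrase):
    # A plain lookup, not interpretation: the assistant only has to match
    # the phrase, which is what sidesteps the harder AI problems.
    for action in shortcuts.get(phrase.lower(), []):
        action()

run_shortcut("Heading home")
```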
Shortcuts takes the onus off AI, using the tech for the functional basics instead of the more predictive and interpretive elements. It also shows the vast breadth of things the term ‘AI’ is used for, or deliberately not used for, in your phone. Your handset does a lot more thinking than you realized.