Apple Child Safety photo scanning: what you need to know

Apple
(Image credit: Apple)

Update: After pushback and confusion over how Apple's new policy would work, the company published a multi-page FAQ. It explains precisely where photos will be analyzed for CSAM and how the Messages filtering will work, mostly to reassure users that their entire iCloud storage won't be scanned (just photos right before they're uploaded) and that Messages will only be filtered for users under 12 years old who are linked to family accounts. Apple expressly stated its CSAM photo analysis would not be put to any other use, especially by government request.

Apple announced that it would be enacting a new protocol: automatically scanning iPhones and iPads to check user photos for child sexual abuse material (CSAM). The company is doing this to limit the spread of CSAM, but it is also adding other features "to protect children from predators who use communication tools to recruit and exploit them," Apple explained in a blog post. For now, the features will only be available in the US.

Apple will institute a new feature in iOS 15 and iPadOS 15 (both expected to launch in the next couple of months) that automatically scans images on a user’s device, before they are uploaded to iCloud, to see if they match previously identified CSAM content. That content is identified by unique hashes: sets of numbers consistent between duplicate images, like a digital fingerprint. 

Checking hashes is a common method for detecting CSAM; website security company Cloudflare instituted it in 2019, and it is also used by the anti-child-sex-trafficking nonprofit Thorn, the organization co-founded by Ashton Kutcher and Demi Moore. 
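The core idea can be sketched in a few lines. The example below is a simplified illustration, not Apple's actual implementation: it uses an ordinary cryptographic hash (SHA-256), which only catches byte-identical duplicates, whereas a production system like Apple's uses a perceptual hash that can also match resized or lightly edited copies. The database contents and image bytes here are invented for demonstration.

```python
import hashlib

def image_hash(image_bytes: bytes) -> str:
    """Fingerprint an image's raw bytes. This is a simplified stand-in:
    SHA-256 only matches byte-identical copies, while a real perceptual
    hash would also match near-duplicates."""
    return hashlib.sha256(image_bytes).hexdigest()

# Hypothetical database of hashes of previously identified images
# (contents invented for this demo).
known_hashes = {image_hash(b"previously-identified-image-bytes")}

def matches_known_content(image_bytes: bytes) -> bool:
    """True if this image's fingerprint appears in the known-hash database."""
    return image_hash(image_bytes) in known_hashes

print(matches_known_content(b"previously-identified-image-bytes"))  # True
print(matches_known_content(b"an-ordinary-vacation-photo"))         # False
```

The key privacy-relevant property is that the database stores only fingerprints, never the images themselves, and a hash cannot be reversed to reconstruct a photo.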

In addition, Apple has added two more child-safety systems. The first, which parents can optionally enable for children in their family network, is on-device analysis in the Messages app that scans incoming and outgoing photos for material that might be sexually explicit. Such photos will be blurred by default, and an optional setting can inform account-linked parents if the content is viewed. 

Second, Apple is enabling Siri and Search to surface helpful resources if a user asks about reporting CSAM; both will also intervene when users make search queries relating to CSAM, informing the searcher of the material’s harmful potential and pointing them toward resources to get help. 

That’s an overview of how, by Apple’s own description, it will integrate software to track CSAM and help protect children from predation by intervening when they receive (and send) potentially inappropriate photos. But the prospect of Apple automatically scanning your material has already raised concerns from tech experts and privacy advocates – we’ll dive into that below.

iPhone 12 Pro

(Image credit: Apple)

Will this affect me?

If you do not have photos with CSAM on your iPhone or iPad, nothing will change for you. 

If you do not make a Siri inquiry or online search related to CSAM, nothing will change for you.

If your iPhone or iPad’s account is set up with a family in iCloud and your device is designated as a child in that network, you will see warnings and blurred photos should you receive sexually explicit photos. If your device isn’t linked to a family network as belonging to a child, nothing will change for you. 

Lastly, your device won’t get any of these features if you don’t upgrade to iOS 15, iPadOS 15, or macOS Monterey. (The latter will presumably scan iCloud photos for CSAM, but it’s unclear if the Messages intervention for sexually explicit photos will also happen when macOS Monterey users use the app.)

These updates are only coming to users in the US, and it’s unclear when (or if) they’ll be expanded elsewhere – but given that Apple is positioning these as protective measures, we’d be surprised if it didn’t extend them to users in other countries.

WWDC 2021 screenshot

(Image credit: Apple)

Why is Apple doing this?

From a moral perspective, Apple is simply empowering parents to protect their children and perform a societal service by curbing CSAM. As the company stated in its blog post, “this program is ambitious, and protecting children is an important responsibility.”

Apple has repeatedly championed the privacy features of its devices, and backs that up with measures like maximizing on-device analysis (rather than uploading data to company servers in the cloud) and secure end-to-end encrypted communications, as well as initiatives like App Tracking Transparency that debuted in iOS 14.5. 

But Apple has also been on the receiving end of plenty of lawsuits over the years that have seemingly pushed the company toward greater privacy protections – for instance, a consumer rights advocate in the EU sued the tech giant in November 2020 over Apple’s practice of assigning each iPhone an Identifier for Advertisers (IDFA) to track users across apps, as reported by The Guardian.

This may have nudged Apple to give consumers more control with App Tracking Transparency, or at least aligned with the company’s actions in progress.

TechRadar couldn’t find a particular lawsuit that would have pressured Apple to institute these changes, but it’s entirely possible that the company is proactively protecting itself by giving younger users more self-protection tools as well as eliminating CSAM on its own iCloud servers and iPhones in general – all of which could conceivably limit Apple’s liability in the future.

But if you can remove CSAM, why wouldn’t you?

Apple

(Image credit: Apple)

What do security researchers think? 

Soon after Apple introduced its new initiatives, security experts and privacy advocates spoke up in alarm – not, of course, to defend using CSAM but out of concern for Apple’s methods in detecting it on user devices.

The CSAM-scanning feature does not seem to be optional – it will almost certainly be included in iOS 15 by default and, once downloaded, inextricable from the operating system.

From there, the feature automatically scans a user’s photos on their device before they’re uploaded to an iCloud account. If a certain number of photos match those CSAM hashes during a scan, Apple manually reviews the flagged images and, if it determines them to be valid CSAM, shuts down the user’s account and passes their info along to the National Center for Missing and Exploited Children (NCMEC), which collaborates with law enforcement.

Apple is being very careful to keep user data encrypted and unreadable by company employees unless it breaches a threshold of similarity with known CSAM. And per Apple, “the threshold is set to provide an extremely high level of accuracy and ensures less than a one in one trillion chance per year of incorrectly flagging a given account.”
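In other words, the system is gated by a threshold: matched photos remain cryptographically unreadable to Apple until a minimum number of matches accumulates on a single account. Here is a minimal sketch of that gating logic only – Apple's real protocol involves cryptographic threshold secret sharing, and the threshold value below is invented, since Apple has not published one.

```python
# Illustration of the threshold concept only; not Apple's actual protocol.
MATCH_THRESHOLD = 30  # hypothetical value; Apple has not disclosed the real one

def should_flag_for_review(match_count: int,
                           threshold: int = MATCH_THRESHOLD) -> bool:
    """An account is surfaced for human review only once its count of
    hash matches reaches the threshold; below it, the matched data
    stays encrypted and unreadable to Apple."""
    return match_count >= threshold

print(should_flag_for_review(1))    # False: a single match stays invisible
print(should_flag_for_review(30))   # True: threshold crossed, review begins
```

The point of the threshold is to make one-off false positives harmless: a single coincidental hash collision never exposes anything, which is how Apple arrives at its "one in one trillion" per-account error claim.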

But it’s the automatic scanning that has privacy advocates up in arms. “A backdoor is a backdoor,” digital privacy nonprofit Electronic Frontier Foundation (EFF) wrote in its blog post responding to Apple’s initiative, reasoning that even adding this auto-scanning tech was opening the door to potentially broader abuses of access: 

“All it would take to widen the narrow backdoor that Apple is building is an expansion of the machine learning parameters to look for additional types of content, or a tweak of the configuration flags to scan, not just children’s, but anyone’s accounts. 

"That’s not a slippery slope; that’s a fully-built system just waiting for external pressure to make the slightest change,” the EFF wrote, pointing to laws passed in other countries that require platforms to scan user content, like India’s recent 2021 rules. 

Others in the tech industry have likewise pushed back against Apple’s auto-scanning initiative, including Will Cathcart, head of the Facebook-owned WhatsApp messaging service. 

In a Twitter thread, he pointed to WhatsApp’s practice of making it easier for users to flag CSAM, which he claimed led the service to report over 400,000 cases to NCMEC last year, “all without breaking encryption.”

In fairness, Facebook has been trying to get around Apple's App Tracking Transparency. After being forced to disclose how much user data its mobile apps (including WhatsApp's) access, Facebook has tried prompting users to allow that access while criticizing Apple for the harm App Tracking Transparency does to small businesses (and, presumably, to Facebook itself) that rely on advertising income.

Other tech experts are waiting for Apple to give more information before they fully side with the EFF’s view. 

“The EFF and other privacy advocates' concern around misuse by authoritarian regimes may be scarily on point or an overreaction - Apple needs to provide more implementation details,” Avi Greengart, founder of tech research and analysis firm Techsponential, told TechRadar via Twitter message. 

“However, as a parent, I do like the idea that iMessage will flag underage sexting before sending; anything that even temporarily slows the process down and gives kids a chance to think about consequences is a good thing.”

David Lumb

David is now a mobile reporter at Cnet. Formerly Mobile Editor, US for TechRadar, he covered phones, tablets, and wearables. He still thinks the iPhone 4 is the best-looking smartphone ever made. He's most interested in technology, gaming and culture – and where they overlap and change our lives. His current beat explores how our on-the-go existence is affected by new gadgets, carrier coverage expansions, and corporate strategy shifts.