Give Meta my face recognition data? I'd rather lose my Instagram account

Meta's app icons displayed on a smartphone, including Facebook, Messenger, Instagram, and Threads, with the Meta logo visible in the background.
(Image credit: Photo by Jonathan Raa/NurPhoto via Getty Images)

Meta has just announced plans to bring facial recognition technology back to Facebook and Instagram, this time as a security measure to help combat "celeb-bait" scams and restore access to compromised accounts.

"We know security matters, and that includes being able to control your social media accounts and protect yourself from scams," wrote the Big Tech giant in a blog post published on Monday, October 21.

Meta wants to use facial recognition technology to detect scammers who use images of public figures to carry out attacks. The company proposes comparing images used in adverts or on suspicious accounts with celebrities' legitimate photos. The facial recognition tech will also let regular Facebook and Instagram users regain access to their accounts if they're locked out or hijacked: they'll be able to verify their identity with a video selfie, which is then matched against their profile pictures. Handy, sure, but can I trust Meta with my biometrics?

The Big Tech giant promises to take a "responsible approach" that includes encrypting video selfies for secure storage, deleting any facial data as soon as it's no longer needed, and not using these details for any other purpose. Yet, looking at Meta's track record of protecting, and misusing, its users' information, I'm concerned.

Meta's broken promises

Facebook's parent company has repeatedly breached the privacy, and trust, of its users in the past.

The 2018 Cambridge Analytica scandal was probably the turning point. It shed light on how the personal information of up to 87 million Facebook users was misused for targeted political advertising, predominantly during Donald Trump's 2016 presidential campaign.

The company implemented significant changes around user data protection after that, but Meta's privacy breaches have continued.

Only this year, Meta admitted to having scraped all Australian Facebook posts since 2007 to train its AI models without offering an option to opt out. The company was also hit with a major fine (€91 million) in Europe for storing social media account passwords in unencrypted databases. The year before, in January 2023, Meta received an even bigger fine (€390 million) for serving personalized ads without an opt-out option and for illicit data handling practices.

It's certainly enough to make me skeptical of Meta's good intentions and big promises.

It's also worth noting that Meta itself decided to shut down its previous facial recognition system in 2021 over privacy concerns, promising to delete all the "faceprints" collected. Now, three years later, it's back on the agenda.

"We want to help protect people and their accounts," wrote Meta in its official announcement, "and while the adversarial nature of this space means we won’t always get it right, we believe that facial recognition technology can ​​help us be faster, more accurate, and more effective. We’ll continue to discuss our ongoing investments in this area with regulators, policymakers, and other experts."

"We won't always get it right" isn't exactly reassuring. So, something going wrong is simply a matter of time? If that's the case, no thanks, Meta, I don't trust you with my biometric data. I'd rather lose my Facebook or Instagram account. What's the benefit of solving one problem only to create an even bigger one?

What's certain is that Mark Zuckerberg doesn't need to lose any sleep over EU fines for this, at least for the time being. Meta's facial recognition tests aren't running globally: the company has excluded the UK and EU markets, where GDPR imposes stringent privacy rules around personal information.

Elsewhere, Meta's testing will eventually show whether the new security feature is the right answer to the growing problem of social media scams, or just another privacy nightmare in the making. As for me, in the name of my privacy, I'm not sure it's worth the trouble of finding out.

Chiara Castro
Senior Staff Writer

Chiara is a multimedia journalist committed to covering stories to help promote the rights and denounce the abuses of the digital side of life—wherever cybersecurity, markets and politics tangle up. She mainly writes news, interviews and analysis on data privacy, online censorship, digital rights, cybercrime, and security software, with a special focus on VPNs, for TechRadar Pro, TechRadar and Tom’s Guide. Got a story, tip-off or something tech-interesting to say? Reach out to chiara.castro@futurenet.com