Can the Zoom Cat filter open up a portal for deepfakes?


Since the start of the pandemic, digital tools have allowed us to continue to function as a society – albeit with a few changes here and there. Thanks to these tools, we’ve been able to keep working, shop for groceries, see our GP over video conferencing and even attend digital court hearings.

About the author

René Hendrikse, MD EMEA & LatAM at Mitek.

However, the move to digital has not been without its challenges. The recent viral video of a ‘cat’ passing ‘judgement’ via a Zoom call – when a judge attempted to use his assistant’s computer during a virtual hearing – highlights this. While the trending video brought joy to many, it also raises the question of whether our digital interactions are as secure as they seem.

Could video calls, a tool we have become accustomed to, pose a potential, unquantified threat? How would this threat, if left unchecked, impact our ability to work, borrow and buy securely?

Fraudster’s new favourite trick

This incident, ‘catcalling’ if you will, should not be dismissed as a one-off. In fact, during the first nine months of the pandemic, a quarter of Brits and 23% of Americans compromised their security at home, sharing their work passwords with a flatmate, partner or family member amid increased home-schooling, remote working and socializing.

Research by SailPoint showed that our lockdown cyber hygiene has slipped – which isn’t making the already high risk of fraud any easier to manage. We trust our eyes the most, which means videos in particular can create a false sense of security.

With our social interactions mostly reduced to messages and video calls, what does this mean for retail banks, approving more mortgages, loans and new customers online than ever before? Or corporate banks, where video calls are now the mainstay of relationship managers and corporations large and small? With billions in hard-earned cash on the table, could video calls be the biggest fraud risk yet?

They just might be. Banks and fintechs have already started establishing partnerships to tackle the use of spoofed videos – found to be a new favourite trick of fraudsters a few months into the pandemic – as ‘deepfake’ crimes continue to be the biggest consumer worry. It’s not without reason. Deepfakes and synthetic identities are likely to open the door for the next wave of identity theft fraud.

Perfecting our “computer vision”

While fraud rises, businesses can’t stand still when it comes to cybersecurity. With digital channels, including video calls, we tend to rely on the safety of the channel itself, such as end-to-end encryption, but pay little attention to how our identity is used on it.

‘Frankenstein fraud’, or synthetic identity fraud, is changing that. We are seeing fraudsters gaining access to ever-more sophisticated technologies to create not just false ID images and video feeds, but fake data records that back up that false identity.

Deepfake videos of famous people, including Elon Musk and Tom Cruise, are already hard to tell apart from their real counterparts. What chance, then, is there of spotting a spoof when a brand-new customer is signing up for a banking service? It is certainly a big threat for fintech companies, banks and e-commerce giants alike.

This means our identity verification technologies must take a risk-based, zero trust approach. The reality is that the identity risk profile of a person can change, and probably will over their lifetime – for example if they become a victim of identity fraud. Our technology must stay flexible to enable a change in parameters if the situation develops, protecting consumers from the risks of identity fraud, stopping it in its tracks.
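To make the idea concrete, here is a minimal, purely illustrative sketch of what a risk-based, zero trust check might look like. The signal names, weights and threshold are all hypothetical assumptions for the example, not any vendor's actual product logic; the point is simply that parameters can be re-tuned as a person's risk profile changes.

```python
# Illustrative sketch of risk-based identity verification scoring.
# All signal names, weights and thresholds are hypothetical.

def risk_score(signals: dict, weights: dict) -> float:
    """Weighted sum of risk signals, each in [0, 1] where 1 = highest risk.

    A missing signal defaults to 1.0 (maximum risk): the zero trust stance
    is to treat absent evidence as suspicious, not as safe.
    """
    return sum(weight * signals.get(name, 1.0) for name, weight in weights.items())

def verify(signals: dict, threshold: float = 0.5) -> str:
    # Weights could be re-tuned if a person's risk profile changes,
    # e.g. after a reported identity-theft incident.
    weights = {
        "document_mismatch": 0.3,  # ID document vs. selfie discrepancy
        "liveness_failure": 0.5,   # deepfake / mask / replay likelihood
        "prior_fraud_flag": 0.2,   # history of compromised identity
    }
    return "reject" if risk_score(signals, weights) >= threshold else "approve"

# A low-risk applicant passes; a suspicious video feed is rejected.
print(verify({"document_mismatch": 0.1, "liveness_failure": 0.05, "prior_fraud_flag": 0.0}))  # approve
print(verify({"document_mismatch": 0.2, "liveness_failure": 0.9, "prior_fraud_flag": 0.0}))   # reject
```

In practice the signals would come from document, biometric and behavioural checks, but the design choice is the same: a single adjustable decision boundary rather than a one-time, pass-once-trust-forever gate.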

To keep real people protected, we have to perfect our “computer vision” and train digital identity verification algorithms on a variety of diverse profiles, lighting, and proximities. The technology of today and tomorrow must be able to tell a mask, deepfake photo or video from a real person – and avoid disabling people’s access to vital financial products or services at the same time.

A balancing act

That said, we tend to like getting what we want in an instant and to avoid jumping through hoops. A quick and seamless user experience is key nowadays, meaning that the onboarding process is often a balancing act between convenience and security, between speed and catching more fraud.

So, remember this: every time an app – whether for a bank, payment provider or retailer – asks you to move closer to the camera or step back, or to change the framing or lighting of your face, it is not doing so to make the process more difficult. The technology is doing its best to protect us from identity fraud and to keep fraudsters away.

The digital world has its advantages, and its disadvantages too. What we see on the surface may not be what it seems: not every Frankenstein face will look weird or fake, and not every human face will pass the test first time round. The occasional cat filter, therefore, may be one of the most innocent human representations yet. Or it could be a warning sign of fraud that we’re likely to see on the horizon.
