The role of deepfakes in the year of democracy, disinformation, and distrust


AI-generated misinformation and disinformation are set to be the biggest short-term global risks of the year, according to the World Economic Forum. With half of the global population participating in elections this year, misinformation in the form of deepfakes poses a particular danger to democracy. Ahead of the UK General Election, candidates were warned that AI-generated misinformation would circulate, with deepfake video, audio and images being used to troll opponents and fake endorsements.

In recent years, low-cost audio deepfake technology has become widely available and far more convincing. Some AI tools can generate realistic imitations of a person’s voice using only a few minutes of audio, which is easily obtained from public figures, allowing scammers to create manipulated recordings of almost anyone.

But how true has this threat proven to be? Is the deepfake threat overhyped, or is it flying under the radar?

Philipp Pointner

Deepfakes and disinformation

Deepfakes have long raised concerns across social media, politics, and the public sector. But now, with technological advances making AI-generated voices and images more lifelike than ever, bad actors armed with AI tools to create deepfakes are coming for businesses.

In one recent example targeting advertising group WPP, hackers used a combination of deepfake videos and voice cloning in an attempt to trick company executives into thinking they were discussing a business venture with peers with the ultimate goal of extracting money and sensitive information. While unsuccessful, the sophisticated cyberattack shows the vulnerability of high-profile individuals whose details are easily available online.

This echoes the fear that the sheer volume of AI-generated content could make it challenging for consumers to distinguish between authentic and manipulated information: according to Jumio research, 60% admit they have encountered a deepfake within the past year, and 72% worry on a daily basis about being fooled by a deepfake into handing over sensitive information or money. This demands a transparent discourse to confront the challenge and empower businesses and their end-users with the tools to discern and report deepfakes.

Fighting AI with AI

Education about how to detect a deepfake is not enough on its own, and IT departments are scrambling to put better policies and systems in place to prevent deepfakes. This is because fraudsters are now using a variety of sophisticated techniques such as deepfake faces, face morphing and face swapping to impersonate employees and customers, making it very difficult to spot that the person isn't who you think they are.

Although cybercriminals are now finding fraud more fruitful, advanced AI can also be the key to not just defending against, but actively countering deepfake cyber threats. For businesses, ensuring the authenticity of individuals accessing accounts is crucial in preventing fraudulent activities such as account takeovers and unauthorized transactions. Biometric-based verification systems are a game-changer in weeding out deepfake attempts. Using unique biological characteristics like fingerprints and facial recognition to verify consumer identities during logins makes it significantly harder for fraudsters to succeed in spoofing their way into accounts. Layering these verification systems together using multiple biometric markers makes for an extremely tough account security system to beat.
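The layering idea above can be sketched in code. This is a minimal, illustrative score-fusion example, assuming each biometric modality (face, fingerprint, and so on) produces a match confidence between 0 and 1; the weights and threshold are hypothetical, not from any specific product.

```python
# Sketch: layering multiple biometric checks into one accept/reject decision.
# Scores, weights, and the threshold are illustrative values only.

def fuse_biometric_scores(scores: dict, weights: dict,
                          threshold: float = 0.8) -> bool:
    """Return True when the weighted average match score clears the threshold.

    scores  -- per-modality match confidences in [0, 1],
               e.g. {"face": 0.96, "fingerprint": 0.91}
    weights -- relative trust placed in each modality
    """
    total_weight = sum(weights[m] for m in scores)
    fused = sum(scores[m] * weights[m] for m in scores) / total_weight
    return fused >= threshold

# A login where both modalities match strongly passes...
print(fuse_biometric_scores({"face": 0.96, "fingerprint": 0.91},
                            {"face": 0.6, "fingerprint": 0.4}))  # True
# ...while a spoofed face drags the fused score below the threshold,
# even though the fingerprint alone looks fine.
print(fuse_biometric_scores({"face": 0.40, "fingerprint": 0.92},
                            {"face": 0.6, "fingerprint": 0.4}))  # False
```

The point of the fusion is that a deepfake must now defeat every layer at once: spoofing a single modality is no longer sufficient to clear the combined threshold.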

But that’s not all. AI can step up the game even further by detecting fraudulent activities in real-time by using predictive analytics. Picture machine learning algorithms sifting through mountains of data, picking out unusual patterns that might indicate fraud. These AI systems are like watchdogs, with the ability to constantly learn how fraudsters behave compared to how typical, legitimate users act. For example, AI can analyze the typical use patterns of billions of devices and phone numbers used to log in to critical accounts where personal information is stored, such as email or bank accounts, to detect unusual behavior.

For example, when a new user is setting up an account with your business, it's no longer enough to check their ID and let them upload a selfie. You need to be able to detect deepfakes of both the ID and the selfie through real-time identity verification measures. This involves using advanced selfie verification and both passive and active liveness detection that can catch spoofing attacks.

To truly prevent deepfakes, the solution must control the selfie process and take a series of images to determine whether the person is physically present and awake. Biometric technology can then compare specific facial features from the selfie — such as the distance between the eyes, nose, and ears — against those of the ID photo, ensuring they’re the same person. The selfie verification step should also offer other biometric checks such as age estimation to flag selfies that don’t appear to match the data on the ID.
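The comparison step can be illustrated with a minimal sketch. Real systems derive facial embeddings from a neural network; the four-element vectors below are hypothetical stand-ins for measurements such as inter-eye distance and nose-to-ear ratios, and the similarity threshold is illustrative.

```python
# Toy version of the selfie-vs-ID comparison: accept the match only
# when the two feature vectors are nearly parallel (cosine similarity).

import math

def cosine_similarity(a: list, b: list) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = (math.sqrt(sum(x * x for x in a))
            * math.sqrt(sum(y * y for y in b)))
    return dot / norm

def same_person(selfie_vec, id_photo_vec, threshold: float = 0.98) -> bool:
    return cosine_similarity(selfie_vec, id_photo_vec) >= threshold

# Hypothetical facial-feature vectors from the selfie and the ID photo.
selfie   = [0.62, 0.33, 0.41, 0.28]
id_photo = [0.61, 0.34, 0.40, 0.29]
print(same_person(selfie, id_photo))  # True: the features align closely
```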

The future of deepfakes

For the remainder of 2024 and beyond, the potential of AI-generated content driving disinformation to disrupt democratic processes, tarnish reputations and incite public uncertainty should not be underestimated.

Ultimately, there is no single approach that effectively mitigates the threat of deepfakes. The key lesson companies should take from the rise of AI-infused fraud is not to neglect their own use of AI to bolster defenses.

Fighting AI with AI offers businesses their best chance of handling the ever-increasing threat volume and sophistication.

We've listed the best identity management software.

This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro

Philipp Pointner is Chief of Digital Identity at Jumio, the leading provider of AI-powered identity verification.
