In a test, 2,000 people were shown deepfake content, and only two of them managed to get a perfect score

AI deepfake faces (Image credit: Shutterstock / Lightspring)

  • iProov study finds older adults struggle most with deepfakes
  • False confidence is widespread among the younger generation
  • Social media is a deepfake hotspot, experts warn

As deepfake technology continues to advance, concerns over misinformation, fraud, and identity theft are growing, not least because public literacy in AI tools remains startlingly low.

A recent iProov study claims most people struggle to distinguish deepfake content from reality. The company showed 2,000 participants from the UK and US a mix of real and AI-generated images and videos, and only 0.1% of them - just two people - correctly identified every real and deepfake stimulus.

The study found older adults are particularly susceptible to AI-generated deception. Around 30% of those aged 55-64, and 39% of those over 65, had never even heard of deepfakes. While younger participants were more confident in their ability to detect deepfakes, their actual performance in the study was no better.

Older generations are more vulnerable

Deepfake videos were significantly harder to detect than images, the study added: participants were 36% less likely to correctly identify a fake video than a fake image, raising concerns about video-based fraud and misinformation.

Social media platforms were highlighted as major sources of deepfake content. Nearly half of the participants (49%) identified Meta platforms, including Facebook and Instagram, as the most common places where deepfakes are found, while 47% pointed to TikTok.

"[This underlines] how vulnerable both organizations and consumers are to the threat of identity fraud in the age of deepfakes," said Andrew Bud, founder and CEO of iProov.

"Criminals are exploiting consumers’ inability to distinguish real from fake imagery, putting personal information and financial security at risk."

Bud added that even when people suspect a deepfake, most take no action. Only 20% of respondents said they would report a suspected deepfake if they encountered one online.

With deepfakes becoming increasingly sophisticated, iProov believes human perception alone is no longer a reliable means of detection. Bud emphasized the need for biometric security solutions with liveness detection to combat the threat of ever more convincing deepfake material.

“It’s down to technology companies to protect their customers by implementing robust security measures," he said. "Using facial biometrics with liveness provides a trustworthy authentication factor and prioritizes both security and individual control, ensuring that organizations and users can keep pace with these evolving threats."

Efosa Udinmwen
Freelance Journalist

Efosa has been writing about technology for over seven years, initially driven by curiosity and now fueled by a strong passion for the field. He holds both a Master's degree and a PhD in the sciences, which gave him a solid grounding in analytical thinking. He has a keen interest in technology policy, specifically the intersection of privacy, security, and politics, and his research explores how technological advancements influence regulatory frameworks and societal norms, particularly concerning data protection and cybersecurity. At TechRadar Pro, he covers B2B security products alongside privacy and technology policy. Efosa can be contacted at udinmwenefosa@gmail.com.
