A "fakeness score" could help people identify AI generated content

A person's face against a digital background.
(Image credit: Shutterstock / meamorworks)

  • New deepfake detection tool helps to crack down on fake content
  • A "deepfake score" helps users spot AI generated video and audio
  • The tool is free to use to help mitigate the impact of fake content

Deepfake technology uses artificial intelligence to create realistic yet entirely fabricated images, videos, and audio. The manipulated media often imitates famous individuals or ordinary people for fraudulent purposes, including financial scams, political disinformation, and identity theft.

To combat the rise in such scams, security firm CloudSEK has launched a new deepfake detection tool, designed to counter the threat of deepfakes and give users a way to identify manipulated content.

CloudSEK’s detection tool aims to help organizations identify deepfake content and prevent potential damage to their operations and credibility. It assesses the authenticity of video frames, focusing on facial features and movement inconsistencies that can indicate tampering, such as unnatural transitions in facial expressions and unusual textures on faces and in the background.

The rise of deepfakes, and a possible solution

Audio analysis is also used, where the tool detects synthetic speech patterns that signal the presence of artificially generated voices. The system also transcribes audio and summarizes key points, allowing users to quickly assess the credibility of the content they are reviewing. The final result is an overall "Fakeness Score," which indicates the likelihood that the content has been artificially altered.

This score helps users understand the level of potential manipulation, offering insights into whether the content is AI-generated, mixed with deepfake elements, or likely human-generated.

A Fakeness Score of 70% or above indicates AI-generated content; 40% to 70% is dubious and possibly a mix of original and deepfake elements; and 40% or below is likely human-generated.
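The three score bands described above can be sketched as a simple classification function. This is an illustrative sketch only, not CloudSEK's actual implementation; the function name and label strings are assumptions, and the band boundaries follow the article's stated thresholds.

```python
def classify_fakeness(score: float) -> str:
    """Map a Fakeness Score (0-100) to one of the three bands
    described in the article. Hypothetical helper, not CloudSEK's API."""
    if not 0 <= score <= 100:
        raise ValueError("score must be between 0 and 100")
    if score >= 70:
        return "AI-generated"
    if score > 40:
        return "dubious (possible mix of original and deepfake elements)"
    return "likely human-generated"


# Example: a video scoring 85 would be flagged as AI-generated,
# while one scoring 30 would be treated as likely human-generated.
print(classify_fakeness(85))
print(classify_fakeness(30))
```

In practice a tool like this would pair the label with the raw score, since a 69% result and a 41% result fall in the same "dubious" band but warrant very different levels of scrutiny.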

In the finance sector, deepfakes are being used for fraudulent activities like manipulating stock prices or tricking customers with fake video-based KYC processes.

The healthcare sector has also been affected, with deepfakes being used to create false medical records or impersonate doctors, while government entities face threats from election-related deepfakes or falsified evidence.

Similarly, media and IT sectors are equally vulnerable, with deepfakes being used to create fake news or damage brand reputations.

“Our mission to predict and prevent cyber threats extends beyond corporations. That’s why we’ve decided to release the Deepfakes Analyzer to the community,” said Bofin Babu, Co-Founder, CloudSEK.


Efosa Udinmwen
Freelance Journalist

Efosa has been writing about technology for over 7 years, initially driven by curiosity but now fueled by a strong passion for the field. He holds both a Master's and a PhD in sciences, which provided him with a solid foundation in analytical thinking. Efosa developed a keen interest in technology policy, specifically exploring the intersection of privacy, security, and politics. His research delves into how technological advancements influence regulatory frameworks and societal norms, particularly concerning data protection and cybersecurity. Upon joining TechRadar Pro, in addition to privacy and technology policy, he is also focused on B2B security products. Efosa can be contacted at this email: udinmwenefosa@gmail.com