Forget sextortion scams, we’re more worried about deepfake ransomware
Deepfake videos are becoming increasingly difficult to identify
Appetite for deepfake scams is growing among users of underground forums, raising concerns the technology could be used as part of extortion-based ransomware attacks.
Deepfakes are AI-generated videos and images that transplant the face of one individual - traditionally a celebrity or politician - into a scene in which they were never present.
In recent years, deepfakes have been used primarily in the dissemination of fake news and the creation of hoax pornography - and have become increasingly convincing.
According to a report from security firm Trend Micro, deepfake technology could soon be used to blackmail members of the public or workforce into divulging sensitive information or paying significant ransom fees.
Deepfake ransomware
As part of a wider investigation into trends on underground cybercriminal forums and marketplaces, Trend Micro found that interest in monetizing deepfake technology is growing among forum members.
According to the firm, underground forum users often discuss how AI could be used for “eWhoring” (or sextortion) and for circumventing Face ID authentication, especially on dating websites.
While sextortion attacks traditionally rely on social engineering techniques to manipulate the victim into paying a cryptocurrency ransom, Trend Micro fears the increasing sophistication of deepfakes could make reputation scams of this kind all the more potent.
“A real image or video would be unnecessary. Virtually blackmailing individuals is more efficient because cybercriminals wouldn’t need to socially engineer someone into a compromising position,” explains the report.
“The attacker starts with an incriminating Deepfake video, created from videos of the victim’s face and samples of their voice collected from social media accounts. To further pressure the victim, the attacker could start a countdown clock and include a link to a fake video...If the victim does not pay before the deadline, all contacts in their address books will receive the link.”
Based on its analysis of underground communities, Trend Micro believes the use of deepfakes for extortion-based ransomware is set to take off in the near future.
While attacks of this kind have not yet been identified in the wild, a wide range of demographics is thought to be at risk - from political candidates and senior executives to celebrities and ordinary teenagers.