Microsoft names cybercriminals who created explicit deepfakes


  • A lawsuit against criminal gang Storm-2139 has been updated
  • Four defendants have been named by Microsoft
  • The group is allegedly responsible for creating illegal deepfakes

A lawsuit has partially named a group of criminals who allegedly used leaked API keys from “multiple” Microsoft customers to access the firm’s Azure OpenAI service and generate explicit celebrity deepfakes. The gang reportedly developed and used malicious tools that allowed threat actors to bypass generative AI guardrails to generate harmful and illegal content.

The group, dubbed the “Azure Abuse Enterprise”, is said to comprise key members of a global cybercriminal gang tracked by Microsoft as Storm-2139. The individuals were identified as: Arian Yadegarnia, aka “Fiz”, of Iran; Alan Krysiak, aka “Drago”, of the United Kingdom; Ricky Yuen, aka “cg-dot”, of Hong Kong, China; and Phát Phùng Tấn, aka “Asakuri”, of Vietnam.

Microsoft’s Digital Crimes Unit (DCU) originally filed the lawsuit against 10 “John Does” for violating US law and the acceptable use policy and code of conduct for its generative AI services; the complaint has now been amended to name and identify the individuals.

A global network

This is an update to the previously filed lawsuit, in which Microsoft outlined the discovery of the abuse of Azure OpenAI Service API keys and pulled a GitHub repository offline; the court also allowed the firm to seize a domain related to the operation.

“As part of our initial filing, the Court issued a temporary restraining order and preliminary injunction enabling Microsoft to seize a website instrumental to the criminal operation, effectively disrupting the group’s ability to operationalize their services.”

The group is organized into creators, providers, and users. The named defendants reportedly used customer credentials scraped from public sources (most likely exposed in data leaks) to unlawfully access accounts with generative AI services.

“They then altered the capabilities of these services and resold access to other malicious actors, providing detailed instructions on how to generate harmful and illicit content, including non-consensual intimate images of celebrities and other sexually explicit content,” said Steven Masada, Assistant General Counsel at Microsoft’s DCU.

Ellen Jennings-Trace
Staff Writer

Ellen has been writing for almost four years, with a focus on post-COVID policy, while studying for a BA in Politics and International Relations at the University of Cardiff, followed by an MA in Political Communication. Before joining TechRadar Pro as a Junior Writer, she worked for Future Publishing’s MVC content team, liaising with merchants and retailers to upload content.
