AI models could be hijacked through this Hugging Face security flaw, adding supply chain worries to existing AI concerns


There is a way to abuse the Hugging Face Safetensors conversion tool to hijack AI models and mount supply chain attacks.

This is according to security researchers from HiddenLayer, who discovered the flaw and published their findings last week, The Hacker News reports.

For the uninitiated, Hugging Face is a collaboration platform where software developers can host and work together on an unlimited number of pre-trained machine learning models, datasets, and applications.

Changing a widely used model

Safetensors is Hugging Face’s format for securely storing tensors, and the platform also lets users convert PyTorch models to Safetensors via a pull request opened by an automated conversion bot.
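For context, the Safetensors format is deliberately simple: an 8-byte little-endian header length, a JSON header describing each tensor’s dtype, shape, and byte offsets, followed by the raw tensor bytes - nothing in the file is executable. Below is a simplified sketch of that on-disk layout in pure Python (the helper names are my own for illustration, not part of the safetensors library, and the real spec has additional details such as header padding and a metadata section):

```python
import json
import struct

def write_safetensors(path, tensors):
    """Write a minimal safetensors-style file.

    tensors: dict mapping name -> (dtype string, shape list, raw bytes).
    """
    header, blobs, offset = {}, [], 0
    for name, (dtype, shape, data) in tensors.items():
        header[name] = {
            "dtype": dtype,
            "shape": shape,
            "data_offsets": [offset, offset + len(data)],
        }
        blobs.append(data)
        offset += len(data)
    hdr = json.dumps(header).encode("utf-8")
    with open(path, "wb") as f:
        f.write(struct.pack("<Q", len(hdr)))  # 8-byte little-endian header size
        f.write(hdr)                          # JSON header: pure metadata
        for blob in blobs:
            f.write(blob)                     # raw tensor bytes, no code

def read_header(path):
    """Read back only the JSON header - loading metadata never runs code."""
    with open(path, "rb") as f:
        (n,) = struct.unpack("<Q", f.read(8))
        return json.loads(f.read(n))
```

Because a loader only ever parses JSON and copies bytes, a malicious Safetensors file cannot execute code the way a malicious pickle can - which is precisely why Hugging Face encourages converting models to this format.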

And that’s where the trouble lies, as HiddenLayer says the conversion service can be compromised: "It's possible to send malicious pull requests with attacker-controlled data from the Hugging Face service to any repository on the platform, as well as hijack any models that are submitted through the conversion service.”

In practice, a hijacked conversion job lets threat actors push changes to any Hugging Face repository while posing as the conversion bot.

Furthermore, hackers can also exfiltrate the SFConvertbot token - belonging to the bot that makes the pull requests - and send malicious pull requests themselves.

Consequently, they could modify a model and plant neural backdoors, effectively staging an advanced supply chain attack.

"An attacker could run any arbitrary code any time someone attempted to convert their model," the research states. "Without any indication to the user themselves, their models could be hijacked upon conversion."
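The reason conversion can run code at all is that PyTorch’s legacy checkpoint format is built on Python’s pickle, and unpickling untrusted data can invoke arbitrary callables. A minimal, self-contained illustration of that mechanism, using plain pickle rather than an actual model (the class name is a stand-in, and eval here represents whatever code an attacker would actually run):

```python
import pickle

class NotReallyAModel:
    """A pickle payload: __reduce__ tells the unpickler what to call on load."""
    def __reduce__(self):
        # On load, the unpickler calls eval("6 * 7") - a harmless stand-in
        # for the arbitrary code a real attacker would execute.
        return (eval, ("6 * 7",))

payload = pickle.dumps(NotReallyAModel())
result = pickle.loads(payload)  # merely "loading" the data runs the callable
print(result)  # -> 42
```

This is why a conversion service that deserializes attacker-supplied PyTorch files is such an attractive target: the act of opening the file is itself code execution inside the service’s container.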

Finally, when a user tries to convert a repository, the attack could lead to their Hugging Face token getting stolen, granting the attackers access to restricted internal models and datasets. From there, they could compromise them in various ways, including dataset poisoning.

In one hypothetical scenario, a user submits a conversion request for a public repository, unknowingly changing a widely used model, resulting in a dangerous supply chain attack.

"Despite the best intentions to secure machine learning models in the Hugging Face ecosystem, the conversion service has proven to be vulnerable and has had the potential to cause a widespread supply chain attack via the Hugging Face official service," the researchers concluded.

"An attacker could gain a foothold into the container running the service and compromise any model converted by the service."


Sead is a seasoned freelance journalist based in Sarajevo, Bosnia and Herzegovina. He writes about IT (cloud, IoT, 5G, VPN) and cybersecurity (ransomware, data breaches, laws and regulations). In his career, spanning more than a decade, he’s written for numerous media outlets, including Al Jazeera Balkans. He’s also held several modules on content writing for Represent Communications.
