Why a machine identity “kill switch” is needed to govern and regulate AI


Artificial Intelligence isn’t the future of computing. It’s already here. And it’s already being used across diverse industries to power everything from fraud prevention to chatbots, vehicle safety, and personalized shopping recommendations. But with tech moguls warning of AI’s destructive potential, and of the need for government regulation, lawmakers are sitting up and taking notice.

But what form should risk-based enterprise governance and government regulation take? How can organizations protect themselves against threat actors that seek to abuse the power of AI? A positive development would be to assign identities to AI models, inputs, and outputs – in the same way that devices, services, and algorithms have identities today. By building in governance and protection now, businesses can ensure they are better equipped to deal with tomorrow. This progressive approach could help to drive assurance and security, protect businesses against cyber attacks that abuse the power of AI, hold developers accountable and, if required, provide the ultimate kill switch for AI.

The AI regulation train is leaving the station

As AI becomes more firmly embedded in business processes, it inevitably also becomes a more compelling target for attackers. An emerging threat, for example, is malicious actors “poisoning” AI to affect the decisions a model makes. The Center for AI Safety has already drawn up a lengthy list of potential societal risks, although some are more immediately concerning than others.

That’s part of the reason why global governments are turning their attention to ways in which they can shepherd development and use of the technology, to minimize abuse or accidental misuse. The G7 is talking about it. The White House is trying to lay down some rules of the road to protect individual rights and ensure responsible development and deployment. But it is the EU that is leading the way on regulation. Its proposals for a new “AI Act” were recently green-lit by lawmakers, and there are new liability rules in the works to make compensation easier for those suffering AI-related damages.


The EU tacitly recognizes AI identity

Aristotle’s law of identity posits that if something exists, it has an identity. The EU’s proposed new regulations on AI seem to be moving in this direction. The EU outlines a risk-based approach to AI systems whereby those considered an “unacceptable risk” are banned, and those classed as “high risk” must go through a multi-stage system of assessment and registration before they can be approved.

This means that, once developed, an AI model must undergo an audit to ensure it complies with the relevant AI regulations, that it is certified, and that it can be registered. A “declaration of conformity” must then be signed before the AI model can be given a CE marking and placed on the market.

This way of treating AI models or products would seem to imply that they each possess unique individual identities. This could be formalized in a system which authenticates the model itself, its communication with other assets, and the outputs it creates. In this way, we could verify whether the model has been certified, whether it’s good or bad, and whether it’s been changed. Equally, we could authorize and authenticate what the AI is able to connect and communicate with, what other systems it calls upon, and the chain of trust that leads to a specific decision or output.
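To make that concrete, here is a minimal, hedged sketch of model authentication using Python’s open-source cryptography library: the publisher signs the model artifact at certification time, and anyone loading it can verify that it is the certified artifact and has not been changed. The key handling and artifact contents are simplified assumptions; a real deployment would anchor the public key in a certificate chain issued by a trusted authority rather than trusting a raw key.

```python
# A minimal sketch of model-identity verification, assuming an Ed25519
# keypair held by the model's publisher. The artifact bytes are a
# placeholder; only the sign/verify pattern is the point here.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# At certification time: the publisher signs the model artifact.
publisher_key = Ed25519PrivateKey.generate()
model_bytes = b"...serialized model weights..."  # illustrative artifact
signature = publisher_key.sign(model_bytes)

# At load time: anyone holding the public key can check that this is the
# certified model and that it has not been altered since signing.
public_key = publisher_key.public_key()
try:
    public_key.verify(signature, model_bytes)
    print("Model identity verified: certified and unmodified")
except InvalidSignature:
    print("Verification failed: model altered or not the certified artifact")
```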

That chain of trust will be particularly important when it comes to remediation and traceability, as teams will need to be able to trace back all the actions an AI model has taken over time to explain how it came to a certain outcome – or, in the case of a malicious actor poisoning the AI, to trace what that actor did while the AI was compromised. For all of these reasons, identity is required.
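One way to picture that traceability requirement is as a tamper-evident, hash-chained log of every action a model takes. The sketch below is an illustration rather than a description of any particular product, and the record fields are invented; the design point is that because each entry commits to the one before it, altering any past entry breaks the chain and is immediately detectable.

```python
# A hypothetical hash-chained audit log for AI actions. Field names and
# events are illustrative assumptions, not a real product's schema.
import hashlib
import json

def append_entry(log: list, action: dict) -> None:
    """Append an action record that commits to the previous entry's hash."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    body = json.dumps({"action": action, "prev": prev_hash}, sort_keys=True)
    entry_hash = hashlib.sha256(body.encode()).hexdigest()
    log.append({"action": action, "prev": prev_hash, "hash": entry_hash})

def verify_chain(log: list) -> bool:
    """Recompute every hash; tampering with any past entry breaks the chain."""
    prev_hash = "0" * 64
    for entry in log:
        body = json.dumps({"action": entry["action"], "prev": prev_hash},
                          sort_keys=True)
        if entry["prev"] != prev_hash:
            return False
        if entry["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
        prev_hash = entry["hash"]
    return True

log: list = []
append_entry(log, {"model": "fraud-scorer-v2", "event": "loaded dataset A"})
append_entry(log, {"model": "fraud-scorer-v2", "event": "scored batch 17"})
print(verify_chain(log))  # True; editing any earlier entry flips this to False
```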

The ultimate kill switch

To be clear, when we talk of a ‘kill switch’, we are not talking about one super identity, but a number of related identities all working together. There could be thousands of machine identities associated with each model, securing every step in the process against unauthorized access and malicious manipulation – from the inputs that train the model, to the model itself and its outputs. This could be a mix of code signing machine identities to verify outputs, alongside TLS and SPIFFE machine identities to protect communications with other machines, cloud native services, and AI inputs. And models must be protected at every stage – both during training and while in use. This means that each machine, in every process, needs an identity to prevent unauthorized access and manipulation.
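To show what the SPIFFE piece of that mix might look like, the sketch below reduces the authorization decision to its simplest form: a service accepts a caller only if it presents a known SPIFFE ID. The trust domain and allowlist are invented for illustration; in practice the peer’s SPIFFE ID would be taken from the URI SAN of an X.509-SVID presented over mutual TLS, not from a plain string.

```python
# A hypothetical SPIFFE-style authorization check. The trust domain
# "example.org" and the workload paths are invented for illustration.
ALLOWED_CALLERS = {
    "spiffe://example.org/ml/training-pipeline",
    "spiffe://example.org/ml/feature-store",
}

def authorize(peer_spiffe_id: str) -> bool:
    """Allow the connection only if the caller's machine identity is known."""
    return peer_spiffe_id in ALLOWED_CALLERS

print(authorize("spiffe://example.org/ml/feature-store"))    # True
print(authorize("spiffe://example.org/ml/unknown-scraper"))  # False
```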

If AI systems go rogue and start to represent a serious threat to humankind, as some key industry figures have warned could be possible, their identities could be used as a de facto kill switch. Taking away an identity is akin to revoking a passport: it becomes extremely difficult for that entity to operate. This kind of kill switch could stop the AI from working, prevent it from communicating with a certain service, or shut it down entirely if it has been compromised. It would also need to kill anything else deemed dangerous in the dependency chain that the AI model has generated. This is where identity-based auditability and traceability become important.
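As a thought experiment, that kill switch can be reduced to a revocation check in front of every unit of work. The function names and in-memory revocation set below are hypothetical stand-ins for a real CRL or OCSP-style lookup against the model’s certificate, but they show the principle: once the identity is withdrawn, the model can no longer run.

```python
# A hypothetical identity-based kill switch. The revocation set stands in
# for a real revocation service; the SPIFFE ID is illustrative.
REVOKED_IDENTITIES: set = set()

def revoke(identity: str) -> None:
    """Pull the model's 'passport': add its identity to the revocation set."""
    REVOKED_IDENTITIES.add(identity)

def run_inference(model_identity: str, request: str) -> str:
    """Refuse to do any work for a revoked identity."""
    if model_identity in REVOKED_IDENTITIES:
        raise PermissionError(f"{model_identity} is revoked; refusing to run")
    return f"result for {request!r}"

model_id = "spiffe://example.org/ml/fraud-scorer-v2"  # illustrative identity
print(run_inference(model_id, "txn-481"))  # works while the identity is valid

revoke(model_id)  # the kill switch: the identity is withdrawn
try:
    run_inference(model_id, "txn-482")
except PermissionError as err:
    print(err)  # with its identity gone, the model can no longer operate
```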

Making AI accountable through identity

As governments around the world grapple with how best to regulate a technology of growing influence and import, none have touted the possibility of AI identity management. Yet the EU’s regulation, by far the most fully formed, says each model must be approved and registered—in which case it naturally follows that each would have its own identity. This opens the door to the tantalizing prospect of building a machine identity-style framework for assurance in this burgeoning space.

There’s plenty still to work out. But assigning each AI a distinct identity would enhance developer accountability and foster greater responsibility, discouraging malicious use. Doing so with machine identity isn’t just something that will help protect businesses in the future – it’s a measurable success today. More broadly, it would help to enhance security and trust in a technology so far lacking either. It’s time for regulators to start thinking about how to make AI identity a reality.


Kevin Bocek

Kevin Bocek is Vice President, Security Strategy and Threat Intelligence at Venafi. He has over 16 years of experience in the IT security industry and is recognized as a subject matter expert in threat detection, encryption, digital signatures, and key management. He also has experience managing technical sales and professional services organizations.
