Maintaining cyber control when AI can act autonomously

AI robotic workers in an office.
(Image credit: Future/NPowell)

The ServiceNow AI platform vulnerability earlier this year reflects a broader shift happening in enterprise cyber risk. There was no evidence of exploitation before a fix was in place, but the incident serves as a warning to cybersecurity professionals.

Weaknesses in agentic AI capabilities can allow user impersonation and privileged workflow execution to take place, illustrating how modern security threats are evolving beyond traditional data breaches.

Article continues below
Matthew Lloyd Davies

Principal Cyber Security Author, Pluralsight.

For businesses operating across the supply chain, the risk of ungoverned AI agents can grow exponentially. Without proper oversight, autonomous agents could create disruptions that cascade across multiple organizations.

As agentic AI adoption increases and becomes embedded in business software, cybersecurity is no longer just about protecting data; it is about controlling the systems that can act on the organization's behalf. Organizations must move beyond a cybersecurity model centered solely on stopping breaches, and instead focus on how to maintain operational control when automated systems act beyond their intended scope.

A changing cybersecurity model

For most of the last two decades, the cybersecurity model was built around a clear perimeter. Cyber teams would typically be managing and preventing compromises at individual server points, discrete, identifiable failures that could be isolated and contained. The rise of agentic AI has shifted their attention.

As AI becomes embedded into core business platforms, organizations don't just need to worry about hallucinations or output inaccuracies. The next major shift is from 'AI content risk' to 'AI action risk'. When AI agents are interacting across identities, APIs, platforms and workflows, it introduces new risk factors, and unlike a static data breach, these can propagate across multiple systems before anyone notices.

The critical question is what AI agents are authorized to do: how they trigger workflows, execute tasks and operate within delegated permissions. When an agent is misconfigured, exploited or granted excessive privileges, the consequences can escalate rapidly, because these systems automate decisions across multiple workflows simultaneously.

The question is no longer only "have we been breached?" but "are our systems still doing what we authorized them to do?" Those are different problems, and they demand different controls.

Retaining operational control

In testing scenarios, researchers demonstrated that unauthenticated external attackers requiring only a target’s email address could embed malicious instructions in data fields that higher-privileged users’ AI agents would later process. If left unmanaged, organizations can expect to see unauthorized workflow execution, cross-platform access expansion and rapid propagation of errors or malicious actions.

In effect, a familiar security flaw becomes more consequential when it sits inside a platform that can act across workflows – often described as impact amplification.

A reported security flaw that enables user impersonation and arbitrary actions within entitlements is exactly the kind of failure mode that leaders should worry about in AI-enabled workflow systems. It’s why knowing how to retain operational control when automated systems behave unexpectedly is crucial.

For cybersecurity teams, this means treating AI features as changes to the organization's control environment. Organizations must reassess permissions, audit trails, monitoring and rollback paths at every AI implementation. Disciplined identity governance, least-privilege access design and tighter privilege management are essential.

This requires a shift in how organizations manage risk. Rather than focusing on supplier assessments, leaders should prioritize integration governance – prioritizing the small number of platforms that can trigger material business actions. It also involves controlling the seams: mapping key integrations, data flows and privileged automations, while monitoring them for abnormal behavior and tightening admin and service-account privileges.

Rehearsed executive response when AI-enabled workflows are exploited will become increasingly important as the link between cyber and AI becomes stronger. Set out clear escalation expectations including rapid disclosure, clear mitigations and tested vendor comms channels. Time to clarity is a critical security capability in AI controlled systems.

The cyber skills gap

Cybersecurity was identified as one of the top skills gaps in our Tech Skills Report, and 95% of IT and business professionals say they lack adequate support to build skills. Clearly, organizations must invest in capability to govern AI-enabled systems effectively.

If AI agents are going to be added to an existing product, cybersecurity must be top of the agenda in the planning stage. That includes ensuring AI agents are narrowly scoped in terms of their privileges and risks are mapped out if something goes wrong. It also demands investing in the technical capability to design, monitor and rapidly contain AI-driven automation.

But this requires skilled professionals whose skills are up to date on the latest AI cyber risks. Currently, the knowledge gap in the majority of organizations makes it hard for security professionals to defend against AI-powered threats – let alone know what to do when something goes wrong. Those organizations that get it right will see a wealth of new learning in how security and privacy in AI work together.

Equally important is practice. Being able to measure readiness with sandbox assessments will ensure decision-making has been exercised and recovery times are widely understood. Rehearsals should also include executive teams, legal and comms, who are poised to react to threats and coordinate quickly with vendors.

What leadership should prioritize

As organizations accelerate the adoption of AI agents, leaders need to redefine risk. That means treating unauthorized actions, workflow manipulation and operational disruption as disaster scenarios worthy of the same rehearsal rigor applied to ransomware or a major outage. It's a responsibility that doesn't just lie with the cybersecurity team. It’s a responsibility that doesn’t just lie with the cybersecurity team.

The questions every leadership team should already have answers to are: Who can act on our behalf? What's the kill switch? What's our containment move in the first hour? Organizations that have rehearsed those answers, across cyber, legal, comms and executive teams, will be the ones that keep core systems running when something goes wrong.

Check our list of the best Firewalls: reviewed, rated, and ranked.

TOPICS

Principal Cyber Security Author, Pluralsight.

You must confirm your public display name before commenting

Please logout and then login again, you will then be prompted to enter your display name.