TL;DR: AI is spreading across organizations faster than governance can keep up, creating a surge of unmanaged machine identity security gaps and “shadow AI” that expands the attack surface. Attackers can now compromise identities in minutes, making machine identity security the control plane for safe AI adoption.

Nearly everyone agrees that AI is reshaping the technology industry at a pace faster than anything seen in decades. Whether this transformative shift strengthens or weakens an organization’s resilience depends on one thing that many overlook: machine identity security. The pace of innovation is pushing every organization to move quickly, but the organization that wins with AI won’t be the one that moves fastest. It will be the one that scales safely, balancing the willingness to adopt quickly with the mechanisms and controls needed to address both internal and external risks.

As I explored recently on an episode of the Beyond the Breach podcast, the rapid and often invisible expansion of AI across the enterprise is bringing with it an explosion of machine identities that most organizations are wholly unprepared to govern. This raises questions around how Active Directory and broader identity platforms must evolve, how agentic AI changes identity governance, and why human oversight remains critical for assurance. The answers to all these questions start in the same place: Machine identity security determines whether AI strengthens or weakens resilience.

AI is already inside the business

Employees who lack access to the tools they need will find alternatives. In many cases, teams are adopting AI tools without formal oversight or approval, using publicly available providers to enhance their work.

Every AI system carries an identity, a set of permissions, and a potential blast radius. It can access CRM systems, ERP systems, and mail and communication systems. Yet few organizations can clearly answer three simple questions about their AI systems:

  • Who controls those identities?
  • How broadly can they operate?
  • What happens when they behave unexpectedly?

Attackers are already exploiting this gap in governance. They are leveraging the speed and breadth of AI and pointing it directly at organizations. Recent attacks in the UK have demonstrated that the next logical step for threat actors is to compromise third-party AI providers and use federated identities to enter target environments.

The first sign of an AI-assisted intrusion is unexpected identity behavior. Attackers gain access through phishing or by penetrating the infrastructure directly, then create additional identities to accelerate lateral movement. The patterns next appear in data movement: exfiltration of customer and organizational data. According to CrowdStrike’s 2026 Global Threat Report, lateral movement across networks can now occur in just 29 minutes, 65 percent faster than the year before. In one documented intrusion, data exfiltration began within four minutes of entry. The speed at which this unfolds leaves little room for manual intervention. Policies must be in place to identify unexpected behaviors, and solutions must be in place to detect this activity before it escalates.
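To make the dynamic concrete, here is a minimal sketch of the kind of policy check that catches a burst of identity creation. The audit-log format, the window, and the threshold are all illustrative assumptions, not any vendor’s schema:

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical audit events: (timestamp, actor, action) tuples pulled from an
# identity provider's log. Window and threshold are illustrative placeholders.
WINDOW = timedelta(minutes=30)
MAX_NEW_IDENTITIES = 3  # more new accounts than this per actor per window is suspicious

def flag_identity_bursts(events):
    """Return actors that created an unusual number of identities in a short window."""
    creations = defaultdict(list)
    for ts, actor, action in events:
        if action == "identity.create":
            creations[actor].append(ts)
    flagged = set()
    for actor, times in creations.items():
        times.sort()
        for i in range(len(times)):
            # count creations falling inside the sliding window that starts at times[i]
            in_window = sum(1 for t in times[i:] if t - times[i] <= WINDOW)
            if in_window > MAX_NEW_IDENTITIES:
                flagged.add(actor)
                break
    return flagged
```

The point is not the threshold itself but that the rule runs automatically: at 29-minute breakout speeds, a human reviewing logs the next morning is already too late.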

The explosion of non-human identities

Machine identity security is now critical as machine identities outnumber human identities by an estimated 82:1. For every employee logging in, dozens of service accounts, bots, scripts, workloads, and automated agents authenticate continuously in the background – and most organizations don’t know who those identities belong to or what they can access.

Organizations can invest in sophisticated tools and build security operation centers capable of detecting anomalous behavior. But detection alone is not enough. When a signal goes off, the organization must answer:

  • Who owns this identity?
  • For which operation or application is it being used?
  • What response is required, and within what timeframe?
  • Is the risk accepted, mitigated, or escalated?

And then, the obvious question: Should that machine identity even have access to the environment it is working in?

Organizations need systems in place that translate signals into meaningful context.

Over the last 25 years, Active Directory accumulated significant technical debt as identities gained privileges from historic accounts. Non-human identities are following the same trajectory, only at far greater speed. They’re accumulating privileges and increasing the attack surface. The problem is there’s no policy framework in place to track the lifecycle of these accounts and no accountability for who owns them.

The gap isn’t tooling. The gap is governance.

Shadow AI: the new shadow IT

Shadow AI mirrors the shadow IT problem that organizations spent years trying to resolve: an unsanctioned layer of unmonitored tools, self-adopted agents, and unmanaged integrations.

Unusual API call patterns may indicate that an external AI service has been connected to an internal system, and those can be monitored. What can’t be easily monitored is an employee copying company data into a browser-based AI chat and pasting the output back into a work document. Equally difficult to detect is the developer quietly experimenting with an agentic model.
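As a sketch of what the monitorable half looks like, the snippet below flags outbound calls from internal systems to destinations never seen before. The egress log format and per-system allowlist are assumptions; in practice this telemetry would come from a proxy or egress gateway:

```python
# Hypothetical per-system allowlist of previously observed destinations.
KNOWN_DESTINATIONS = {
    "crm-app": {"api.internal.example", "mail.internal.example"},
}

def unusual_destinations(calls):
    """Flag outbound (source_system, destination_host) pairs never seen before."""
    alerts = []
    for source, dest in calls:
        if dest not in KNOWN_DESTINATIONS.get(source, set()):
            alerts.append((source, dest))
    return alerts
```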

A small AI model that checks the status of a website, runs diagnostics, and brings it back online is seemingly harmless. But if that process creates an IP conflict and takes down another website, the financial implications could be significant. And the risk will not appear on your risk register, because no one formally introduced it.

The risk does not stem from experimentation itself. It stems from the identities created to support that experimentation and the lack of machine identity security around them.

When company data enters an AI system, whether purposefully or inadvertently, the organization loses visibility into where it goes and who can access it. Under evolving international compliance requirements, that loss of visibility carries real regulatory risk. Regardless of whether autonomous AI agents were involved in an incident, the person who signed the risk register is ultimately liable.

You cannot stop shadow AI with policies alone. You need to stop it by controlling identity. Identity is how shadow AI operates within your environment.

Controlling shadow AI starts with:

  • Establishing non-human identity hygiene for every tool and every agent within your organization
  • Ensuring all machine identities are least privileged by default, so they cannot break out of the guardrails you’ve put in place
  • Building the capability to roll back identities, both from a cloud and on-premises perspective
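The first two steps above can be sketched as an automated hygiene check over a non-human identity inventory. The record fields and the list of “broad” scopes here are assumptions for illustration, not a real directory schema:

```python
from dataclasses import dataclass, field
from typing import Optional

# Scopes treated as "too broad" for a machine identity in this sketch.
BROAD_SCOPES = {"*", "admin", "directory.write"}

@dataclass
class MachineIdentity:
    name: str
    owner: Optional[str]           # accountable human or team, if any
    scopes: set = field(default_factory=set)

def hygiene_findings(identities):
    """Return (identity, issue) pairs for basic ownership and least-privilege violations."""
    findings = []
    for ident in identities:
        if ident.owner is None:
            findings.append((ident.name, "no accountable owner"))
        if ident.scopes & BROAD_SCOPES:
            findings.append((ident.name, "overly broad scope"))
    return findings
```

Run continuously rather than at audit time, a check like this is what turns the bullet points above from policy into practice.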

Identity recovery must be treated as a first-class control, embedded in your policies and in your broader approach to the organization.

Machine identity security as the AI control plane

Active Directory isn’t disappearing in an AI-driven future. It is becoming the silent control plane for many organizations. Active Directory and Entra ID are the foundations for both on-premises and SaaS approaches. Active Directory still anchors trust for major enterprises and will not be displaced overnight, even as AI is introduced.

Identity drives every organization. If identity systems aren’t in place, you cannot authenticate. You cannot communicate. In many cases, you can’t even make a call from an office, because these systems all require identity.

Agentic AI can execute large-scale modifications to systems in milliseconds, well before any human can intervene. The speed and scale of automated action must be proportionate to the controls in place. Without that balance, every AI capability introduced carries potential liability alongside its value.

Machine identity security depends on three non-negotiables:

  1. Visibility into every identity – Organizations must understand what is happening and distinguish correct behavior from misconfigured behavior.
  2. Least privilege by default – No agent or tool should be able to gain access beyond defined guardrails.
  3. Recovery readiness – Recovery capability must be tested regularly (at least twice a year, preferably every month) with different staff to build muscle memory.
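The rotation in the third point can be as simple as a deterministic schedule. This sketch (staff names and cadence are placeholders) pairs different people for each monthly drill, so recovery knowledge doesn’t concentrate in one person:

```python
from datetime import date

# Placeholder roster; in practice this would come from an on-call system.
STAFF = ["alice", "bob", "chen", "dana"]

def drill_pair(drill_date: date):
    """Deterministically rotate a two-person recovery-drill team by month."""
    idx = (drill_date.year * 12 + drill_date.month) % len(STAFF)
    return STAFF[idx], STAFF[(idx + 1) % len(STAFF)]
```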

Attackers are already targeting third-party AI providers, using federated identities to enter enterprise environments. Organizations must be able to demonstrate – based on regulatory and international compliance requirements – that they have systems in place that can withstand these pressures. This is not only a requirement for CISOs. It is a governance responsibility, and it needs to be evidentiary: tested and provable to auditors.

What it takes to win with AI

AI identity governance isn’t about saying no to innovation. It’s saying yes – but safely, repeatedly, and at scale. It’s about balancing enablement with an understanding of the ramifications and implications of the data that these systems will access. Machine identity security is what makes that balance possible.

The goal is confident automation: generating the commercial differentiators that let you outpace competitors, while maintaining the governance framework, safeguards, and policies required to prevent liability and sustain what is being built.

Can your business sustain and thrive under these conditions? That question starts with machine identity security.

Bastiaan Verdonk has over 30 years' experience in the IT industry, with a special focus on Identity Threat Detection and Response, Active Directory, and the evolving state of cybersecurity. During his 20 years at Quest Software, he has supported customers around the globe in implementing Quest products across a wide variety of environments and challenges. Most recently, Bastiaan has become a trusted subject matter expert on cybersecurity and resilience, sharing his experience and knowledge with audiences through speaking engagements. He spoke at the Gartner IAM conference in 2025 and is part of the Technical Expert Conference, hosted by Quest in both the US and EMEA.

Explore the Beyond the Breach podcast

Tune in to the full series for expert perspectives on identity security and what it takes to build cyber resilience in the age of AI.