AI Agent Governance: Why Discovery Isn’t Enough
3 May 2026

If you count the number of active AI agents in a mid-sized financial company today, you’re usually counting wrong – because most haven’t been captured yet. That’s the actual problem: not missing technology, but missing accountability. Non-Human Identities (NHIs) – AI agents, service accounts, automated pipelines – outnumber human identities in organizations today by a factor of 50 to 100. The trend is rising, especially since the standardization of interfaces such as the Model Context Protocol (MCP) has simplified the integration of applications and data with (external) AI services.
The discovery problem is largely solved. Platforms like our partner Astrix and others can detect, list, and classify agents. The actual question is the next one: Who is responsible for this agent? What is it allowed to do? For how long? And who checks this afterwards?
Most organizations don’t have an answer to this yet.
Non-Human Identities (NHIs) are digital identities of AI agents, service accounts, and automated systems that access resources without direct human interaction. NHI Governance describes the systematic control, monitoring, and recertification of these identities within existing IAM and GRC structures.
Note on Scope: This article refers to long-lived, static agents with defined roles and permanent access rights – not to short-lived agents deployed for completing individual, time-limited tasks. Governance requirements and lifecycle considerations differ fundamentally.
Why Classical IAM Structurally Fails at AI Agent Governance
Traditional Identity and Access Management was built for humans. Behind every identity stands an employee, a contractor, a partner – a person who can identify themselves, be trained, and be held accountable.
Agents follow a different logic. They act autonomously. They operate context-sensitively. They can accumulate permissions – not through malicious intent, but because their tasks change without governance processes keeping pace. This is classic permission creep, just at 50 times the speed.
Static role models don’t work here. A role model designed for a clerk with stable tasks and a fixed employment contract doesn’t map the dynamics of an agent that uses different tools tomorrow than today. This isn’t a configuration problem – it’s a conceptual mismatch.
Moreover, many agents don’t just interact with internal systems but also with external, often foreign-hosted AI services. This means potentially sensitive company information leaves the organization’s sphere of control – often without approval, without logging, and without any ability to recall it.
And the numbers make the pressure clear: NHIs exceed human identities in organizations today by a factor of 50 to 100 – and in some environments even beyond that. Anyone who only discovers this mass but doesn’t govern it hasn’t begun the actual work yet.
A Pragmatic Framework for AI Agent Governance
Anyone waiting for ready-made market standards is waiting too long. Industry-wide, unified standards for agent-to-agent communication [1], delegation of authorizations, impersonation, and best practices are still in development.
The A2A protocol is an open standard that enables communication and collaboration between AI agents. It provides a common language – regardless of which frameworks or vendors the agents were developed with. Agents can thus work together as autonomous entities across organizational and technological boundaries. The protocol breaks down existing silos and creates the prerequisite for coordinated, cross-vendor agent architectures. More information: https://a2a-protocol.org
These are real, unsolved problems. Nevertheless, there is a pragmatic path today that can be implemented with existing IAM and GRC structures.
Clustering and Ownership: The Basic Requirement
Without ownership there is no accountability. Every agent needs a human owner – a person responsible for the scope of permissions, operations, and recertification. This sounds trivial. In practice, many organizations fail right here because agents are deployed by development teams without being integrated into existing governance processes.
Clustering helps: Grouping agents by purpose, risk class, system affiliation – but also by intent profile and by the data and processes they access – significantly simplifies administration. This data and process information is typically already captured in GRC tools and can be directly utilized. A taxonomy built on this creates the foundation for scalable governance without manually managing each agent individually.
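As an illustrative sketch only – the field names and risk classes below are assumptions for the example, not a NEXIS or Astrix API – such a taxonomy can be modeled as a small agent record plus a grouping function, so governance rules attach to clusters rather than to individual agents:

```python
from collections import defaultdict
from dataclasses import dataclass


@dataclass(frozen=True)
class Agent:
    """Minimal NHI record; fields mirror the clustering dimensions above."""
    name: str
    owner: str                    # the human owner – the basic requirement
    purpose: str                  # e.g. "invoice-processing" (hypothetical)
    risk_class: str               # e.g. "high", "medium", "low" (hypothetical)
    data_classes: frozenset[str]  # data the agent touches, typically from GRC tooling


def cluster_agents(agents: list[Agent]) -> dict[tuple[str, str], list[Agent]]:
    """Group agents by (purpose, risk_class) so scope, review cadence, and
    recertification rules can be defined once per cluster."""
    clusters: dict[tuple[str, str], list[Agent]] = defaultdict(list)
    for agent in agents:
        clusters[(agent.purpose, agent.risk_class)].append(agent)
    return dict(clusters)
```

With such a structure, a recertification campaign can iterate over clusters instead of over every individual agent – the scaling argument made above.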
Lifecycle Management: Joiner, Mover, Leaver for NHIs
What is self-evident for human employees is missing almost everywhere for agents: a clearly defined lifecycle. Joiner – the agent is deployed productively and receives defined permissions. Mover – its tasks or technical context change, and permissions are adjusted. Leaver – the agent is decommissioned, and all access is revoked.
This sounds trivial. In practice, it usually fails at offboarding – not of agents, but of humans. When an employee leaves the company who created and operated an agent, this agent becomes an orphaned object: active, with full permissions, but without a responsible person. This is one of the most common and least discussed NHI risk paths.
The solution lies in linking human and non-human lifecycle management. Offboarding processes for employees must include NHI ownership as a mandatory step: Which agents has this person been responsible for? What will be reassigned, what will be deactivated? Without this linkage, a blind spot remains that grows with every employee turnover [2].
Least Privilege, SoD and Policy-Based Authorization
The least privilege principle applies to agents just as it does to humans – with the difference that agents actively try to circumvent it when their tasks require it. Context-based access control is therefore more important than static role assignments.
Policy-Based Authorization – the combination of RBAC, ABAC, and rule-based control – allows access rights to be dynamically tied to the actual context of the agent: which task, which data class, which risk level. Segregation of Duties (SoD) must also be consistently enforced – the separation of incompatible permissions that has been standard for human identities for years [3]. Policy-based approaches extend SoD to agents: when authorizations are granted via policies, SoD rules can be defined at the policy level and enforced across systems – regardless of how many agents receive those policies.
A significant advantage here: PBAC policies can be recertified independently of the agents. Since the number of policies is typically significantly lower than the number of individual agents, this considerably simplifies governance – and makes it scalable. This isn’t future technology. This can be implemented in IGA systems today if the conceptual framework is right.
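To make the policy-level SoD idea concrete – the policy names and permission strings below are invented for illustration and do not come from any specific IGA product – a check can run against the combined permission set of an agent’s assigned policies, before any agent ever receives them:

```python
# Policies bundle permissions; they are defined (and recertified) once,
# independently of how many agents they are later assigned to.
POLICIES: dict[str, set[str]] = {
    "invoice-create":  {"erp:create_invoice"},
    "invoice-approve": {"erp:approve_invoice"},
    "reporting":       {"bi:read_reports"},
}

# SoD rules at policy level: permission pairs that must never be
# held by the same identity, human or non-human.
SOD_RULES: list[tuple[str, str]] = [
    ("erp:create_invoice", "erp:approve_invoice"),
]


def sod_violations(assigned_policies: list[str]) -> list[tuple[str, str]]:
    """Return every SoD pair that the combined policy set would violate."""
    effective: set[str] = set()
    for policy in assigned_policies:
        effective |= POLICIES[policy]
    return [pair for pair in SOD_RULES if set(pair) <= effective]
```

Because the check operates on policies, not agents, it scales with the (small) number of policies – the same argument that makes policy-level recertification tractable.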
Intent Hierarchy as a Governance Foundation
An emerging perspective that directly connects IAM and GRC is the concept of Intent Hierarchy [5]. It describes how different layers of intent define and limit what an agent is allowed to do:
| Layer | Description | IAM Relevance |
|---|---|---|
| Organizational Intent | Corporate policies, regulatory requirements, data protection specifications. Highest priority – not overridable by user instructions. | Directly connected to ISMS policies and GRC frameworks. Defines the hard boundaries for every agent. |
| Role-Based Intent | The digital job description of the agent – area of responsibility, autonomy boundaries, context within the organization. | Directly mappable as organizational role in IGA systems. Connects technical design with business purpose. |
| Developer Intent | What the agent can technically do – capability boundaries, allowed APIs, guardrails against undesired behavior. | Defines the technical framework; relevant for system integration and connector configuration. |
| User Intent | What an end user demands from the agent – the concrete task goal of an interaction. | Only fulfilled if all higher layers permit it. Escalation point in case of conflicts. |
The conflict hierarchy is clear: Organizational overrides Role-Based, which overrides Developer, which overrides User. For governance teams this means: The first two layers are their terrain.
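The precedence rule can be sketched in a few lines – a simplified model in which each layer contributes an allow-set of actions (the layer names follow the table; the action strings are hypothetical). An action is fulfilled only if every layer permits it, and the first denial from the top wins:

```python
# Layers ordered from highest to lowest priority; a denial at a higher
# layer cannot be overridden by any layer below it.
INTENT_LAYERS = ["organizational", "role_based", "developer", "user"]


def is_action_allowed(
    action: str,
    layer_policies: dict[str, set[str]],  # layer name -> allowed actions
) -> tuple[bool, str]:
    """Walk the hierarchy top-down. Returns (allowed, deciding_layer):
    the first layer that does not permit the action decides."""
    for layer in INTENT_LAYERS:
        if action not in layer_policies.get(layer, set()):
            return (False, layer)
    return (True, "user")
```

In this toy model, a user asking an agent to delete data is blocked at the organizational layer even if the developer layer technically enables deletion – which is exactly why the first two layers are the governance team’s terrain.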
Organizational Intent connects back to the GRC perspective: policies – i.e., guidelines – describe, at the corporate level and in technology-neutral terms, what is and isn’t allowed in the company. The Information Security Management System (ISMS) provides the framework for how these guidelines are implemented, managed, and controlled. They are the guardrails for every governance decision – including for agents. Anyone who wants to enforce Organizational Intent in their agents must first know what their policies say.
Role-Based Intent is the link between technical design and organizational business purpose – and it is directly mappable as an organizational role in IGA systems. A “HIPAA Compliance Reviewer” has different Role-Based Intent than an “HR Onboarding Assistant,” even if both run on the same technical foundation.
This isn’t abstract architectural theory. It’s the foundation for recertification logic that actually delivers governance value.
IAM and GRC Converge – Especially with External Agents
Anyone using external AI services, whether an LLM via API or a SaaS-bound agent, moves in legal and regulatory terrain that has previously been reserved for third-party management.
This isn’t an analogy today. It’s already reality. External agents have access to corporate data, perform actions, and make context-based decisions – just like external service providers. The requirements from DORA, NIS2, and internal governance frameworks suggest treating them the same way: with risk classification, contractual basis, data protection review, and periodic access review.
IAM and GRC have largely operated separately so far. Agent Governance is the point where this separation no longer works. An organization that treats NHIs only as an IAM problem overlooks the regulatory dimension. One that treats it only as a compliance issue overlooks the technical one. Identity Visibility and Intelligence Platforms (IVIP) – like the NEXIS Platform – address precisely this convergence: They create a unified governance layer across fragmented IAM structures and make visible risks that remain hidden in individual solutions [4].
Start Now – Don’t Wait for Finished Standards
The open questions about industry-wide, unified standards (Agent-to-Agent [1] and others), delegation, impersonation, and best practices are real. But they must not be a waiting state. The basic structure – ownership, authorization governance, lifecycle management, recertification, third-party assessment for external services – is implementable today.
The organizations that start now aren’t investing in a temporary solution. They’re building adaptability: the ability to integrate new agent classes, new protocols, and new regulatory requirements into an already functioning governance structure.
The NEXIS Platform supports this approach as an Identity Visibility and Intelligence Platform (IVIP) with integrated capabilities for governance, Policy-Based Authorization, and the integration of GRC capabilities like Policy Management and Third Party Management – directly deployable on existing IAM structures, without greenfield requirements.
Appendix
FAQ
What is the difference between NHI Discovery and NHI Governance?
Discovery identifies which Non-Human Identities exist in an environment – agents, service accounts, API keys, tokens. This is a necessary prerequisite, but not a sufficient one. Governance takes the next step: It ensures that every NHI has a responsible owner, only possesses the actually required permissions, is subject to a defined lifecycle, and is regularly recertified. Discovery answers “What exists?”. Governance answers “Who is responsible, what is it allowed to do, and until when?”
How do I integrate AI agents into existing IAM lifecycle processes?
The most pragmatic entry point is linking with existing joiner-mover-leaver processes. Specifically: Offboarding workflows must include NHI ownership as a mandatory step. When an employee leaves the company, all agents for which they were responsible must be reassigned or deactivated. Beyond that, every agent needs a defined organizational role in the IGA system – this creates the foundation for permission control, SoD review, and recertification.
What is Organizational Intent and why is it relevant for IAM teams?
Organizational Intent describes the outermost boundary for the behavior of an AI agent: corporate policies, regulatory requirements, and security specifications that the agent must comply with – regardless of what a user demands. This is directly relevant for IAM teams because Organizational Intent comes from the same sources as classic governance policies: the ISMS, GRC frameworks, and regulatory requirements like DORA or NIS2. Anyone already managing these policies has already laid the conceptual foundation for Agent Governance.
What is an IVIP platform and what role does it play in NHI Governance?
Identity Visibility and Intelligence Platforms (IVIP) are platforms that bring together data from fragmented IAM systems – IGA, PAM, Access Management – in a unified governance layer and enrich it through analytics and AI. For NHI Governance they are particularly relevant because they practically address the convergence of IAM and GRC: They provide visibility across all identity types – human and non-human – and enable consistent control across system boundaries. An example of an IVIP platform is the NEXIS Platform [4].
Sources
[1] Agent2Agent Protocol (A2A). https://a2a-protocol.org
[2] Klarl, H. (2025, June 5). Enhancing IAM Hygiene – The Hidden IAM Risks You Can Fix This Quarter.
[3] Nexis. Segregation of Duties in Modern IT Landscapes: Your Guide to Secure and Audit-Ready SoD Controls.
[4] Klarl, H. (2025, September 17). From Patchwork to Governance: The Role of IVIP in Modern Identity Fabrics.
[5] Copty, F., Haiby, N., & Hen, I. (2026, March 19). Governing AI Agent Behavior: Aligning User, Developer, Role, and Organizational Intent. Microsoft Security Community Blog. https://techcommunity.microsoft.com/blog/microsoft-security-blog/governing-ai-agent-behavior-aligning-user-developer-role-and-organizational-inte/4503551