IAM

Why 99% Accurate AI Isn’t Good Enough for Identity Governance

8 May 2026
Dr. Heiko Klarl
Viktoria Müller

AI in identity governance is no longer a nice-to-have. It’s expected. But expectations come with conditions. The condition that matters most? Trust.

At Nexis, we’ve spent years building AI into the NEXIS Platform with one principle: transparency over black-box automation. Viktoria Müller (IAM Consultant at Nexis) and I recently hosted a webinar titled “AI You Can Trust: Transparent Identity Governance with NEXIS” to explore why explainability matters and how we’ve addressed it.

What is Explainable AI in IAM?

Explainable AI in identity and access management refers to AI systems that provide clear reasoning for their recommendations – showing which data patterns triggered a suggestion and allowing users to verify the logic before accepting it. Unlike black-box AI models that produce outputs without transparency, explainable AI in IAM ensures that access decisions, role recommendations, and risk assessments remain auditable and deterministic.

Why IAM Needs Explainable AI

Identity and access management deals with sensitive data. Who has access to what, why they have it, and whether that access creates risk are not questions you answer with guesswork. IAM practitioners – the people managing roles, running recertifications, enforcing segregation of duties – need precision. They need determinism. A 99% accurate recommendation isn’t good enough when the 1% outlier is a compliance violation or a privilege escalation.

This is where explainable AI becomes critical. Traditional machine learning models have been part of IAM tools for years. Role mining, anomaly detection, access pattern analysis – these use cases benefit from deterministic algorithms that produce consistent, repeatable results. But when you introduce large language models (LLMs) into the mix, the stakes change. LLMs are probabilistic. They generate text, summarize documents, and assist users in ways that feel conversational and intuitive. That’s valuable. It’s also risky if you can’t explain why the AI suggested something.

Traditional AI vs. Explainable AI in IAM:

  • Traditional AI: Black-box recommendations, “trust the model” approach, probabilistic outputs only, fixed model updates, external LLM dependency
  • Explainable AI (NICO): Transparent reasoning shown, user verifies logic before accepting, deterministic + probabilistic hybrid, learns from user feedback, BYO LLM support

NICO, the AI co-pilot embedded throughout the NEXIS Platform, addresses this directly. NICO doesn’t just make suggestions. It explains them. When NICO flags a data quality issue – such as an employee’s location attribute showing “Frankfurt” but their role assignments suggesting they work in Munich – it shows the pattern it detected and explains the reasoning. Users can accept or reject the recommendation. The decision stays with the user, not the AI. That distinction matters.
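The kind of pattern NICO surfaces here can be illustrated in a few lines of Python. This is a deliberately simplified sketch: the data model, field names, and confidence threshold are illustrative assumptions, not the platform’s actual logic.

```python
# Illustrative sketch: flag identities whose location attribute disagrees
# with the location implied by most of their role assignments.
# Data model and names are hypothetical, not the NEXIS implementation.
from collections import Counter

def location_mismatch(identity, role_locations):
    """Return an explanation string if the identity's location attribute
    conflicts with the dominant location of its assigned roles."""
    locations = [role_locations[r] for r in identity["roles"] if r in role_locations]
    if not locations:
        return None
    dominant, count = Counter(locations).most_common(1)[0]
    # Only flag when the evidence is strong: most roles point elsewhere.
    if dominant != identity["location"] and count / len(locations) >= 0.75:
        return (f"Attribute says {identity['location']!r}, but "
                f"{count}/{len(locations)} role assignments suggest {dominant!r}.")
    return None

employee = {"id": "u1042", "location": "Frankfurt",
            "roles": ["sap_fi_muc", "sharepoint_muc", "vpn_muc", "travel_global"]}
role_locs = {"sap_fi_muc": "Munich", "sharepoint_muc": "Munich", "vpn_muc": "Munich"}
print(location_mismatch(employee, role_locs))
```

The key point is the explanation string: the user sees which pattern triggered the flag and decides whether to accept the fix.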

NICO in Action: Automated Access Reviews and Data Quality

Two core capabilities demonstrate how explainable AI transforms identity governance: automated access reviews and role management.

In recertification scenarios, NICO surfaces changed employees first. Most recertification campaigns drown managers in long lists of assignments. The easiest path? Click “approve all” and move on. That defeats the purpose. NICO restructures the workflow. It highlights outliers – employees whose attributes changed since the last review, roles with conflicting entitlements, assignments that don’t match typical patterns. In pilot deployments with mid-sized enterprises, NICO reduced recertification review time by 60-70% by surfacing high-risk assignments first. Managers focused on the 15-20% of assignments requiring scrutiny instead of clicking through thousands of unchanged roles. Managers still make every decision, but they spend their time where it matters.
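Outlier-first ordering can be sketched as a simple scoring function. The weights and field names below are assumptions for illustration, not NICO’s actual model:

```python
# Illustrative sketch of outlier-first recertification ordering.
# Scoring weights and field names are assumptions, not NICO's model.
def review_priority(assignment):
    score = 0
    if assignment.get("sod_conflict"):
        score += 5   # conflicting entitlements are highest risk
    if assignment.get("attributes_changed_since_last_review"):
        score += 3   # changed employees surface early
    if assignment.get("peer_outlier"):
        score += 2   # doesn't match typical patterns in the peer group
    return score

assignments = [
    {"user": "alice", "role": "AP_Clerk"},
    {"user": "bob", "role": "AP_Clerk", "sod_conflict": True},
    {"user": "carol", "role": "Sales", "attributes_changed_since_last_review": True},
]
ordered = sorted(assignments, key=review_priority, reverse=True)
print([a["user"] for a in ordered])  # high-risk assignments first
```

Unchanged, unremarkable assignments sink to the bottom of the list, so reviewers spend their attention where the risk actually sits.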

Role management demonstrates something equally important: continuous data quality improvement. Instead of treating data cleanup as a separate project that never gets prioritized, NICO embeds it into daily workflows. When a role owner reviews business roles, NICO flags inconsistencies. Maybe a role includes an entitlement that no one else with that role has. Maybe the role description doesn’t match the actual permissions. NICO explains the issue and proposes a fix. The role owner applies it or dismisses it. Over time, the data gets cleaner without anyone scheduling a “data quality sprint.”
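The “entitlement that no one else with that role has” check boils down to counting entitlement frequency across a role’s members. A minimal sketch, with an assumed threshold:

```python
# Illustrative sketch: flag entitlements held by only a small fraction of a
# role's members - candidates for cleanup. The 10% threshold is an assumption.
from collections import Counter

def outlier_entitlements(members, max_share=0.1):
    """members maps user -> set of entitlements, for everyone in one role."""
    counts = Counter(e for ents in members.values() for e in ents)
    n = len(members)
    return sorted(e for e, c in counts.items() if c / n <= max_share)

role_members = {f"user{i}": {"read_reports", "submit_expense"} for i in range(19)}
role_members["user19"] = {"read_reports", "submit_expense", "approve_payments"}
print(outlier_entitlements(role_members))  # ['approve_payments']
```

A flagged entitlement is not automatically removed; as with every NICO suggestion, the role owner sees the evidence and applies or dismisses the fix.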

Identity data works like kitchen hygiene: clean as you go, or face a disaster later.

Letting dirty data accumulate makes cleanup painful. Addressing it incrementally – during recertifications, during role reviews, during application onboarding – keeps the identity environment manageable.

AI-Assisted Role and Policy Management: IAM Governance Documentation

Regulated enterprises face a documentation burden that most IAM teams dread. Banks and insurers subject to DORA, BAIT, or VAIT must maintain detailed IAM Governance Documentation (“Berechtigungskonzepte” in German) for every application. These documents describe the application’s purpose, its integration with IAM systems, critical entitlements, segregation of duties rules, and more. Application owners hate writing them. Compliance teams hate chasing application owners to finish them.

AI assistance integrated directly into the IAM Governance Documentation changes this dynamic. NICO pre-fills sections based on uploaded documents – user stories, technical specs, existing wikis. The application owner reviews the generated text, edits it if needed, and moves on. NICO also validates the content. If a field is supposed to contain an application description but instead holds copied text from a car manual (yes, that happens), NICO flags it.
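NICO’s validation is LLM-based, but the underlying idea, checking that a field’s text plausibly belongs in that field, can be illustrated with a toy vocabulary-overlap heuristic. Everything below (vocabulary, threshold, sample texts) is an assumed stand-in:

```python
# Toy stand-in for content validation. NICO uses an LLM; this heuristic just
# illustrates the idea of checking whether text fits its field.
APP_DESCRIPTION_VOCAB = {"application", "system", "users", "access", "module",
                         "service", "interface", "data", "process", "iam"}

def looks_like_app_description(text, min_hits=2):
    words = set(text.lower().split())
    return len(words & APP_DESCRIPTION_VOCAB) >= min_hits

good = "The application manages travel expense data and grants users access via SSO."
bad = "Check tire pressure monthly and rotate tires every 10,000 km."
print(looks_like_app_description(good), looks_like_app_description(bad))
```

The copied-from-a-car-manual text fails the check; the real description passes.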

The goal isn’t to eliminate human judgment. It’s to eliminate repetitive, low-value work. Application owners shouldn’t spend hours rewriting information that already exists somewhere in the enterprise. NICO extracts that knowledge, structures it, and lets the owner focus on what’s unique or requires expertise.

Bring Your Own LLM: Enterprise-Ready AI Architecture

Large organizations don’t want to depend on external LLM providers for sensitive IAM workflows. They want control – over the model, over the data, over the hosting environment.

NEXIS supports bring-your-own-LLM (BYO LLM). If an enterprise runs its own LLM – whether hosted on-premise or in a private cloud – that model can be plugged directly into NEXIS. The default Microsoft Azure LLM service gets replaced with the enterprise’s own model. Data never leaves the organization’s environment. The model benefits from enterprise-wide training and context. Users get a consistent AI experience across all tools, not fragmented co-pilots that don’t know each other exist.
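Architecturally, BYO LLM comes down to a narrow provider interface with the concrete backend chosen by configuration. The class and config names below are assumptions for illustration, not the NEXIS API:

```python
# Illustrative sketch of the BYO-LLM swap point: a narrow provider interface
# the platform calls, with the backend selected by configuration.
# Names are hypothetical, not the NEXIS API.
from abc import ABC, abstractmethod

class LLMProvider(ABC):
    @abstractmethod
    def complete(self, prompt): ...

class AzureOpenAIProvider(LLMProvider):      # default managed service
    def complete(self, prompt):
        raise NotImplementedError("would call Azure OpenAI over the network")

class EnterpriseLLMProvider(LLMProvider):    # on-premise / private cloud model
    def __init__(self, endpoint):
        self.endpoint = endpoint             # data never leaves this environment

    def complete(self, prompt):
        return f"[{self.endpoint}] response to: {prompt}"

def build_provider(config):
    if config.get("byo_llm_endpoint"):
        return EnterpriseLLMProvider(config["byo_llm_endpoint"])
    return AzureOpenAIProvider()

provider = build_provider({"byo_llm_endpoint": "https://llm.intra.example/v1"})
print(type(provider).__name__)  # EnterpriseLLMProvider
```

Because every NICO feature talks to the same interface, swapping the model changes nothing for the user: the AI experience stays consistent while the data stays in-house.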

This matters more as enterprises deploy dozens of cybersecurity tools, each with its own embedded AI assistant. Without centralization, organizations end up with 40 different co-pilots, each asking users to adapt to a different interaction model. BYO LLM enables standardization.

MCP: Opening Identity Data to the Enterprise

The Model Context Protocol (MCP) takes this concept further. MCP allows other systems – other LLM-powered applications, other AI agents – to access NEXIS data and functions without heavyweight API integrations. Instead of building custom connectors for every use case, NEXIS capabilities get exposed through MCP. An enterprise LLM can query role assignments, pull IAM Governance Documentation data, or check segregation of duties rules.

Why does this matter? Because identity data shouldn’t be locked inside the IAM platform. NEXIS functions as a high-quality identity and cybersecurity data lake. Other teams – security operations, compliance, IT service management – benefit from that data. MCP makes it accessible without forcing every consumer to become a NEXIS expert or invest in custom integration projects.

MCP adoption is accelerating as enterprises build agentic AI workflows. The Model Context Protocol is rapidly becoming the standard for AI agent interoperability in enterprise environments. By exposing NEXIS as an MCP server, identity governance data becomes queryable by any MCP-compatible tool – from Microsoft Copilot to custom enterprise agents – without per-tool integration projects. Example: When Microsoft Copilot needs to verify whether a user should have access to a specific SharePoint folder, it queries the NEXIS MCP server for role entitlements and SoD rules – no manual lookup required.

When an AI agent needs to understand application access rules before provisioning a resource, it queries the MCP server. When a compliance audit requires evidence of role recertification, the audit system pulls it via MCP. This capability transforms identity governance from a closed system into an enterprise-wide intelligence source.
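On the wire, the MCP interactions described above are JSON-RPC 2.0 messages. The envelope below (`"method": "tools/call"`) follows the MCP specification; the tool name and arguments are hypothetical:

```python
# Illustrative sketch: the JSON-RPC 2.0 message an MCP client sends to invoke
# a tool on an MCP server. Tool name and arguments are hypothetical; the
# "tools/call" method is part of the Model Context Protocol specification.
import json

def mcp_tool_call(request_id, tool, arguments):
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",          # MCP method for invoking a server tool
        "params": {"name": tool, "arguments": arguments},
    })

# e.g. a compliance agent checking segregation of duties before provisioning
msg = mcp_tool_call(1, "check_sod_rules",
                    {"user": "u1042", "entitlement": "approve_payments"})
print(msg)
```

Any MCP-compatible client can speak this envelope, which is why exposing NEXIS as an MCP server removes the need for per-tool connectors.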

Trust Through Transparency

AI in IAM must be explainable. Probabilistic models have a place – they excel at summarization, content generation, and conversational interfaces. But they don’t replace deterministic governance. They enhance it.

NICO doesn’t make decisions. It supports them. It surfaces insights that would take hours to find manually. It explains its reasoning. It learns from user feedback. And critically, it operates within an architecture that lets enterprises retain control – over the model, over the data, over the outcomes.

That’s what trust looks like in AI-powered identity governance.

Want to see NICO in action? Experience transparent AI for identity governance in a personalized demo.

Request a Demo


Frequently Asked Questions

What makes AI explainable in identity governance?
Explainable AI in IAM shows the reasoning behind recommendations – which data patterns triggered a suggestion, which rules were applied, and why a user or role was flagged. NICO provides this transparency by surfacing the logic behind every suggestion.

Can NEXIS use our organization’s own LLM?
Yes. NEXIS supports bring-your-own-LLM (BYO LLM), allowing enterprises to replace the default Azure LLM with their own on-premise or private cloud model. Identity data never leaves your environment.

How does MCP integrate with NEXIS?
MCP (Model Context Protocol) allows other enterprise systems – compliance tools, security platforms, AI agents – to access identity data from NEXIS without custom integrations. They can query role assignments, IAM Governance Documentation, and SoD rules directly. This turns NEXIS into a shared data source across your organization.

Does NICO make access decisions automatically?
No. NICO surfaces insights and recommends actions, but humans make final decisions. The AI explains its reasoning, and users approve or reject suggestions. This preserves governance control while reducing manual effort.