
Gap analysis for AI governance

Gap analysis in AI governance is the process of measuring an organization's current AI practices, policies, and controls against a target benchmark. That benchmark might be a regulation, an industry standard, or an internal maturity goal. The point is to find missing policies, incomplete processes, or misaligned values across the AI lifecycle.

Why gap analysis matters

Gap analysis gives governance teams a structured way to find risks, compliance failures, and weak ethical safeguards before they turn into liabilities. As AI deployments scale, controls around data, models, and decision-making need to keep pace with regulatory, ethical, and operational expectations. Without a deliberate assessment, companies risk legal exposure, reputational damage, and operational failures.

A significant majority of enterprises claim to have AI governance initiatives, yet fewer than half can demonstrate measurable maturity against any standard benchmark. The distance between intention and implementation is exactly what gap analysis is designed to surface.

The EU AI Act's conformity assessment requirements for high-risk systems make gap analysis particularly urgent. Companies deploying AI in covered use cases need to understand where their current practices fall short of regulatory requirements well before enforcement deadlines.

Practical methodology

The most structured approach follows ISO 42001's clause structure, using its 38 Annex A controls across nine control objectives as the assessment backbone.

Step 1: Define scope

Determine which AI systems, departments, and use cases are in scope. Assemble a cross-functional team with representatives from IT, compliance, data science, legal, risk management, and business units that deploy AI. Clear scope definition prevents both coverage gaps and wasted effort.

Step 2: Current state inventory

Document existing AI policies, governance structures, risk processes, data practices, and technical controls through a combination of document review, stakeholder interviews, and system inspection. Look at both formal documentation and actual operational practices, since systems sometimes operate differently than they are documented.

The most fundamental gap often emerges at this stage: many companies simply do not have a complete inventory of the AI systems running in production. Shadow AI, where models get deployed outside official governance processes, is a growing concern.
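In practice, a current-state inventory can start as one structured record per system. The sketch below is a minimal, hypothetical Python example; the record fields and the `shadow_ai` helper are illustrative assumptions, not fields mandated by any standard.

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One row in a current-state AI inventory (illustrative fields only)."""
    name: str
    owner: str        # accountable team or individual
    purpose: str
    data_categories: list = field(default_factory=list)  # e.g. ["personal"]
    in_production: bool = False
    governance_approved: bool = False  # passed the official governance process?

def shadow_ai(inventory):
    """Systems running in production outside official governance processes."""
    return [s for s in inventory if s.in_production and not s.governance_approved]

inventory = [
    AISystemRecord("credit-scoring-v2", "risk-analytics", "loan decisions",
                   ["personal", "financial"],
                   in_production=True, governance_approved=True),
    AISystemRecord("support-chatbot", "customer-ops", "ticket triage",
                   ["personal"],
                   in_production=True, governance_approved=False),
]

# Surfaces the ungoverned production deployment for follow-up.
print([s.name for s in shadow_ai(inventory)])
```

Even a spreadsheet with these columns is enough to start; the value is in having a single, complete list rather than in the tooling.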

Step 3: Benchmark against controls

Map current practices against the target standard's requirements. For ISO 42001, assess each of the 38 Annex A controls and rate them as Compliant, Partially Compliant, or Not Compliant.

The controls cover AI policy and objectives, risk assessment processes, data governance (quality, privacy, lineage), model transparency, human oversight mechanisms, performance monitoring, supplier and third-party AI management, incident response, and continuous improvement.
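One lightweight way to record this benchmarking step is a control-to-rating map with a summary roll-up. The Python sketch below is illustrative only: the control names are abbreviated placeholders, not the official Annex A control titles.

```python
from collections import Counter

RATINGS = ("Compliant", "Partially Compliant", "Not Compliant")

# Abbreviated, placeholder control names mapped to assessed ratings.
assessment = {
    "AI policy defined": "Compliant",
    "Risk assessment process": "Partially Compliant",
    "Data quality management": "Not Compliant",
    "Human oversight mechanisms": "Partially Compliant",
    "Third-party AI management": "Not Compliant",
}

def summarize(assessment):
    """Count controls per rating and compute the share fully compliant."""
    for rating in assessment.values():
        assert rating in RATINGS, f"unknown rating: {rating}"
    counts = Counter(assessment.values())
    share = counts["Compliant"] / len(assessment)
    return counts, share

counts, share = summarize(assessment)
print(counts)
print(f"{share:.0%} of assessed controls fully compliant")
```

The same roll-up works for any control framework; only the keys of the `assessment` map change.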

For EU AI Act compliance, map current practices against the specific requirements for the applicable risk tier, including documentation obligations, conformity assessment procedures, and post-market monitoring requirements.

Step 4: Risk-prioritize gaps

Not all gaps carry the same weight. High-risk AI systems (per EU AI Act classification) or gaps involving safety, bias, and data governance typically deserve highest priority. Weigh regulatory exposure, potential harm, business impact, and remediation feasibility when ranking them.

Quick wins that can be addressed immediately should be separated from structural gaps that require longer-term investment. Quick wins build momentum and show progress to leadership.
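The weighing described above can be made explicit with a simple scoring model. In the sketch below, the weights, the 1-to-5 scores, and the quick-win cutoff are all arbitrary illustrations, assuming each gap is scored on the four factors named in the text.

```python
# Illustrative weights over the four ranking factors from the text.
WEIGHTS = {"regulatory_exposure": 0.4, "potential_harm": 0.3,
           "business_impact": 0.2, "remediation_feasibility": 0.1}

# Hypothetical gaps, each scored 1-5 per factor.
gaps = [
    {"name": "No incident response process", "regulatory_exposure": 5,
     "potential_harm": 4, "business_impact": 3, "remediation_feasibility": 4},
    {"name": "Missing model documentation", "regulatory_exposure": 2,
     "potential_harm": 2, "business_impact": 2, "remediation_feasibility": 5},
]

def priority(gap):
    """Weighted priority score: higher means address sooner."""
    return sum(WEIGHTS[factor] * gap[factor] for factor in WEIGHTS)

ranked = sorted(gaps, key=priority, reverse=True)

# Quick wins: easy to remediate regardless of rank (arbitrary cutoff of 5).
quick_wins = [g["name"] for g in gaps if g["remediation_feasibility"] >= 5]

print([g["name"] for g in ranked])
print("Quick wins:", quick_wins)
```

Whatever the exact weights, writing them down forces the team to agree on what "high priority" means instead of relying on ad hoc judgment.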

Step 5: Remediation roadmap

Build a prioritized action plan with specific owners, timeframes, and resource estimates for each gap. For teams pursuing ISO 42001 certification, the roadmap feeds directly into the implementation project.

Timelines vary widely by organizational complexity. Smaller companies may need 6 to 12 months to close gaps for certification. Large enterprises with sprawling AI deployments typically need 18 to 24 months.

Common gaps that surface

Certain gaps appear consistently, based on practitioner experience across industries.

No formal AI inventory

Companies frequently discover they have no clear view of what AI systems are in production, who owns them, or what data they process. Without an inventory, every other governance activity runs on incomplete information.

Absent or informal risk assessment

Risk decisions about AI systems are often made ad hoc, without a documented methodology. Teams may assess risks informally but lack the structured processes and documentation that regulators and auditors expect.

Weak data governance

Documentation of training-data quality, lineage, and consent is commonly missing or inconsistent. Data problems are the most common root cause of AI failures, and data governance is a core requirement of both ISO 42001 and the EU AI Act.

No human oversight mechanisms

Automated AI decisions frequently lack defined escalation paths, override capabilities, or human review triggers. For high-risk systems, the EU AI Act explicitly requires human oversight.

Policy-practice disconnect

AI governance policies exist on paper but never make it into development pipelines. Very few teams integrate AI compliance reviews directly into their CI/CD processes. The distance between stated policy and actual practice is one of the most common and most dangerous findings.

Lack of incident procedures

There is often no defined process for detecting, reporting, investigating, and fixing AI-related failures or harms. Under the EU AI Act, incident reporting obligations apply to both providers and deployers of high-risk AI systems.

Missing third-party oversight

When using vendor-provided or open-source AI models, companies often lack governance processes for assessing and managing third-party AI risks. Contracts may not include adequate audit rights, performance requirements, or incident reporting obligations.

AI governance maturity models

Maturity models offer a framework for understanding where a company currently stands and what progression looks like:

Level 1 — Ad hoc: No formal AI governance. Decisions are reactive and AI is deployed without structured oversight.

Level 2 — Developing: Some policies exist but governance is siloed and inconsistent across departments. Individual teams may follow good practices without any organizational standard behind them.

Level 3 — Defined: Documented processes are in place. Roles are assigned. Gap analysis has been conducted and the company understands its current state with a remediation plan.

Level 4 — Managed: Governance is woven into development workflows. Metrics are tracked and regular audits verify compliance. AI risk management is part of enterprise risk management.

Level 5 — Optimized: Continuous improvement processes run on their own. External audit readiness is maintained. Governance functions as a competitive advantage rather than a cost center.

Most enterprises currently fall between Level 2 and Level 3. The gap analysis itself is often what moves a company from Level 2 to Level 3.

Real-world examples

Financial services

Banks and financial companies often run governance gap analyses when introducing new risk-scoring models. If the model lacks transparency or bias controls, a gap analysis flags the absence of documentation, fairness audits, or human override capabilities. For regulated institutions, these gaps can directly affect their regulatory standing.

Healthcare

A healthtech startup building a diagnostic AI without a risk classification, bias assessment, or external review process benefits from early gap analysis. Catching these gaps before seeking certification or entering partnerships avoids costly rework and regulatory delays. Under the EU AI Act, most health AI applications are classified as high-risk, making the analysis especially important.

Enterprise AI adoption

Large companies deploying AI across multiple business units often find through gap analysis that different teams have adopted inconsistent governance practices. One team may have careful documentation and testing, while another deploys models with minimal oversight. The analysis surfaces these inconsistencies and makes standardization possible.

Integration with frameworks and standards

ISO/IEC 42001

Gap analysis is formally the first phase of the ISO 42001 certification journey. Its output feeds directly into the Statement of Applicability (defining which Annex A controls apply), the implementation project plan, and the internal audit scope. Certification auditors typically require 75 to 100 documented artifacts as evidence of control implementation.

NIST AI RMF

The NIST AI RMF's four functions (Govern, Map, Measure, Manage) provide an alternative assessment framework, particularly useful for teams not pursuing ISO 42001 certification. The Map function specifically involves identifying AI systems, stakeholders, and risk factors, which overlaps directly with gap analysis work.

EU AI Act

Gap analysis against EU AI Act requirements focuses on risk classification, documentation obligations, conformity assessment procedures, human oversight mechanisms, and post-market monitoring plans. For companies with high-risk AI systems, the analysis is a prerequisite for demonstrating compliance.

Best practices for AI gap analysis

  • Involve diverse roles. Governance needs input from data scientists, product managers, legal, compliance, risk management, and operations. No single team has full visibility into how AI is actually used.

  • Inspect live systems, not just policy documents. Systems sometimes behave differently than the documentation says. Walk through actual development workflows, deployment processes, and monitoring practices alongside written policies.

  • Prioritize by impact. Some issues are technical debt; others are compliance-critical. Direct remediation effort toward the highest-impact gaps first.

  • Assign owners and follow up. Gap analysis without follow-through creates a false sense of security. Track whether improvements are actually implemented.

  • Repeat the process. Annual assessments are a minimum. Additional assessments should follow regulatory changes, major organizational shifts, new AI deployments, or incidents.

  • Keep thorough records. Findings, remediation plans, and progress tracking need to be documented in a format suitable for regulatory inspection and external audit.

FAQ

What is the main benefit of gap analysis in AI governance?

It reveals where current AI practices fall short of expected norms or standards, allowing early corrections before issues grow into regulatory violations, operational failures, or public trust incidents.

How often should gap analysis be done?

At minimum, once a year. But it is also important after major AI deployments, regulatory changes, organizational restructuring, or incidents involving model failure or bias. Continuous monitoring between formal assessments helps catch emerging gaps.

Can startups use gap analysis effectively?

Yes. It does not need to be formal or expensive. Even a structured checklist or internal audit against the most relevant standards can show an early-stage company what it is missing. Starting with a lightweight assessment establishes governance habits that scale as the company grows.

Which standards are most relevant for gap analysis?

ISO/IEC 42001 has the most detailed control framework with 38 specific controls across nine objectives. The NIST AI RMF offers a complementary risk-focused methodology. The EU AI Act sets the regulatory baseline for anyone operating in or selling into the EU. Sector-specific guidelines from financial regulators, healthcare authorities, and professional associations add requirements for particular industries.

Who should participate in AI governance gap analysis?

Include representatives from AI development, legal, compliance, risk management, IT security, business units using AI, internal audit, and executive leadership. External consultants can provide independent perspective and specialized expertise. Broad participation ensures gaps are found across all governance dimensions.

How do you prioritize gaps identified in the analysis?

Rank by regulatory exposure, potential harm, business impact, and remediation feasibility. Address compliance gaps with near-term enforcement deadlines first. Watch for dependencies, since some gaps may block resolution of others. Build a remediation roadmap with realistic timelines, separating quick wins from longer-term structural changes.

Implement gap analysis for AI governance in your organization

Get hands-on with VerifyWise's open-source AI governance platform
