
AI Governance Policy

Umbrella policy establishing the organization's approach to governing AI systems across their lifecycle.

1. Purpose

This policy establishes how [Organization Name] governs the development, procurement, deployment, and retirement of artificial intelligence systems. It creates the governance structure, defines decision rights, sets lifecycle gates, and ensures that AI systems operate within acceptable risk boundaries while meeting regulatory obligations.

2. Scope

This policy applies to:

  • All AI and machine learning systems developed internally or procured from third parties.
  • All employees, contractors, and partners who develop, deploy, operate, or oversee AI systems.
  • All stages of the AI lifecycle: ideation, data collection, development, validation, deployment, monitoring, and retirement.
  • Both general-purpose and domain-specific AI systems, including large language models, decision-support tools, predictive analytics, and automated processing systems.

Out of scope: Standard business software that does not use machine learning or generative AI capabilities (e.g., rule-based automation, traditional analytics dashboards).

3. Definitions

  • AI system: A system that uses machine learning, deep learning, or generative AI techniques to produce outputs such as predictions, recommendations, decisions, or content.
  • High-risk AI system: An AI system that makes or materially influences decisions affecting individuals' rights, health, safety, employment, financial standing, or legal status. Also includes systems classified as high-risk under the EU AI Act Annex III.
  • AI lifecycle: The stages an AI system passes through: ideation, data collection, development, validation, deployment, monitoring, and retirement.
  • Model owner: The individual accountable for an AI system's performance, compliance, and risk posture throughout its lifecycle.
  • AI Governance Committee: The cross-functional body that reviews high-risk AI use cases, approves exceptions, and sets governance standards.

4. Governance structure

Governance responsibilities are distributed across the roles and bodies described below.

4.1 AI Governance Committee

The organization establishes an AI Governance Committee composed of representatives from:

  • Executive leadership (sponsor)
  • Legal and compliance
  • Information security
  • Data privacy
  • Engineering / data science
  • Business operations
  • Risk management

The Committee meets monthly (or as needed for escalations) and is responsible for:

  • Approving or rejecting high-risk AI use cases.
  • Setting and updating AI governance standards and thresholds.
  • Reviewing aggregate AI risk posture and incident trends.
  • Advising on regulatory changes that affect AI operations.

4.2 AI Governance Lead

A designated AI Governance Lead coordinates day-to-day governance activities, maintains the AI inventory, tracks compliance status, and is the primary escalation point for AI-related concerns.
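The AI inventory the Governance Lead maintains can be sketched as one record per system. A minimal Python illustration, assuming hypothetical field names (this policy does not mandate a schema):

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical sketch of one AI inventory entry; the field names
# are illustrative assumptions, not a prescribed format.
@dataclass
class InventoryEntry:
    system_name: str
    model_owner: str
    risk_level: str                # "high" | "medium" | "low"
    lifecycle_stage: str           # e.g. "development", "deployed", "retired"
    registered_on: date
    compliance_notes: list[str] = field(default_factory=list)

# Example registration of a deployed medium-risk system.
entry = InventoryEntry(
    system_name="invoice-classifier",
    model_owner="jane.doe",
    risk_level="medium",
    lifecycle_stage="deployed",
    registered_on=date(2024, 1, 15),
)
```

Keeping one structured record per system makes compliance tracking and escalation routing a matter of filtering the inventory rather than chasing documents.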

4.3 Model owners

Each AI system must have an assigned model owner who is accountable for:

  • Maintaining documentation (model card, data sheet, risk assessment).
  • Confirming the system passes required validation before deployment.
  • Monitoring the system in production and responding to incidents.
  • Initiating retirement when the system is no longer fit for purpose.

4.4 All employees

Every employee is expected to:

  • Follow this policy and related AI procedures.
  • Report unauthorized or unvetted AI tool usage.
  • Complete required AI awareness training.
  • Escalate concerns about AI system behavior through established channels.

5. AI risk classification

All AI systems must be classified before development or procurement begins.

  • High. Criteria: affects rights, health, safety, employment, financial standing, or legal status; operates in a regulated domain; or falls under EU AI Act Annex III. Governance requirements: full Committee review, fundamental rights impact assessment (FRIA), mandatory testing, post-market monitoring, and an incident response plan.
  • Medium. Criteria: influences business decisions or customer experience but does not directly affect individual rights. Governance requirements: model owner review, documented risk assessment, and periodic monitoring.
  • Low. Criteria: internal productivity tools, content assistance, or analytics with human review. Governance requirements: registration in the AI inventory and basic documentation.

Systems may be reclassified as their use evolves. Any change in risk classification triggers a new review.
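The three-tier criteria above can be expressed as a small decision function. A Python sketch, assuming boolean criteria flags as inputs (the flag names are illustrative, not defined by this policy):

```python
# Illustrative sketch of the Section 5 classification logic.
# High-risk criteria take precedence; anything that influences
# business decisions without touching individual rights is medium.
def classify_risk(affects_individual_rights: bool,
                  regulated_domain: bool,
                  annex_iii_listed: bool,
                  influences_business_decisions: bool) -> str:
    """Return 'high', 'medium', or 'low' per the policy's criteria."""
    if affects_individual_rights or regulated_domain or annex_iii_listed:
        return "high"
    if influences_business_decisions:
        return "medium"
    return "low"  # internal productivity / analytics with human review

# A hiring screener affects employment decisions, so it is high risk.
print(classify_risk(True, False, False, False))  # high
```

Because any single high-risk criterion is sufficient, the checks are ordered from most to least restrictive; reclassification is simply re-running the function with updated flags.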

6. AI lifecycle gates

AI systems must pass through the following gates. Each gate requires documented evidence before proceeding.

Gate 1: Intake and classification

  • Business justification documented.
  • Risk classification assigned.
  • Data requirements and privacy impact identified.
  • Model owner assigned.

Gate 2: Development and validation

  • Training data sourced, documented, and reviewed for bias.
  • Model validated against acceptance criteria.
  • Security review completed.
  • Bias and fairness testing completed for high-risk systems.

Gate 3: Pre-deployment approval

  • Independent validation completed (high-risk systems).
  • Risk assessment finalized and entered in risk register.
  • Compliance review completed (regulatory mapping).
  • Approval from model owner and, for high-risk systems, the AI Governance Committee.

Gate 4: Deployment

  • Monitoring configured (performance, drift, safety).
  • Incident response plan documented.
  • User notice and transparency requirements met.
  • Rollback procedure tested.

Gate 5: Ongoing monitoring

  • Regular performance reviews against agreed metrics.
  • Periodic bias and fairness re-evaluation.
  • Regulatory change impact assessment.
  • Revalidation triggered by material changes to the model, data, or operating context.

Gate 6: Retirement

  • Stakeholders notified.
  • Data retained or disposed of per retention policy.
  • System decommissioned and removed from inventory.
  • Lessons learned documented.
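The evidence requirement at each gate can be modeled as a checklist that must be fully satisfied before proceeding. A minimal Python sketch using Gate 1 as the example (the item names and data layout are illustrative assumptions, not a prescribed format):

```python
# Hypothetical checklist for Gate 1 (intake and classification);
# item names paraphrase the Section 6 evidence requirements.
GATE_1_INTAKE = [
    "business_justification",
    "risk_classification",
    "data_and_privacy_impact",
    "model_owner_assigned",
]

def gate_passed(required: list[str], evidence: dict[str, bool]) -> bool:
    """A gate passes only when every required item has documented evidence."""
    return all(evidence.get(item, False) for item in required)

evidence = {
    "business_justification": True,
    "risk_classification": True,
    "data_and_privacy_impact": True,
    "model_owner_assigned": False,  # still missing
}
print(gate_passed(GATE_1_INTAKE, evidence))  # False
```

Treating each gate as an all-or-nothing checklist makes "documented evidence before proceeding" auditable: the system advances only when every item is affirmatively recorded, and a missing item blocks by default.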

7. Regulatory alignment

This policy is designed to support compliance with:

  • EU AI Act: risk classification, conformity assessment, transparency obligations, fundamental rights impact assessment, post-market monitoring, registration of high-risk systems.
  • ISO/IEC 42001: AI management system, risk treatment, leadership commitment, operational controls, performance evaluation, continual improvement.
  • NIST AI RMF: Govern, Map, Measure, and Manage functions across the AI lifecycle.
  • ISO/IEC 27001: information security controls applied to AI data and systems.

The AI Governance Lead maintains a mapping between this policy's requirements and applicable regulatory obligations, updated when regulations change.
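Such a requirement-to-framework mapping can be kept as simple structured data so it is easy to query and update when regulations change. A Python sketch with a few example entries drawn from the table above (the layout and keys are assumptions, not a VerifyWise format):

```python
# Illustrative mapping from policy requirements to the frameworks
# that impose them; entries paraphrase the Section 7 table.
REGULATORY_MAP = {
    "risk_classification": ["EU AI Act", "NIST AI RMF"],
    "post_market_monitoring": ["EU AI Act", "ISO/IEC 42001"],
    "information_security_controls": ["ISO/IEC 27001"],
}

def frameworks_for(requirement: str) -> list[str]:
    """Which frameworks impose a given policy requirement?"""
    return REGULATORY_MAP.get(requirement, [])

print(frameworks_for("risk_classification"))  # ['EU AI Act', 'NIST AI RMF']
```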

8. Exceptions

Any request to deviate from this policy must be submitted to the AI Governance Lead with:

  • A description of the exception and its justification.
  • An assessment of the additional risk introduced.
  • Compensating controls proposed.
  • A defined expiration date (exceptions are not permanent).

High-risk exceptions require AI Governance Committee approval. All exceptions are logged and reviewed at least quarterly.
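The expiration requirement can be enforced mechanically rather than by memory. A minimal Python sketch, assuming a simple date comparison (the function and field names are illustrative):

```python
from datetime import date

# Sketch of the "exceptions are not permanent" rule from Section 8:
# an exception lapses automatically once its expiry date passes.
def exception_active(expires_on: date, today: date) -> bool:
    """True while the exception's expiration date has not passed."""
    return today <= expires_on

# An exception that expired on 2025-06-30 is inactive the next day.
print(exception_active(date(2025, 6, 30), date(2025, 7, 1)))  # False
```

A quarterly review job could filter the exception log with this check to surface lapsed entries that were never formally closed.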

9. Enforcement

Non-compliance with this policy may result in:

  • Suspension of the AI system pending review.
  • Mandatory remediation with a defined timeline.
  • Escalation to executive leadership.
  • Disciplinary action proportional to the risk introduced.

The organization reserves the right to immediately suspend any AI system that poses an unacceptable risk to individuals, the organization, or regulatory standing.

10. Review and update

This policy is reviewed at least annually, or sooner when triggered by:

  • Significant regulatory changes (e.g., new EU AI Act provisions taking effect).
  • Material AI incidents within the organization.
  • Changes to the organization's AI strategy or risk appetite.
  • Findings from internal or external audits.

The AI Governance Lead is responsible for initiating the review. Updates require AI Governance Committee approval.

Document control

  • Policy owner: [AI Governance Lead]
  • Approved by: [AI Governance Committee]
  • Effective date: [Date]
  • Next review date: [Date + 12 months]
  • Version: 1.0
  • Classification: Internal
