Core AI Governance Policies

Model Approval and Release Policy

Describes the approvals required before AI models are promoted to production.

Owner: VP of Engineering

Purpose

This policy defines the go/no-go criteria and governance checkpoints that must be satisfied before any AI model or major model update is released into a production environment. It establishes a consistent control surface so business stakeholders, compliance, and engineering all rely on the same playbook when evaluating release readiness.

Scope

Applies to all AI and ML assets owned or operated by the organization, including first-party models, fine-tuned foundation models, rules-based ensembles, and vendor-supplied models that are embedded into customer experiences or internal decisioning workflows.

  • Cloud-hosted and on-prem inference services
  • Batch and streaming scoring jobs
  • Embedded vendor models where we control deployment cadence
  • Emergency fixes and routine version bumps

Definitions

  • Model Release: A planned deployment of a net-new model or any change that materially impacts outputs, controls, or infrastructure.
  • Model Owner: Business or technical stakeholder accountable for lifecycle reporting.
  • Change Advisory Group (CAG): Cross-functional committee that adjudicates approvals when high-risk controls are triggered.

Policy

No AI model may enter production without documented evidence that all required controls (risk, quality, security, privacy, and legal) have been satisfied. Release approvals must be recorded in the model inventory with a digitally signed attestation. Any model bypassing this process will be automatically rolled back or disabled until remediation is complete.
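The attestation requirement above can be made concrete in code. The sketch below is a minimal illustration, assuming an HMAC-based signature and an in-memory record; the function name `signed_attestation`, the control names, and the key handling are all hypothetical, and a real inventory would fetch the signing key from a managed secret store rather than hard-coding it.

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"replace-with-managed-secret"  # placeholder; use a KMS/secret manager in practice


def signed_attestation(model_id: str, version: str, controls: dict[str, bool]) -> dict:
    """Issue a signed release attestation only when every control is satisfied.

    `controls` maps control names (risk, quality, security, privacy, legal)
    to their pass/fail status. Any unmet control blocks the attestation.
    """
    unmet = [name for name, ok in controls.items() if not ok]
    if unmet:
        raise ValueError(f"Controls not satisfied: {unmet}")
    record = {"model_id": model_id, "version": version, "controls": controls}
    # Sign a canonical (sorted-key) JSON serialization of the evidence.
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record
```

Storing the signature alongside the record lets an auditor re-serialize the evidence and verify it was not altered after approval.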

Roles and Responsibilities

The Model Owner submits release requests, keeps evidence current, and presents playbacks to the governing council. The VP of Engineering (or delegate) confirms technical readiness. Compliance and Security review control attestations for high-risk classifications. The CAG arbitrates disagreements and approves waivers.

Procedures

Release requests must follow the steps below. Each step is logged in the inventory record and blocked until the previous step is complete.

  1. Intake: Model Owner opens a release ticket referencing the model ID, version, intended deployment window, and business justification.
  2. Evidence upload: Attach risk assessment, QA summary, monitoring plan, rollback playbook, and security/privacy attestations.
  3. Peer playback: Present results to the engineering peer review and capture action items.
  4. Governance sign-off: Compliance and Security approve or request changes. High-risk models escalate to the CAG for final decision.
  5. Release window lock: Scheduling team confirms the deployment slot; communications go out to business stakeholders.
  6. Post-release validation: Within 24 hours the Model Owner verifies health metrics and closes the ticket, or triggers rollback if KPIs drift.
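The "each step blocked until the previous step is complete" rule is simple to enforce in a ticketing integration. The sketch below is illustrative only, assuming an in-memory ticket object; the `Step` names and `ReleaseTicket` class are hypothetical, not part of any actual inventory system.

```python
from enum import Enum, auto


class Step(Enum):
    """Release procedure steps in their required order (illustrative names)."""
    INTAKE = auto()
    EVIDENCE_UPLOAD = auto()
    PEER_PLAYBACK = auto()
    GOVERNANCE_SIGNOFF = auto()
    RELEASE_WINDOW_LOCK = auto()
    POST_RELEASE_VALIDATION = auto()


class ReleaseTicket:
    """Logs completed steps and blocks any step whose predecessor is still open."""

    def __init__(self, model_id: str, version: str):
        self.model_id = model_id
        self.version = version
        self.completed: list[Step] = []

    def complete(self, step: Step) -> None:
        # The next permissible step is the first one not yet completed.
        expected = list(Step)[len(self.completed)]
        if step is not expected:
            raise RuntimeError(
                f"{step.name} is blocked until {expected.name} is complete"
            )
        self.completed.append(step)


ticket = ReleaseTicket("churn-model", "2.1.0")
ticket.complete(Step.INTAKE)
ticket.complete(Step.EVIDENCE_UPLOAD)
# ticket.complete(Step.RELEASE_WINDOW_LOCK)  # would raise: PEER_PLAYBACK still open
```

Encoding the ordering in one enum keeps the gate logic in a single place, so adding or reordering a step is a one-line change.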

Exceptions

Temporary waivers may be granted only when documented risk is acceptable and a clear remediation path exists. Waivers must specify duration (maximum 30 days), compensating controls, and CAG sponsor. All waivers auto-expire and trigger reminders seven days before expiration.
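The 30-day cap and seven-day reminder window translate directly into date arithmetic. This is a minimal sketch under those two policy constants; the function name `waiver_dates` is hypothetical, and a real system would also persist the CAG sponsor and compensating controls with the waiver record.

```python
from datetime import date, timedelta

MAX_WAIVER_DAYS = 30     # policy maximum waiver duration
REMINDER_LEAD_DAYS = 7   # reminder fires this many days before expiry


def waiver_dates(granted_on: date, duration_days: int) -> tuple[date, date]:
    """Return (expiry, reminder) dates, rejecting durations over the policy cap."""
    if not 0 < duration_days <= MAX_WAIVER_DAYS:
        raise ValueError(f"Waiver duration must be 1-{MAX_WAIVER_DAYS} days")
    expiry = granted_on + timedelta(days=duration_days)
    reminder = expiry - timedelta(days=REMINDER_LEAD_DAYS)
    return expiry, reminder


expiry, reminder = waiver_dates(date(2024, 3, 1), 30)
# expiry = 2024-03-31, reminder = 2024-03-24
```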

Review Cadence

This policy is reviewed every six months by Engineering Governance and GRC to align with ISO 42001 control updates, EU AI Act delegated acts, and internal audit findings. Release metrics (approvals denied, exceptions granted, rollbacks executed) are reported quarterly to the risk committee.

References

ISO/IEC 42001:2023 Clause 8 (Operational planning and control)

EU AI Act Articles 9–15 (Risk, data, technical documentation)

Internal documents: AI Quality Assurance Policy, Model Validation and Testing SOP, Change Management SOP