
Model Approval and Release Policy

Defines the approvals, evidence, and conditions required before an AI model is promoted to production.

1. Purpose

This policy establishes the gatekeeping process for AI model releases at [Organization Name]. No AI model may be deployed to production without passing the required reviews, producing the required evidence, and obtaining the required approvals. This prevents untested, undocumented, or unapproved models from reaching users or influencing decisions.

2. Scope

This policy applies to:

  • All new AI model deployments to production environments.
  • All model updates, retraining, or fine-tuning promoted to production.
  • All third-party AI models activated for production use.
  • All changes to model configuration, parameters, or data pipelines that affect production behavior.

Excluded: experiments in sandboxed environments that do not process real data or affect real users.

3. Release tiers

The depth of the approval process depends on the risk classification and the nature of the change:

| Release type | Description | Approval required |
| --- | --- | --- |
| New model (high-risk) | First deployment of a model classified as high-risk | AI Governance Committee |
| New model (medium-risk) | First deployment of a model classified as medium-risk | AI Governance Lead + Model Owner |
| New model (low-risk) | First deployment of a model classified as low-risk | Model Owner |
| Major update | Architecture change, new training data source, or change to risk classification | Same as new model for the risk tier |
| Minor update | Retraining with the same data pipeline, hyperparameter tuning, bug fix | Model Owner (peer review for high-risk) |
| Configuration change | Threshold adjustment, prompt update, feature flag toggle | Model Owner |
| Third-party model activation | New vendor AI service going live | AI Governance Lead + Security + Legal |
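Teams that automate release gating sometimes encode this tier matrix so a deployment pipeline can look up the required sign-offs. A minimal sketch, assuming hypothetical identifiers (the keys, tier names, and function below are illustrative, not mandated by this policy):

```python
# Illustrative encoding of the Section 3 approval matrix. All
# identifiers are hypothetical examples, not part of this policy.
APPROVAL_MATRIX = {
    ("new_model", "high"): ["AI Governance Committee"],
    ("new_model", "medium"): ["AI Governance Lead", "Model Owner"],
    ("new_model", "low"): ["Model Owner"],
    ("minor_update", "high"): ["Model Owner", "Peer Reviewer"],
    ("minor_update", "medium"): ["Model Owner"],
    ("minor_update", "low"): ["Model Owner"],
    ("config_change", "any"): ["Model Owner"],
    ("third_party_activation", "any"): ["AI Governance Lead", "Security", "Legal"],
}

def required_approvers(release_type: str, risk_tier: str) -> list[str]:
    """Return the approvers required for a release, per Section 3."""
    if release_type == "major_update":
        # Major updates follow the same approval path as a new model
        # of the same risk tier.
        return required_approvers("new_model", risk_tier)
    key = (release_type, risk_tier)
    if key not in APPROVAL_MATRIX:
        # Some release types require the same approvers at every tier.
        key = (release_type, "any")
    return APPROVAL_MATRIX[key]
```

Encoding the matrix as data rather than branching logic keeps the pipeline aligned with this table: a policy change is a one-line edit to the mapping.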

4. Pre-release checklist

Before requesting approval, the Model Owner must confirm and document the following:

4.1 Documentation

  • Model card completed (purpose, architecture, training data, limitations, intended use, known biases).
  • Data sheet for training and evaluation datasets.
  • Risk assessment completed and entered in the risk register.
  • Change log documenting what changed from the previous version (for updates).

4.2 Testing evidence

  • Validation test report with pass/fail results against pre-defined thresholds (per Model Validation and Testing Policy).
  • Bias and fairness test results (high and medium-risk systems).
  • Security test results including adversarial and prompt injection testing (where applicable).
  • Independent validation report (high-risk systems only).

4.3 Compliance

  • Regulatory mapping completed: which obligations apply and how they are met.
  • Fundamental rights impact assessment (FRIA) completed (high-risk systems deployed in the EU).
  • Data protection impact assessment completed (where required by GDPR Article 35).
  • Transparency and user notice requirements documented.

4.4 Operational readiness

  • Monitoring configured: performance metrics, drift detection, alerting thresholds.
  • Incident response plan documented and reviewed.
  • Rollback procedure tested and confirmed working.
  • On-call or support ownership assigned.
  • Capacity planning completed (expected load, scaling approach).
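The checklist above can back an automated gate that blocks an approval request until every required item is confirmed. A sketch under the assumption that items are tracked as simple flags (the item names and tier sets below are illustrative, not an exhaustive encoding of Section 4):

```python
# Illustrative pre-release gate: a request cannot proceed to approval
# until all checklist items for the risk tier are confirmed. Item
# names are hypothetical examples drawn from Section 4.
BASE_ITEMS = {
    "model_card",
    "risk_assessment",
    "validation_report",
    "monitoring_configured",
    "rollback_tested",
}
TIER_ITEMS = {
    "high": {"bias_fairness_results", "independent_validation"},
    "medium": {"bias_fairness_results"},
    "low": set(),
}

def missing_items(risk_tier: str, confirmed: set[str]) -> set[str]:
    """Return checklist items still unconfirmed for this release."""
    return (BASE_ITEMS | TIER_ITEMS[risk_tier]) - confirmed

def ready_for_approval(risk_tier: str, confirmed: set[str]) -> bool:
    """True once every required item for the tier is confirmed."""
    return not missing_items(risk_tier, confirmed)
```

Reporting the missing items, rather than a bare pass/fail, tells the Model Owner exactly what evidence is still outstanding.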

5. Approval process

Releases follow a five-step process:

Step 1: Request

The Model Owner submits a release request through the governance portal with the completed pre-release checklist and all supporting evidence attached.

Step 2: Review

The approver(s) review the evidence. For high-risk releases, the AI Governance Committee reviews the request at its next meeting (or an ad-hoc session for urgent cases). Reviewers may request additional testing or documentation before approving.

Step 3: Decision

  • Approved: Release may proceed. Approval is logged with approver name, date, and any conditions.
  • Approved with conditions: Release may proceed but specific conditions must be met within a defined period (e.g., "deploy but complete fairness audit within 30 days").
  • Rejected: Release is blocked. Rejection reason documented. Model Owner must address the issues and resubmit.

Step 4: Deployment

After approval, the deployment follows the organization's change management process. The deployment date and deployer are recorded.

Step 5: Post-deployment confirmation

Within 48 hours of deployment, the Model Owner confirms:

  • The model is operating within expected parameters.
  • Monitoring is active and producing data.
  • No immediate issues detected.

6. Emergency releases

When an urgent fix is needed (safety issue, security vulnerability, or regulatory deadline):

  • The Model Owner may deploy with verbal approval from the AI Governance Lead.
  • Written approval and full documentation must follow within 5 business days.
  • Emergency releases are logged and reviewed at the next Committee meeting.
  • Emergency releases must not be used to bypass normal governance for convenience.
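The 5-business-day window for written approval can be computed mechanically. A minimal sketch assuming weekends are the only non-business days (adjust for your holiday calendar; the function name is illustrative):

```python
# Illustrative deadline calculation for Section 6: written approval
# must follow an emergency release within 5 business days. Counts
# Monday-Friday only; holidays are out of scope for this sketch.
from datetime import date, timedelta

def written_approval_deadline(deployed: date, business_days: int = 5) -> date:
    """Return the last date by which written approval is due."""
    current = deployed
    remaining = business_days
    while remaining > 0:
        current += timedelta(days=1)
        if current.weekday() < 5:  # 0-4 = Monday-Friday
            remaining -= 1
    return current
```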

7. Rollback and suspension

  • Any approved model may be suspended by the AI Governance Lead if post-deployment evidence indicates unacceptable risk.
  • Rollback to the previous version must be executable within the timeframe defined in the operational readiness checklist.
  • Suspension and rollback decisions are logged with rationale and reviewed by the Committee.

8. Audit trail

The following records are maintained for each release:

  • Release request with pre-release checklist.
  • All evidence documents (test reports, risk assessment, FRIA, DPIA).
  • Approval decision with approver identity, date, and any conditions.
  • Deployment record (date, deployer, environment).
  • Post-deployment confirmation.
  • Any rollback or suspension records.

Records are retained per the organization's document retention policy and made available for internal and external audit.
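An audit record per release can be captured as an immutable structure. A sketch with hypothetical field names (map them to your governance portal's actual schema):

```python
# Illustrative Section 8 audit record. Field names are hypothetical;
# frozen=True prevents mutation after the decision is logged.
from dataclasses import dataclass

@dataclass(frozen=True)
class ReleaseAuditRecord:
    release_id: str
    risk_tier: str
    decision: str            # "approved" | "approved_with_conditions" | "rejected"
    approver: str
    decision_date: str       # ISO 8601 date
    conditions: tuple[str, ...] = ()
    evidence_refs: tuple[str, ...] = ()  # links to test reports, FRIA, DPIA

# Example: a conditional approval as described in Step 3.
record = ReleaseAuditRecord(
    release_id="rel-042",
    risk_tier="high",
    decision="approved_with_conditions",
    approver="AI Governance Committee",
    decision_date="2025-01-15",
    conditions=("complete fairness audit within 30 days",),
)
```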

9. Roles and responsibilities

| Role | Release responsibilities |
| --- | --- |
| Model Owner | Prepares release request, completes checklist, coordinates testing, deploys, confirms post-deployment. |
| AI Governance Lead | Reviews medium-risk requests, coordinates Committee reviews, approves emergency releases, triggers rollbacks. |
| AI Governance Committee | Approves high-risk releases, reviews emergency releases retrospectively. |
| Security | Reviews security test evidence, participates in third-party activation approvals. |
| Legal | Reviews compliance evidence, participates in third-party activation approvals. |

10. Regulatory alignment

  • EU AI Act: Article 9 (risk management system), Article 16 (provider obligations), Article 43 (conformity assessment before market placement).
  • ISO/IEC 42001: Clause 8.2 (AI system realization), Clause 8.4 (verification and validation).
  • NIST AI RMF: MANAGE function (MG-2: deployment decisions, MG-3: post-deployment monitoring).

11. Review

This policy is reviewed annually or when triggered by changes to the release process, deployment tooling, or patterns in release failures.

Document control

| Field | Value |
| --- | --- |
| Policy owner | [AI Governance Lead] |
| Approved by | [AI Governance Committee] |
| Effective date | [Date] |
| Next review date | [Date + 12 months] |
| Version | 1.0 |
| Classification | Internal |
