
Model lifecycle management

Track models from development through deployment and retirement.

Overview

AI model lifecycle management is the practice of governing AI systems from conception through retirement. Unlike traditional software that may remain stable for years, AI models require continuous attention — they can degrade over time, their training data may become outdated, and their real-world performance may drift from initial expectations.

Understanding where each model sits in its lifecycle is essential for effective governance. A model in development requires different oversight than one in production. A model being retired needs careful attention to ensure continuity. By tracking lifecycle phases, you can apply the right controls at the right time.

Why track model lifecycle?

  • Right-sized governance: Apply controls appropriate to each phase — development needs flexibility, production needs stability
  • Risk awareness: Each phase introduces different risks that require different mitigation strategies
  • Resource allocation: Focus monitoring and maintenance resources on models that need them most
  • Compliance evidence: Document the journey of each model for regulatory audits and reviews
  • Retirement planning: Ensure orderly transitions when models are replaced or decommissioned

Lifecycle phases

VerifyWise recognizes the following phases in the AI model lifecycle, aligned with industry standards and regulatory expectations:

Problem definition and planning

Initial scoping, requirements gathering, and project planning before development begins.

Data collection and processing

Gathering, cleaning, and preparing training data with appropriate data governance.

Model development and training

Building, training, and iterating on model architecture and parameters.

Model validation and testing

Evaluating model performance, fairness, and safety before deployment.

Deployment and integration

Moving models into production environments and integrating with business processes.

Monitoring and maintenance

Ongoing observation of model performance, drift detection, and updates.

Decommissioning and retirement

Safely retiring models and managing the transition to replacement systems.
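
Teams often encode these phases as an ordered type with explicit transition rules so that governance tooling can validate phase changes. Below is a minimal, hypothetical Python sketch; the enum names and the one-step-back rework rule are illustrative assumptions, not VerifyWise's internal data model.

```python
from enum import IntEnum

class LifecyclePhase(IntEnum):
    """The seven lifecycle phases described above, in order."""
    PLANNING = 1      # Problem definition and planning
    DATA = 2          # Data collection and processing
    DEVELOPMENT = 3   # Model development and training
    VALIDATION = 4    # Model validation and testing
    DEPLOYMENT = 5    # Deployment and integration
    MONITORING = 6    # Monitoring and maintenance
    RETIREMENT = 7    # Decommissioning and retirement

def is_valid_transition(current: LifecyclePhase, target: LifecyclePhase) -> bool:
    """Allow advancing one phase, or stepping back one phase for rework
    (e.g., a validation failure sending a model back to development)."""
    return target == current + 1 or target == current - 1

# Example: a model that fails validation returns to development.
assert is_valid_transition(LifecyclePhase.VALIDATION, LifecyclePhase.DEVELOPMENT)
```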

Project status tracking

Each AI project in VerifyWise has a status that indicates its current state in the governance workflow:

Status | Description | Typical next step
Not started | Project has been registered but work has not begun | Begin development
In progress | Active development or implementation is underway | Submit for review
Under review | Project is being evaluated for compliance or approval | Address feedback
Completed | Project has met all requirements and is in production | Monitor performance
On hold | Work has been temporarily paused | Resume when ready
Closed | Project has been concluded or archived | —
Rejected | Project did not pass review and will not proceed | Revise or discontinue
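
The "typical next step" column implies a simple state machine. The sketch below shows one way such transitions could be validated; the transition map is inferred from the table for illustration and is not an exported VerifyWise configuration.

```python
# Hypothetical transition map for project statuses. Keys are current
# statuses; values are the statuses a project may move to next.
ALLOWED_TRANSITIONS: dict[str, set[str]] = {
    "Not started":  {"In progress", "On hold", "Closed"},
    "In progress":  {"Under review", "On hold", "Closed"},
    "Under review": {"In progress", "Completed", "Rejected"},
    "Completed":    {"Closed"},
    "On hold":      {"In progress", "Closed"},
    "Closed":       set(),                      # terminal state
    "Rejected":     {"In progress", "Closed"},  # revise or discontinue
}

def can_transition(current: str, target: str) -> bool:
    """Check a proposed status change against the transition map."""
    return target in ALLOWED_TRANSITIONS.get(current, set())
```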

Model approval status

Independent of project status, individual models have their own approval workflow:

Status | Meaning | Typical action
Pending | Awaiting governance review | Complete risk assessment
Approved | Authorized for production use | Deploy with monitoring
Restricted | Limited use cases only | Document restrictions
Blocked | Not authorized for use | Seek alternative models
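
Because the two workflows are independent, a deployment decision typically consults both. A hedged sketch of such a gate follows; the specific policy shown, requiring a Completed project and an Approved or Restricted model, is an assumption for illustration.

```python
def may_serve_in_production(project_status: str, model_approval: str) -> bool:
    """Illustrative gate: a model serves predictions only while its project
    is in good standing and the model itself is authorized. Restricted
    models pass the gate but need their permitted use cases documented."""
    return (
        project_status == "Completed"
        and model_approval in {"Approved", "Restricted"}
    )

# An approved model whose project was later put on hold fails the gate.
assert not may_serve_in_production("On hold", "Approved")
```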

MLflow lifecycle integration

For teams using MLflow, VerifyWise imports lifecycle stage information directly from your ML platform:

  • Staging: Model is being prepared for production evaluation
  • Production: Model is actively serving predictions
  • Archived: Model has been retired from active use

This integration provides visibility into training timestamps, model parameters, and version history without manual data entry.
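
For comparison, the same stage information can be read directly with MLflow's own Python client. This minimal sketch uses the stock MLflow client rather than VerifyWise's importer, and the tracking URI is a placeholder for your own server:

```python
from mlflow.tracking import MlflowClient

# Placeholder URI; point this at your MLflow tracking server.
client = MlflowClient(tracking_uri="http://mlflow.example.com")

for model in client.search_registered_models():
    for version in model.latest_versions:
        # current_stage is one of "None", "Staging", "Production", or
        # "Archived", the same stages VerifyWise imports.
        print(
            f"{model.name} v{version.version}: "
            f"stage={version.current_stage}, "
            f"created={version.creation_timestamp}"  # ms since epoch
        )
```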

AI risk classification

VerifyWise supports EU AI Act risk classification for projects, which influences governance requirements throughout the lifecycle:

Prohibited

AI systems banned under the EU AI Act (for example, social scoring or real-time biometric identification in public spaces)

High risk

Systems requiring conformity assessment and ongoing monitoring

Limited risk

Systems with transparency obligations (chatbots, emotion recognition)

Minimal risk

Low-risk applications subject only to voluntary codes of conduct
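
One way to wire this classification into lifecycle controls is a lookup from risk class to the obligations listed above. The structure below is a hypothetical sketch, not a VerifyWise schema:

```python
# Illustrative mapping from EU AI Act risk class to the governance
# obligations named above; the field names are assumptions.
RISK_OBLIGATIONS: dict[str, dict] = {
    "Prohibited":   {"deployable": False, "requirements": ["Banned from use"]},
    "High risk":    {"deployable": True,  "requirements": ["Conformity assessment",
                                                           "Ongoing monitoring"]},
    "Limited risk": {"deployable": True,  "requirements": ["Transparency obligations"]},
    "Minimal risk": {"deployable": True,  "requirements": ["Voluntary code of conduct"]},
}

def requirements_for(risk_class: str) -> list[str]:
    """Look up the obligations that apply to a project's risk class."""
    entry = RISK_OBLIGATIONS.get(risk_class)
    if entry is None:
        raise ValueError(f"Unknown risk class: {risk_class}")
    return entry["requirements"]
```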

High-risk system roles

For high-risk AI systems, VerifyWise tracks your organization's role in the AI value chain, as different roles carry different compliance obligations:

  • Provider: Develops or places the AI system on the market
  • Deployer: Uses an AI system under their authority
  • Importer: Brings AI systems into the EU market
  • Distributor: Makes AI systems available on the market
  • Product manufacturer: Integrates AI into products under their own name
  • Authorized representative: Acts on behalf of a non-EU provider

Lifecycle audit trail

All status changes and lifecycle transitions are automatically logged with timestamps and user attribution. This audit trail demonstrates governance oversight and is essential for regulatory compliance.
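
Conceptually, each logged transition is an immutable record carrying a timestamp and user attribution. The field names in this sketch are illustrative assumptions, not VerifyWise's stored format:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)  # frozen: entries cannot be altered after creation
class LifecycleAuditEntry:
    project_id: str
    previous_status: str
    new_status: str
    changed_by: str        # user attribution
    changed_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)  # timestamp
    )

entry = LifecycleAuditEntry(
    project_id="proj-42",
    previous_status="Under review",
    new_status="Completed",
    changed_by="jane.doe@example.com",
)
```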

Best practice
Define clear criteria for each lifecycle transition in your AI governance policy. Document who has authority to approve status changes and what evidence is required.