Model lifecycle management
Track models from development through deployment and retirement.
Overview
AI model lifecycle management is the practice of governing AI systems from conception through retirement. Unlike traditional software that may remain stable for years, AI models require continuous attention — they can degrade over time, their training data may become outdated, and their real-world performance may drift from initial expectations.
Understanding where each model sits in its lifecycle is essential for effective governance. A model in development requires different oversight than one in production, and a model being retired needs careful handling to ensure continuity for the systems and users that depend on it. By tracking lifecycle phases, you can apply the right controls at the right time.
Why track model lifecycle?
- Right-sized governance: Apply controls appropriate to each phase — development needs flexibility, production needs stability
- Risk awareness: Each phase introduces different risks that require different mitigation strategies
- Resource allocation: Focus monitoring and maintenance resources on models that need them most
- Compliance evidence: Document the journey of each model for regulatory audits and reviews
- Retirement planning: Ensure orderly transitions when models are replaced or decommissioned
Lifecycle phases
VerifyWise recognizes the following phases in the AI model lifecycle, aligned with industry standards and regulatory expectations:
Problem definition and planning
Initial scoping, requirements gathering, and project planning before development begins.
Data collection and processing
Gathering, cleaning, and preparing training data with appropriate data governance.
Model development and training
Building, training, and iterating on model architecture and parameters.
Model validation and testing
Evaluating model performance, fairness, and safety before deployment.
Deployment and integration
Moving models into production environments and integrating with business processes.
Monitoring and maintenance
Ongoing observation of model performance, drift detection, and updates.
Decommissioning and retirement
Safely retiring models and managing the transition to replacement systems.
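If you mirror these phases in your own tooling, they map naturally onto an ordered enumeration. A minimal sketch in Python; the identifiers are illustrative, not VerifyWise's actual schema:

```python
from enum import Enum

class LifecyclePhase(Enum):
    """Illustrative identifiers for the seven phases above;
    VerifyWise's internal names may differ."""
    PLANNING = "problem_definition_and_planning"
    DATA = "data_collection_and_processing"
    DEVELOPMENT = "model_development_and_training"
    VALIDATION = "model_validation_and_testing"
    DEPLOYMENT = "deployment_and_integration"
    MONITORING = "monitoring_and_maintenance"
    RETIREMENT = "decommissioning_and_retirement"
```

Because `Enum` preserves definition order, `list(LifecyclePhase)` yields the phases in lifecycle order, which is convenient for progress displays.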
Project status tracking
Each AI project in VerifyWise has a status that indicates its current state in the governance workflow:
| Status | Description | Typical next step |
|---|---|---|
| Not started | Project has been registered but work has not begun | Begin development |
| In progress | Active development or implementation is underway | Submit for review |
| Under review | Project is being evaluated for compliance or approval | Address feedback |
| Completed | Project has met all requirements and is in production | Monitor performance |
| On hold | Work has been temporarily paused | Resume when ready |
| Closed | Project has been concluded or archived | — |
| Rejected | Project did not pass review and will not proceed | Revise or discontinue |
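The "typical next step" column implies a loose state machine. Below is a hypothetical sketch of how those transitions could be validated; the transition map is inferred from the table, not taken from VerifyWise's workflow engine:

```python
# Allowed project status transitions, inferred from the table above.
# This is a hypothetical model, not VerifyWise's actual workflow engine.
ALLOWED_TRANSITIONS: dict[str, set[str]] = {
    "Not started":  {"In progress", "On hold", "Closed"},
    "In progress":  {"Under review", "On hold", "Closed"},
    "Under review": {"In progress", "Completed", "Rejected"},
    "Completed":    {"Closed"},
    "On hold":      {"In progress", "Closed"},
    "Rejected":     {"In progress", "Closed"},  # revise or discontinue
    "Closed":       set(),  # terminal state
}

def can_transition(current: str, new: str) -> bool:
    """Return True if moving a project from `current` to `new` is allowed."""
    return new in ALLOWED_TRANSITIONS.get(current, set())
```

An explicit map makes invalid jumps, such as Not started straight to Completed, easy to reject and easy to audit.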
Model approval status
Independent of project status, individual models have their own approval workflow:
| Status | Meaning | Typical action |
|---|---|---|
| Pending | Awaiting governance review | Complete risk assessment |
| Approved | Authorized for production use | Deploy with monitoring |
| Restricted | Limited use cases only | Document restrictions |
| Blocked | Not authorized for use | Seek alternative models |
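Approval status works well as a deployment gate. A minimal illustrative check follows; the function and its arguments are hypothetical, and only the status strings come from the table above:

```python
def deployment_allowed(approval_status: str, use_case: str,
                       permitted_use_cases: set[str]) -> bool:
    """Hypothetical deployment gate driven by model approval status.

    'Approved' models deploy freely, 'Restricted' models deploy only
    for documented use cases, and 'Pending' or 'Blocked' models never do.
    """
    if approval_status == "Approved":
        return True
    if approval_status == "Restricted":
        return use_case in permitted_use_cases
    return False
```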
MLflow lifecycle integration
For teams using MLflow, VerifyWise imports lifecycle stage information directly from the MLflow Model Registry:
- Staging: Model is being prepared for production evaluation
- Production: Model is actively serving predictions
- Archived: Model has been retired from active use
This integration provides visibility into training timestamps, model parameters, and version history without manual data entry.
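VerifyWise's importer is internal, but the same stage, timestamp, and version data is visible through MLflow's own Python client. A short sketch, assuming a reachable tracking server and a registered model named `credit-scoring` (the name is illustrative):

```python
from mlflow.tracking import MlflowClient

# Assumes MLFLOW_TRACKING_URI points at your tracking server and a
# registered model named "credit-scoring" exists (the name is illustrative).
client = MlflowClient()

for mv in client.search_model_versions("name = 'credit-scoring'"):
    # current_stage is one of: None, Staging, Production, Archived;
    # creation_timestamp is milliseconds since the Unix epoch
    print(f"v{mv.version}: stage={mv.current_stage}, "
          f"created={mv.creation_timestamp}, run={mv.run_id}")
```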
AI risk classification
VerifyWise supports EU AI Act risk classification for projects, which influences governance requirements throughout the lifecycle:
Prohibited
AI systems banned under the EU AI Act, such as social scoring and real-time remote biometric identification in publicly accessible spaces
High risk
Systems requiring conformity assessment and ongoing monitoring
Limited risk
Systems with transparency obligations (chatbots, emotion recognition)
Minimal risk
Low-risk applications governed by voluntary codes of conduct
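In practice, the risk class selects a set of controls. An illustrative mapping follows; the control names summarize common EU AI Act obligations and are not an exhaustive legal list:

```python
# Illustrative mapping from EU AI Act risk class to governance controls.
# The control names summarize common obligations; they are not an
# exhaustive or authoritative statement of the law.
RISK_CONTROLS = {
    "prohibited": ["block deployment"],
    "high":       ["conformity assessment", "risk management system",
                   "human oversight", "post-market monitoring"],
    "limited":    ["transparency notice to users"],
    "minimal":    ["voluntary code of conduct"],
}
```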
High-risk system roles
For high-risk AI systems, VerifyWise tracks your organization's role in the AI value chain, as different roles carry different compliance obligations:
- Provider: Develops or places the AI system on the market
- Deployer: Uses an AI system under their authority
- Importer: Brings AI systems into the EU market
- Distributor: Makes AI systems available on the market
- Product manufacturer: Integrates AI into products under their own name
- Authorized representative: Acts on behalf of a non-EU provider
Lifecycle audit trail
All status changes and lifecycle transitions are automatically logged with timestamps and user attribution. This audit trail demonstrates governance oversight and is essential for regulatory compliance.
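A logged transition of this kind typically captures who changed what, and when. A minimal sketch of one entry's shape; the field names are illustrative, not VerifyWise's actual log schema:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class LifecycleEvent:
    """Illustrative shape of one audit-trail entry; the field names
    are examples, not VerifyWise's actual log schema."""
    model_id: str
    old_status: str
    new_status: str
    changed_by: str        # user attribution
    changed_at: datetime   # timestamp, recorded in UTC

event = LifecycleEvent(
    model_id="model-42",
    old_status="Pending",
    new_status="Approved",
    changed_by="jane.doe@example.com",
    changed_at=datetime.now(timezone.utc),
)
```

Making entries immutable (`frozen=True`) mirrors the append-only nature of an audit trail: records are added, never edited in place.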