Model lifecycle management
Track models from development through deployment and retirement.
Overview
AI models aren't static. Their performance degrades over time, their training data goes stale, and production behavior can drift from what you measured in testing. Lifecycle management is how you keep track of where each model is and what kind of oversight it needs at that stage.
A model in development needs different governance than one serving production traffic. A model being retired needs a transition plan. VerifyWise tracks these phases so you can apply the right controls at the right time.
What lifecycle tracking gives you
- Proportional governance: Development needs flexibility; production needs stability. You apply different controls depending on the phase.
- Phase-specific risks: A model in testing has different risks than one in production. Tracking the phase tells you what to watch for.
- Audit trail: Regulators want to see how a model went from development to production. The lifecycle record provides that.
- Retirement planning: When you replace a model, there needs to be a transition plan. Tracking the phase makes that visible.
Lifecycle phases
VerifyWise tracks these phases; a sketch of how they might be modeled in code follows the phase descriptions:
Problem definition and planning
Initial scoping, requirements gathering, and project planning before development begins.
Data collection and processing
Gathering, cleaning, and preparing training data with appropriate data governance.
Model development and training
Building, training, and iterating on model architecture and parameters.
Model validation and testing
Evaluating model performance, fairness, and safety before deployment.
Deployment and integration
Moving models into production environments and integrating with business processes.
Monitoring and maintenance
Ongoing observation of model performance, drift detection, and updates.
Decommissioning and retirement
Safely retiring models and managing the transition to replacement systems.
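If you mirror these phases in your own tooling, an ordered enum is a natural fit. The sketch below is a minimal, hypothetical representation; the phase names are assumptions for illustration, not VerifyWise identifiers.

```python
from enum import IntEnum
from typing import Optional

# Illustrative ordering of the lifecycle phases described above.
class LifecyclePhase(IntEnum):
    PROBLEM_DEFINITION = 1
    DATA_COLLECTION = 2
    MODEL_DEVELOPMENT = 3
    VALIDATION_AND_TESTING = 4
    DEPLOYMENT = 5
    MONITORING = 6
    DECOMMISSIONING = 7

def next_phase(phase: LifecyclePhase) -> Optional[LifecyclePhase]:
    """Return the phase that normally follows, or None once the model is retired."""
    if phase is LifecyclePhase.DECOMMISSIONING:
        return None
    return LifecyclePhase(phase + 1)
```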
Project status tracking
Each AI project in VerifyWise has a status that indicates where it sits in the governance workflow (sketched as a data type after the table):
| Status | Description | Typical next step |
|---|---|---|
| Not started | Project has been registered but work has not begun | Begin development |
| In progress | Active development or implementation is underway | Submit for review |
| Under review | Project is being evaluated for compliance or approval | Address feedback |
| Completed | Project has met all requirements and is in production | Monitor performance |
| On hold | Work has been temporarily paused | Resume when ready |
| Closed | Project has been concluded or archived | — |
| Rejected | Project did not pass review and will not proceed | Revise or discontinue |
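As a rough sketch, assuming you replicate these statuses in your own scripts or integrations, they map naturally to an enum plus a lookup for the typical next step. The names below are illustrative, not the product's API.

```python
from enum import Enum

# Project statuses from the table above, as an illustrative enum.
class ProjectStatus(Enum):
    NOT_STARTED = "Not started"
    IN_PROGRESS = "In progress"
    UNDER_REVIEW = "Under review"
    COMPLETED = "Completed"
    ON_HOLD = "On hold"
    CLOSED = "Closed"
    REJECTED = "Rejected"

# Typical next step for each status, mirroring the table.
TYPICAL_NEXT_STEP = {
    ProjectStatus.NOT_STARTED: "Begin development",
    ProjectStatus.IN_PROGRESS: "Submit for review",
    ProjectStatus.UNDER_REVIEW: "Address feedback",
    ProjectStatus.COMPLETED: "Monitor performance",
    ProjectStatus.ON_HOLD: "Resume when ready",
    ProjectStatus.CLOSED: None,
    ProjectStatus.REJECTED: "Revise or discontinue",
}
```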
Model approval status
Independent of project status, individual models have their own approval workflow (see the sketch after the table):
| Status | Meaning | Typical action |
|---|---|---|
| Pending | Awaiting governance review | Complete risk assessment |
| Approved | Authorized for production use | Deploy with monitoring |
| Restricted | Limited use cases only | Document restrictions |
| Blocked | Not authorized for use | Seek alternative models |
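A common way to use these states is as a deployment gate. The sketch below assumes a hypothetical `may_deploy` check and is not VerifyWise's logic.

```python
from enum import Enum

# Model approval states from the table above, as an illustrative enum.
class ApprovalStatus(Enum):
    PENDING = "Pending"
    APPROVED = "Approved"
    RESTRICTED = "Restricted"
    BLOCKED = "Blocked"

def may_deploy(status: ApprovalStatus, restriction_documented: bool = False) -> bool:
    """Gate deployment on approval: approved models ship freely;
    restricted models ship only when their limited use case is documented."""
    if status is ApprovalStatus.APPROVED:
        return True
    if status is ApprovalStatus.RESTRICTED:
        return restriction_documented
    return False
```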
MLflow lifecycle integration
For teams using MLflow, VerifyWise imports lifecycle stage information directly from your ML platform:
- Staging: Model is being prepared for production evaluation
- Production: Model is actively serving predictions
- Archived: Model has been retired from active use
This integration provides visibility into training timestamps, model parameters, and version history without manual data entry.
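For reference, the MLflow Model Registry exposes this information through its Python client. The snippet below is a rough sketch of the kind of data involved, with a placeholder tracking URI and model name; it is not VerifyWise's import code.

```python
from mlflow.tracking import MlflowClient

# Placeholder tracking URI; point this at your own MLflow server.
client = MlflowClient(tracking_uri="http://mlflow.example.internal:5000")

# Placeholder registered model name.
for mv in client.search_model_versions("name = 'credit-scoring-model'"):
    # current_stage is one of: None, Staging, Production, Archived
    print(mv.version, mv.current_stage, mv.creation_timestamp)
    if mv.run_id:
        run = client.get_run(mv.run_id)
        print(run.data.params)  # training parameters logged at fit time
```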
AI risk classification
Each use case gets an EU AI Act risk classification, which determines how much governance overhead applies (a sketch of that mapping follows the categories):
Prohibited
AI systems banned under the EU AI Act (social scoring, real-time biometric identification in public spaces)
High risk
Systems requiring conformity assessment and ongoing monitoring
Limited risk
Systems with transparency obligations (chatbots, emotion recognition)
Minimal risk
Low-risk applications covered by a voluntary code of conduct
GPAI
General-purpose AI models with broad applicability across many tasks (foundation models)
General risk
Catch-all classification for systems that do not fit the other categories
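To make proportional governance concrete, the classification can drive a simple lookup from risk class to expected controls. The enum and mapping below are illustrative assumptions, not the product's schema.

```python
from enum import Enum

# EU AI Act risk classes as an illustrative enum.
class RiskClass(Enum):
    PROHIBITED = "Prohibited"
    HIGH_RISK = "High risk"
    LIMITED_RISK = "Limited risk"
    MINIMAL_RISK = "Minimal risk"
    GPAI = "GPAI"
    GENERAL_RISK = "General risk"

# Rough indication of the governance effort each class implies.
REQUIRED_CONTROLS = {
    RiskClass.PROHIBITED: "Do not develop or deploy",
    RiskClass.HIGH_RISK: "Conformity assessment plus ongoing monitoring",
    RiskClass.LIMITED_RISK: "Transparency obligations",
    RiskClass.MINIMAL_RISK: "Voluntary code of conduct",
    RiskClass.GPAI: "General-purpose AI model obligations",
    RiskClass.GENERAL_RISK: "Baseline internal review",
}
```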
High-risk system roles
For high-risk systems, you also record your organization's role. Different roles have different obligations under the EU AI Act:
- Provider: Develops or places the AI system on the market
- Deployer: Uses an AI system under their authority
- Importer: Brings AI systems into the EU market
- Distributor: Makes AI systems available on the market
- Product manufacturer: Integrates AI into products under their own name
- Authorized representative: Acts on behalf of a non-EU provider
Lifecycle audit trail
All status changes and lifecycle transitions are logged automatically, with a timestamp and the user who made the change. This record is what auditors and regulators will look at when reviewing your governance process.
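If you export or mirror this trail elsewhere, each entry reduces to a small record. The field names below are assumptions for illustration, not the stored schema.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Illustrative shape of one lifecycle audit entry.
@dataclass(frozen=True)
class LifecycleAuditEntry:
    entity_id: str      # project or model identifier
    changed_by: str     # user who made the change
    from_status: str
    to_status: str
    changed_at: datetime

entry = LifecycleAuditEntry(
    entity_id="model-42",
    changed_by="j.doe@example.com",
    from_status="Pending",
    to_status="Approved",
    changed_at=datetime.now(timezone.utc),
)
```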