
Maintain a central registry for all your AI and ML models

Track every model from development to deployment with approval workflows, risk assessments, and compliance documentation.

Model inventory screenshot

The challenge

Shadow AI is your biggest compliance risk

Most organizations don't know what AI models are running in their environment, who deployed them, or what data they access.

Teams deploy AI models without governance oversight, creating compliance blind spots that auditors will find

No single source of truth means duplicate models, conflicting versions, and wasted resources across departments

When regulators ask 'what AI do you have?', you can't answer with confidence or provide documentation

Model risks like bias, security vulnerabilities, and performance issues go untracked until they cause incidents

MLOps teams and governance teams work in silos, leading to outdated records and manual reconciliation

Audit failures and fines increase as regulations like the EU AI Act require comprehensive AI inventories

4 approval statuses
5 risk categories
Hourly MLFlow sync
4 risk levels

Benefits

Why use Model inventory?

Key advantages for your AI governance program

Register models with provider, version, and capabilities

Track approval status (Approved, Pending, Restricted, Blocked)

Sync automatically with MLFlow pipelines every hour

Manage model-specific risks across 5 categories

Capabilities

What you can do

Core functionality of Model inventory

Model registry with metadata

Track every model with provider, version, and deployment details, maintaining a complete inventory of your AI assets.

Model providers: 18 tracked
OpenAI
Anthropic
Google
Meta
Mistral
HuggingFace
Ollama

4-stage approval workflow

Route models through Pending, In Review, Approved, Rejected gates with designated reviewers at each stage.

GPT-4o deployment: J. Lee, awaiting ML Lead (In Review)
Llama 3.1 fine-tune: CISO + DPO, Mar 12 (Approved)
Whisper v3 update: failed bias threshold (Rejected)
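The 4-stage workflow above can be sketched as a small state machine. The transition map here is a plausible reading of the gates described on this page (Pending, In Review, Approved, Rejected), not VerifyWise's actual implementation.

```python
from enum import Enum

class Stage(Enum):
    PENDING = "Pending"
    IN_REVIEW = "In Review"
    APPROVED = "Approved"
    REJECTED = "Rejected"

# Hypothetical transition map: each stage lists the stages a
# designated reviewer may move a model into next.
TRANSITIONS = {
    Stage.PENDING: {Stage.IN_REVIEW},
    Stage.IN_REVIEW: {Stage.APPROVED, Stage.REJECTED},
    Stage.APPROVED: set(),   # terminal
    Stage.REJECTED: set(),   # terminal
}

def advance(current: Stage, target: Stage) -> Stage:
    """Move a model to the next stage, rejecting invalid jumps."""
    if target not in TRANSITIONS[current]:
        raise ValueError(f"Cannot move from {current.value} to {target.value}")
    return target
```

Modeling the gates explicitly means a model cannot skip review on its way to Approved.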

MLFlow integration sync

Connect to MLFlow experiment tracking to automatically import model metadata, metrics, and lineage information into your governance registry.

Example mitigation: Training data bias, owned by Sarah K., with status tracked as Not Started, In Progress, or Completed.
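To show what the MLFlow import might look like, here is a sketch that maps one registered-model payload (shaped like the MLflow REST API's registered-models response) into the fields the governance registry tracks. The output field names are illustrative, not the actual VerifyWise schema.

```python
def import_registered_model(payload: dict) -> dict:
    """Map one MLFlow registered-model payload into registry fields.

    Assumes the payload shape of MLflow's REST API: a "name", a
    "latest_versions" list (each with "version", "current_stage",
    "run_id"), and a "tags" list of key/value pairs.
    """
    latest = max(payload.get("latest_versions", []),
                 key=lambda v: int(v["version"]), default=None)
    return {
        "name": payload["name"],
        "version": latest["version"] if latest else None,
        "lifecycle_stage": latest["current_stage"] if latest else None,
        "run_id": latest["run_id"] if latest else None,
        "tags": {t["key"]: t["value"] for t in payload.get("tags", [])},
    }

# Example payload as the tracking server might return it
sample = {
    "name": "fraud-scorer",
    "latest_versions": [
        {"version": "1", "current_stage": "Staging", "run_id": "a1"},
        {"version": "2", "current_stage": "Production", "run_id": "b2"},
    ],
    "tags": [{"key": "team", "value": "risk"}],
}
record = import_registered_model(sample)
```

Taking the highest version number keeps the registry pointed at the most recent lineage entry.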

Model risk categorization

Classify models by risk category based on use case, data sensitivity, and deployment scope for targeted governance.

GPT-4o (OpenAI), v2024.08: production, fraud detection pipeline
Claude 3.5 (Anthropic), v2024.06: staging, customer support agent
Llama 3.1 (Meta), 70B: development, internal summarization

Model analytics

Visualize model distribution by provider and risk level with approval pipeline throughput metrics.

Models: 18
Approved: 12
Pending: 4

Why VerifyWise

Built for real-world AI governance

What makes our approach different

MLFlow-native integration

Unlike spreadsheet-based approaches, VerifyWise syncs directly with your ML pipelines every hour. No manual data entry, no stale records, no reconciliation headaches.

Compliance-ready from day one

Model approval statuses, risk categories, and documentation requirements are designed around regulatory expectations. When auditors arrive, you're prepared.

Risk-aware by design

Each model has a dedicated risk register with categories that matter for AI: bias, performance, security, data quality, and compliance. Not generic enterprise risk templates.

Regulatory context

What regulations require

Multiple frameworks now mandate AI inventories and documentation. Here's what you need to know.

EU AI Act

Article 9 requires providers of high-risk AI systems to establish a risk management system. Article 11 mandates technical documentation including system description, design specifications, and monitoring capabilities.

ISO 42001

Clause 6.1.2 requires organizations to identify AI system risks. Clause 8.4 mandates documentation of AI system specifications, including model versions, training data, and performance metrics.

NIST AI RMF

The GOVERN function requires organizations to establish policies and procedures for AI system documentation. The MAP function mandates inventory of AI systems and their purposes.

Technical details

How it works

Implementation details and technical capabilities

4 approval statuses: Approved (production-ready), Restricted (limited use), Pending (awaiting review), Blocked (prohibited)

5 model risk categories: Performance, Bias & Fairness, Security, Data Quality, and Compliance

4 risk levels: Low, Medium, High, Critical with status tracking (Open, In Progress, Resolved, Accepted)

MLFlow integration with hourly sync via BullMQ cron job, max 3 retries with exponential backoff (1s, 2s, 4s)

MLFlow auth options: None, Basic (username/password), or Token-based with optional SSL verification

Security assessment documentation with file uploads and structured assessment data (JSONB)

Model-to-project and model-to-framework linking for complete traceability

Field-level change history tracking with old/new values, user attribution, and timestamps
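The retry schedule described above (max 3 attempts, exponential backoff starting at 1s) can be computed directly. This helper is a sketch of the arithmetic, not VerifyWise code; the doubling rule matches the 1s, 2s, 4s delays stated in the list.

```python
def backoff_schedule(attempts: int, base_delay_ms: int) -> list[int]:
    """Delay before each retry attempt: the base delay doubles each time.

    With 3 attempts and a 1000 ms base this yields 1s, 2s, 4s,
    matching the sync job's retry policy described above.
    """
    return [base_delay_ms * 2 ** n for n in range(attempts)]
```

Bounding retries at three attempts keeps a flaky tracking server from stalling the hourly cron cycle.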

Supported frameworks

EU AI Act
ISO 42001

Integrations

MLFlow
Evidence Hub
Vendor Management
Risk Management
Use Cases
Compliance Frameworks

FAQ

Common questions

Frequently asked questions about Model inventory

How does the MLFlow integration work?

VerifyWise connects to your MLFlow tracking server and syncs automatically every hour. It imports model name, version, description, lifecycle stage, run ID, tags, metrics, and parameters. Authentication supports none, basic (username/password), or token-based with configurable SSL verification.

What information is stored for each model?

Each model record includes: provider (e.g., OpenAI, Anthropic), model name, version, approver, capabilities, security assessment flag and data, approval status, biases, limitations, hosting provider, reference link, and linked projects/frameworks. Full change history is maintained.

How are model risks categorized?

Model risks are categorized into 5 types: Performance (accuracy, latency), Bias & Fairness (discrimination, representation), Security (vulnerabilities, attacks), Data Quality (training data issues), and Compliance (regulatory violations). Each risk has Low/Medium/High/Critical severity levels.
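The five risk categories and four severity levels can be captured as simple validated lookups. The helper below is hypothetical, shown only to make the categorization scheme concrete; new risks start in the Open status named in the technical details.

```python
RISK_CATEGORIES = {
    "Performance", "Bias & Fairness", "Security",
    "Data Quality", "Compliance",
}
SEVERITIES = ("Low", "Medium", "High", "Critical")

def register_risk(category: str, severity: str) -> dict:
    """Create a risk entry, validating against the 5 categories
    and 4 severity levels described on this page."""
    if category not in RISK_CATEGORIES:
        raise ValueError(f"Unknown category: {category}")
    if severity not in SEVERITIES:
        raise ValueError(f"Unknown severity: {severity}")
    return {"category": category, "severity": severity, "status": "Open"}
```

Validating at entry time keeps the per-model risk register consistent across teams.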

What are the approval statuses?

Models have 4 statuses: Approved (cleared for production use), Restricted (limited use with conditions), Pending (awaiting review), and Blocked (prohibited from use). Each model has a designated approver and status date for audit tracking.

Ready to get started?

See how VerifyWise can help you govern AI with confidence.
