
Managing model inventory

Register and track all AI models across your organization.

Overview

A model inventory is a comprehensive catalog of all AI models and systems used within your organization. Just as financial assets require tracking for accounting and compliance, AI models require similar oversight to ensure proper governance, risk management, and regulatory compliance.

Without a centralized inventory, organizations often lose track of which AI models are in use, who is responsible for them, and what data they process. This lack of visibility creates compliance risks, security blind spots, and operational inefficiencies. A well-maintained inventory answers fundamental questions: What AI do we have? Where is it deployed? Who owns it? What risks does it present?

Why maintain a model inventory?

  • Regulatory compliance: The EU AI Act and other regulations require organizations to maintain records of AI systems, especially high-risk applications
  • Risk visibility: You cannot manage risks you do not know exist. An inventory surfaces all AI systems for risk assessment
  • Accountability: Clear ownership ensures someone is responsible for each model's performance, compliance, and maintenance
  • Audit readiness: When auditors or regulators ask about your AI use, you can provide immediate, accurate answers
  • Resource planning: Understanding your AI landscape helps allocate governance resources where they matter most

Note
Maintaining an accurate model inventory is a core requirement for EU AI Act compliance and ISO 42001 certification. VerifyWise automatically tracks changes to your inventory for audit purposes.

Accessing the model inventory

Navigate to Model inventory from the main sidebar. The inventory displays all registered models in a searchable table with filtering options for status, provider, and other attributes.

[Screenshot: Model inventory page showing status cards for Approved, Restricted, Pending, and Blocked models, plus a table listing models with provider, version, approver, and security assessment columns]
The model inventory provides a centralized view of all AI models in your organization.

Registering a new model

To add a new AI model to your inventory, click the Add model button and provide the required information:

  1. Provider: The organization or service that provides the model (e.g., OpenAI, Anthropic, internal team)
  2. Model name: The specific model identifier (e.g., GPT-4, Claude 3, custom-classifier-v2)
  3. Version: The version number or release identifier
  4. Approver: The person responsible for approving this model for use
[Screenshot: Add a new model form with fields for provider, model name, version, approver, status, capabilities, use cases, frameworks, reference link, biases, hosting provider, and limitations]
The model registration form captures comprehensive metadata for governance tracking.
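
The registration fields above can be pictured as a simple record. The sketch below is a hypothetical illustration of such an entry, not VerifyWise's actual API or schema; the field names are assumptions based on the form described above.

```python
from dataclasses import dataclass, field

@dataclass
class ModelRecord:
    """Hypothetical inventory entry mirroring the registration form."""
    provider: str            # e.g. "OpenAI", "Anthropic", or an internal team
    model_name: str          # e.g. "GPT-4", "Claude 3", "custom-classifier-v2"
    version: str             # version number or release identifier
    approver: str            # person responsible for approving this model
    status: str = "Pending"  # new models start awaiting review
    capabilities: list[str] = field(default_factory=list)
    known_biases: list[str] = field(default_factory=list)
    limitations: list[str] = field(default_factory=list)
    hosting_provider: str = ""

record = ModelRecord(
    provider="Anthropic",
    model_name="Claude 3",
    version="3.0",
    approver="jane.doe@example.com",
)
```

Capturing the optional attributes (capabilities, biases, limitations, hosting) as part of the same record keeps governance metadata next to the model it describes.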

Model attributes

Each model in your inventory can include detailed attributes to support governance and risk assessment:

Capabilities

Document what the model can do — text generation, classification, image analysis, etc.

Known biases

Record any identified biases or fairness concerns with the model

Limitations

Document constraints and scenarios where the model should not be used

Hosting provider

Where the model is hosted — cloud provider, on-premises, or hybrid

Approval status

Every model in the inventory has an approval status that controls whether it can be used in your organization:

  • Pending: Model is awaiting review and approval before use
  • Approved: Model has been reviewed and authorized for production use
  • Restricted: Model is approved for limited use cases or specific projects only
  • Blocked: Model is not authorized for use in the organization

Best practice
Establish clear criteria for each approval status in your AI governance policy. This ensures consistent decision-making when evaluating new models.
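
The four statuses above can drive a simple usage gate. The sketch below is a hypothetical policy check, assuming Restricted models carry a list of approved projects; VerifyWise's actual enforcement may differ.

```python
def may_use(status: str, project: str, approved_projects: set[str]) -> bool:
    """Return True if a model with this approval status may be used for `project`."""
    if status == "Approved":
        return True                          # authorized for production use
    if status == "Restricted":
        return project in approved_projects  # limited to specific projects
    return False                             # Pending and Blocked: not usable

# Example: a Restricted model is usable only where it was approved.
ok = may_use("Restricted", "fraud-detection", {"fraud-detection"})
```

Encoding the gate this way makes the policy testable: every status maps to an explicit rule rather than an ad-hoc decision.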

Security assessment

Models can be flagged as having completed a security assessment. When enabled, you can attach security assessment documentation directly to the model record for easy reference during audits.

Linking evidence

The model inventory integrates with the Evidence Hub, allowing you to link supporting documentation to each model:

  • Model cards and technical documentation
  • Vendor contracts and data processing agreements
  • Security assessment reports
  • Bias testing results and fairness evaluations
  • Performance benchmarks and validation studies

MLflow integration

For organizations using MLflow for ML operations, VerifyWise can import model training metadata directly. This provides visibility into model development details including training timestamps, parameters, and lifecycle stages.
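
As a sketch of what such an import involves, the snippet below maps an MLflow-style run record (stubbed here as plain dictionaries, as the MLflow Python client would return via its `info` and `data` sections) onto inventory fields. The inventory field names are assumptions; only the run layout follows MLflow's documented structure.

```python
from datetime import datetime, timezone

def run_to_inventory_entry(run: dict) -> dict:
    """Translate an MLflow-style run record into inventory metadata.

    `run` mirrors the shape of mlflow.entities.Run: an `info` section with
    identifiers and millisecond timestamps, and a `data` section with params.
    """
    info, data = run["info"], run["data"]
    return {
        "run_id": info["run_id"],
        "lifecycle_stage": info["lifecycle_stage"],
        # MLflow reports start_time in milliseconds since the Unix epoch.
        "trained_at": datetime.fromtimestamp(
            info["start_time"] / 1000, tz=timezone.utc
        ).isoformat(),
        "parameters": dict(data["params"]),
    }

run = {
    "info": {"run_id": "abc123", "lifecycle_stage": "active",
             "start_time": 1_700_000_000_000},
    "data": {"params": {"learning_rate": "0.001", "epochs": "10"}},
}
entry = run_to_inventory_entry(run)
```

In a live integration the `run` dictionary would come from the MLflow tracking server rather than being constructed by hand.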

Change history

VerifyWise automatically maintains a complete audit trail for every model in your inventory. Each change records:

  • The field that was modified
  • Previous and new values
  • Who made the change
  • When the change occurred

This history is essential for demonstrating governance practices during compliance audits and regulatory reviews.
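
A change record like the one described can be produced by diffing a model record before and after an edit. The sketch below is an assumption about how such an audit entry might be built, not VerifyWise's internal mechanism.

```python
from datetime import datetime, timezone

def diff_changes(before: dict, after: dict, changed_by: str) -> list[dict]:
    """Return one audit entry per field whose value changed."""
    now = datetime.now(timezone.utc).isoformat()
    return [
        {
            "field": key,                     # the field that was modified
            "previous": before.get(key),      # previous value
            "new": after.get(key),            # new value
            "changed_by": changed_by,         # who made the change
            "changed_at": now,                # when the change occurred
        }
        for key in sorted(set(before) | set(after))
        if before.get(key) != after.get(key)
    ]

before = {"status": "Pending", "version": "1.0"}
after = {"status": "Approved", "version": "1.0"}
trail = diff_changes(before, after, changed_by="jane.doe@example.com")
```

Each entry carries exactly the four pieces of information listed above, which is what an auditor needs to reconstruct the history of a model.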

Datasets

The datasets tab within model inventory allows you to catalog and manage the data used for training, validating, and testing your AI models. Proper dataset management is essential for AI governance — understanding what data feeds your models helps ensure compliance, identify potential biases, and maintain data quality standards.

Accessing datasets

Navigate to Model inventory from the main sidebar, then select the Datasets tab. The datasets view displays all registered datasets in a searchable table with status summary cards at the top.

Adding a new dataset

To add a new dataset to your inventory, click the Add new dataset button and provide the required information:

  1. Name: A descriptive name for the dataset
  2. Description: Detailed explanation of what the dataset contains and its intended use
  3. Version: The version identifier for tracking dataset iterations
  4. Owner: The person or team responsible for maintaining the dataset
  5. Type: The purpose of the dataset (training, validation, testing, production, or reference)
  6. Function: The dataset's role in AI model development
  7. Source: Where the data originated from
  8. Classification: The sensitivity level of the data
  9. Status: The current lifecycle stage of the dataset
  10. Status date: When the current status was set
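
The required fields above, together with the type, classification, and status vocabularies described in this section, lend themselves to a simple completeness check. This is a hypothetical validation sketch; the field names and vocabularies come from this guide, but the check itself is an assumption.

```python
DATASET_TYPES = {"training", "validation", "testing", "production", "reference"}
CLASSIFICATIONS = {"Public", "Internal", "Confidential", "Restricted"}
STATUSES = {"Draft", "Active", "Deprecated", "Archived"}
REQUIRED_FIELDS = {"name", "description", "version", "owner", "type",
                   "function", "source", "classification", "status", "status_date"}

def validate_dataset(entry: dict) -> list[str]:
    """Return a list of problems; an empty list means the entry passes."""
    problems = [f"missing field: {f}" for f in sorted(REQUIRED_FIELDS - entry.keys())]
    if "type" in entry and entry["type"] not in DATASET_TYPES:
        problems.append(f"unknown type: {entry['type']!r}")
    if "classification" in entry and entry["classification"] not in CLASSIFICATIONS:
        problems.append(f"unknown classification: {entry['classification']!r}")
    if "status" in entry and entry["status"] not in STATUSES:
        problems.append(f"unknown status: {entry['status']!r}")
    return problems

entry = {
    "name": "customer-emails-v3", "description": "Support emails for training",
    "version": "3.0", "owner": "data-team", "type": "training",
    "function": "fine-tuning corpus", "source": "internal CRM export",
    "classification": "Confidential", "status": "Active",
    "status_date": "2024-06-01",
}
```

Running such a check before registration prevents half-documented datasets from entering the inventory.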

Dataset types

Datasets can be categorized by their purpose in the machine learning lifecycle:

  • Training: Data used to train the model and learn patterns
  • Validation: Data used to tune hyperparameters and prevent overfitting during training
  • Testing: Data used to evaluate final model performance before deployment
  • Production: Data that the deployed model processes in live environments
  • Reference: Baseline or benchmark data used for comparison

Data classification

Each dataset should be classified according to its sensitivity level:

  • Public: Data that can be freely shared without restrictions
  • Internal: Data intended for use within the organization only
  • Confidential: Sensitive data requiring access controls and handling procedures
  • Restricted: Highly sensitive data with strict access limitations and regulatory requirements

PII handling
When a dataset contains personally identifiable information (PII), mark it accordingly and document the specific types of PII present. This is critical for compliance with GDPR, CCPA, and other privacy regulations.

Dataset status

Every dataset has a status indicating its current lifecycle stage:

  • Draft: Dataset is being prepared or documented but not yet ready for use
  • Active: Dataset is approved and currently in use for model development or production
  • Deprecated: Dataset is no longer recommended for new use but may still be referenced by existing models
  • Archived: Dataset is retained for historical purposes but not available for active use
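
The lifecycle above implies a set of sensible status transitions. The transition graph below is an assumption about how such a lifecycle might be enforced (for example, that an archived dataset cannot be reactivated); VerifyWise may permit other moves.

```python
# Allowed forward transitions in the dataset lifecycle described above.
TRANSITIONS: dict[str, set[str]] = {
    "Draft": {"Active"},                   # ready for use once documented
    "Active": {"Deprecated", "Archived"},  # retire or archive a live dataset
    "Deprecated": {"Active", "Archived"},  # a deprecated set may be reinstated
    "Archived": set(),                     # archived datasets stay archived
}

def can_transition(current: str, target: str) -> bool:
    """Return True if moving a dataset from `current` to `target` is allowed."""
    return target in TRANSITIONS.get(current, set())
```

Making the graph explicit turns lifecycle policy into something you can review and test rather than tribal knowledge.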

Dataset attributes

Each dataset can include additional attributes to support governance and data quality:

Known biases

Document any identified biases in the data that could affect model outcomes

Bias mitigation

Record steps taken to identify, measure, and reduce bias in the dataset

Collection method

Describe how the data was gathered — surveys, scraping, APIs, manual entry, etc.

Preprocessing steps

Document transformations, cleaning, and normalization applied to the raw data

Linking datasets to models

When creating or editing a dataset, you can link it to one or more models in your inventory. This creates traceability between your data assets and the AI systems that use them — essential for impact assessments and understanding how data issues might propagate through your AI portfolio.

Best practice
Link every training and validation dataset to its corresponding models. When data quality issues are discovered, you can quickly identify all affected models.
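
The traceability this linking creates can be sketched as a simple lookup in both directions. The link table below is hypothetical example data; the two helpers show the impact-analysis queries the text describes.

```python
# dataset -> linked model names, as maintained in the inventory (example data)
links: dict[str, set[str]] = {
    "customer-emails-v3": {"support-classifier-v2", "reply-drafter-v1"},
    "public-benchmarks": {"support-classifier-v2"},
}

def affected_models(dataset: str, links: dict[str, set[str]]) -> set[str]:
    """Models to review when `dataset` is found to have a quality issue."""
    return links.get(dataset, set())

def datasets_feeding(model: str, links: dict[str, set[str]]) -> set[str]:
    """Reverse lookup: every dataset linked to `model`."""
    return {d for d, models in links.items() if model in models}
```

The forward lookup answers "which models does this data issue touch?"; the reverse lookup supports impact assessments for a single model.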

Linking datasets to use cases

In addition to models, datasets can be linked to specific use cases (projects) in your organization. This helps maintain a clear view of which data supports which business applications, supporting both governance oversight and impact analysis.

Optional fields

Beyond the required fields, you can document additional metadata to enhance governance:

  • License: The licensing terms governing data use (e.g., CC BY 4.0, MIT, proprietary)
  • Format: The data format (e.g., CSV, JSON, Parquet)
  • PII types: Specific types of personally identifiable information when PII is present