Use cases
Create and manage AI use cases with risk classification, framework linking, team assignment, and approval workflows.
Overview
A use case is how you describe an AI system and what it does inside your organization. Think of it as the container that holds everything together: risks, frameworks, assessments, controls, vendor relationships, and linked models all hang off a use case.
Every use case gets a unique ID (UC-001, UC-002, and so on) and follows the AI system from scoping through deployment. You set its risk classification, assign someone to own it, attach compliance frameworks, and build up a risk register, all from one place.
Creating a use case
Head to the Use cases page and hit New use case. You might see a short screening step asking whether the project involves AI. Answer or skip it. The creation form comes next.
What you need to fill in
- Title: A short name, up to 64 characters. Has to be unique in your organization.
- Goal: What the AI system is supposed to accomplish, up to 256 characters.
- Owner: Who is responsible. They get an email when you assign them.
- Start date: When work started or is expected to start.
- AI risk classification: Pick one: Prohibited, High risk, Limited risk, or Minimal risk. This drives how much oversight the EU AI Act expects.
- Type of high risk role: Your organization's relationship to the AI system, one of Deployer, Provider, Distributor, Importer, Product manufacturer, or Authorized representative.
- Geography: Where the system operates, one of Global, Europe, North America, South America, Asia, or Africa.
- Target industry: The sector where the AI system is used.
Optional fields
- Description: A longer explanation of the system and how it works.
- Members: Other people who need access. The owner is added automatically.
- Status: Starts at "Not started." You can also choose In progress, Under review, Completed, Closed, On hold, or Rejected.
- Approval workflow: If your organization requires sign-off before work begins, pick a workflow here. Frameworks won't be created until the use case is approved.
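Taken together, the form collects something like the shape below. This is a sketch for orientation only: the field names and types are illustrative assumptions, not VerifyWise's actual API.

```typescript
// Hypothetical shape of a use case creation payload.
// Field names are illustrative; the real VerifyWise API may differ.
type RiskClassification = "Prohibited" | "High risk" | "Limited risk" | "Minimal risk";
type HighRiskRole =
  | "Deployer" | "Provider" | "Distributor"
  | "Importer" | "Product manufacturer" | "Authorized representative";

interface NewUseCase {
  title: string;              // required, up to 64 chars, unique per organization
  goal: string;               // required, up to 256 chars
  ownerId: string;            // required; the owner is emailed on assignment
  startDate: string;          // required, ISO date
  riskClassification: RiskClassification;
  role: HighRiskRole;
  geography: "Global" | "Europe" | "North America" | "South America" | "Asia" | "Africa";
  targetIndustry: string;
  description?: string;       // optional fields below
  memberIds?: string[];       // owner is added automatically
  status?: string;            // defaults to "Not started"
  approvalWorkflowId?: string;
}

// Minimal validation enforcing the documented length limits.
function validate(uc: NewUseCase): string[] {
  const errors: string[] = [];
  if (!uc.title || uc.title.length > 64) errors.push("title must be 1-64 characters");
  if (!uc.goal || uc.goal.length > 256) errors.push("goal must be 1-256 characters");
  return errors;
}
```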
Attaching frameworks
During creation you pick which compliance frameworks apply. EU AI Act, ISO 42001, ISO 27001, and NIST AI RMF come built in, and your organization may have plugin frameworks installed on top. You can always add or remove frameworks later from settings.
Frameworks can be scoped to one use case (project-based) or shared across the whole organization. Go with project-based when different AI systems face different regulatory requirements.
Inside a use case
Click a use case to open it. The detail view is split into tabs.
Overview
The landing tab. Shows the use case details, which frameworks are linked, and a breakdown of risk levels.
Use case risks
A risk register scoped to this use case. Create, edit, and delete risks here. Each risk can be tied to specific framework controls and assessments. A badge on the tab shows the count.
Anything you create here also shows up on the global Risk management page alongside risks from other use cases.
Linked models
Which AI models from your inventory are tied to this use case. Link and unlink them here. The connection makes it clear to auditors which models serve which business processes.
Frameworks and regulations
Split into two sub-tabs. Controls tracks progress against each framework's requirements. Assessments tracks questionnaire-style evaluations. Completion percentages update automatically as you work through items.
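The completion percentage is simply the share of items finished so far. A minimal sketch of that arithmetic, assuming each control or assessment item is either done or not (not VerifyWise's actual implementation):

```typescript
// Sketch: completion percentage across a framework's items.
interface ControlItem { id: string; done: boolean; }

function completionPercent(items: ControlItem[]): number {
  if (items.length === 0) return 0;           // nothing to track yet
  const done = items.filter(i => i.done).length;
  return Math.round((done / items.length) * 100);
}
```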
CE marking
For AI systems that require a CE mark under the EU AI Act. This tab appears when the relevant plugin is active and walks through the conformity assessment steps.
Activity
A chronological log of every change made to the use case: who changed what, when, and the old and new values. Useful for audits and internal reviews.
Monitoring
Post-market monitoring for deployed AI systems. Track ongoing performance, incidents, and compliance status after the system goes live.
Settings
Change the basics (title, goal, status), transfer ownership, manage team members, add or remove frameworks, or delete the use case. Every edit gets recorded in the activity log.
Approval workflows
Some organizations need sign-off before a use case moves forward. If you assign an approval workflow during creation, here is what happens while it is pending:
- Basic fields (title, goal, description, status) stay editable
- Frameworks, Risks, and Linked models tabs are locked
- Framework creation is held back until approval comes through
- A rejection keeps the tabs locked and sets the status to Rejected
After approval, the deferred frameworks get created and all tabs open up.
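The gating rule above can be sketched as a small lookup. The state and tab names mirror the docs, but the predicate itself is an illustrative assumption about how the UI decides what to lock:

```typescript
// Sketch of the approval gating described above; illustrative only.
type ApprovalState = "none" | "pending" | "approved" | "rejected";

const GATED_TABS = new Set([
  "Frameworks and regulations",
  "Use case risks",
  "Linked models",
]);

function isTabLocked(tab: string, approval: ApprovalState): boolean {
  if (approval === "none" || approval === "approved") return false;
  return GATED_TABS.has(tab); // locked while pending, and after rejection
}
```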
Working with the list
The main page shows every non-organizational use case. A few tools help you find what you need:
- Search: Looks through use case titles and UC IDs.
- Filter: Narrow by name, risk level, owner, status, or start date.
- Group: Organize by risk level, role, owner, or status. Groups can be collapsed and sorted.
- Columns: Show or hide columns: UC ID, title, risk classification, role, start date, last updated.
Export
Pull the full list into CSV or Excel. The export covers UC ID, title, risk level, role, start date, last updated, owner, and status.
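For reference, an export row with those columns looks roughly like this CSV sketch (column order and quoting are assumptions; the real export may differ):

```typescript
// Sketch of the documented export columns as a CSV writer.
interface UseCaseRow {
  ucId: string; title: string; riskLevel: string; role: string;
  startDate: string; lastUpdated: string; owner: string; status: string;
}

function toCsv(rows: UseCaseRow[]): string {
  const header = "UC ID,Title,Risk level,Role,Start date,Last updated,Owner,Status";
  const lines = rows.map(r =>
    [r.ucId, r.title, r.riskLevel, r.role, r.startDate, r.lastUpdated, r.owner, r.status]
      // Quote any field containing a comma, quote, or newline.
      .map(v => /[",\n]/.test(v) ? `"${v.replace(/"/g, '""')}"` : v)
      .join(","));
  return [header, ...lines].join("\n");
}
```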
Defining scope
Each use case can carry a detailed scope that pins down the technical and compliance profile of the AI system.
- AI environment: Where and how the system runs.
- Technology type: Machine learning, NLP, computer vision, or another category.
- Novel technology: Whether it uses new or experimental AI techniques.
- Personal data: Whether the system handles personal data (matters for GDPR).
- Monitoring: Whether you have post-deployment monitoring running.
- Unintended outcomes: Adverse effects spotted during scoping.
From your answers, VerifyWise calculates a risk level (High, Medium, or Low) and flags the compliance requirements that apply. A system processing personal data, for instance, gets GDPR and DPIA requirements added automatically.
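To make the idea concrete, here is a toy version of that mapping. The scoring weights and thresholds are invented for illustration; the actual rules live inside VerifyWise. Only the personal-data-to-GDPR/DPIA link comes from the docs.

```typescript
// Illustrative only: how scope answers might map to a risk level
// and auto-flagged requirements. Weights and thresholds are made up.
interface ScopeAnswers {
  personalData: boolean;
  novelTechnology: boolean;
  monitoring: boolean;
  unintendedOutcomes: boolean;
}

function assess(scope: ScopeAnswers): { level: "High" | "Medium" | "Low"; requirements: string[] } {
  const requirements: string[] = [];
  let score = 0;
  if (scope.personalData) { score += 2; requirements.push("GDPR", "DPIA"); } // per the docs
  if (scope.novelTechnology) score += 2;
  if (scope.unintendedOutcomes) score += 2;
  if (!scope.monitoring) score += 1; // no post-deployment monitoring raises risk
  const level = score >= 4 ? "High" : score >= 2 ? "Medium" : "Low";
  return { level, requirements };
}
```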
Who can do what
- Admin: Everything. Create, edit, delete, manage settings, handle approvals.
- Editor: Create and edit use cases, add frameworks and risks.
- Reviewer: Read plus approve or reject.
- Auditor: Read only.
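The role matrix above reduces to a simple lookup. The action names are illustrative shorthand for the documented capabilities, not identifiers from VerifyWise's codebase:

```typescript
// Sketch of the documented role matrix as a permission check.
type Role = "Admin" | "Editor" | "Reviewer" | "Auditor";
type Action = "read" | "create" | "edit" | "delete" | "approve" | "manageSettings";

const PERMISSIONS: Record<Role, Set<Action>> = {
  Admin:    new Set<Action>(["read", "create", "edit", "delete", "approve", "manageSettings"]),
  Editor:   new Set<Action>(["read", "create", "edit"]),
  Reviewer: new Set<Action>(["read", "approve"]),
  Auditor:  new Set<Action>(["read"]),
};

function can(role: Role, action: Action): boolean {
  return PERMISSIONS[role].has(action);
}
```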
When you get notified
- Someone assigns you as owner
- You are added as a team member (email varies by role)
- An approval request changes status
- A new use case is created (Slack, if your organization has it connected)
Where use cases plug in
- Risk management: Risks created inside a use case feed into the global register. You can also link existing risks from the risk management page.
- Frameworks: Compliance progress is tracked per use case. Controls and assessments stay scoped to the use case they belong to.
- Model inventory: Link models so there is a clear trail from business process to underlying AI.
- Vendors: Vendor risks can be tied to specific use cases.
- Dashboard: The main dashboard rolls up data from every use case: compliance progress, risk distribution, task status.
- Evidence hub: Attach evidence items to use cases for audit readiness.
Practical tips
- Keep it one-to-one. One AI system, one use case. Bundling unrelated systems together confuses auditors and weakens your compliance posture.
- Classify risk early. The EU AI Act classification shapes how much documentation and oversight you need. Fix it at the start rather than retrofitting later.
- Pick the right owner. They get the notifications and carry the accountability. Choose someone who actually has authority over the AI system, not just a name on paper.
- Attach frameworks on day one. Tracking compliance from the beginning is always easier than catching up.
- Fill in the scope section. The auto-generated compliance requirements can flag obligations you would otherwise overlook.