
Fundamental Rights Impact Assessment

Assess the impact of high-risk AI systems on fundamental rights as required by EU AI Act Article 27.

Overview

The Fundamental Rights Impact Assessment (FRIA) helps you comply with EU AI Act Article 27, which requires deployers of high-risk AI systems to assess the impact on fundamental rights before putting a system into use.

The FRIA in VerifyWise is an 8-section assessment. Fields auto-save as you work, risk scores update after each change, and you can save snapshots to keep a versioned audit trail.

Accessing the FRIA

  1. Open any project from your dashboard.
  2. Click the FRIA tab in the project view.
  3. The assessment is created automatically the first time you open it.
Auto-creation
You don't need to manually create a FRIA. One is generated per project when you first visit the tab, pre-filled with the project name, organization, and assessment owner.

Assessment sections

The FRIA is divided into 8 sections. Use the sidebar on the left to jump between them, or scroll through the page. The sidebar highlights the section you're currently viewing.

1. Organisation & system profile

Identify the deployer, system name, assessment owner, date, and operational context.

2. Applicability & scope

Classify whether the system is high-risk, select the applicable Annex III category, and set the review cycle.

3. Affected persons & groups

Describe who is affected by the AI system and their vulnerability context.

4. Fundamental rights matrix

Assess 10 rights from the EU Charter. Flag affected rights, rate severity and confidence, document mitigation.

5. Specific risks of harm

Build a risk register with likelihood and severity ratings, either by adding risks directly or by importing them from your project risk register.

6. Human oversight & transparency

Document oversight measures, transparency practices, redress processes, and data governance.

7. Stakeholder consultation

Record legal review, DPO review, and owner approval status. Add stakeholder consultation notes.

8. Summary & recommendation

Make a deployment decision and document any conditions.

How auto-save works

Every field auto-saves as you type. When you leave a field or stop typing for half a second, your changes are sent to the server. You'll see a brief "Saving..." indicator next to the action buttons, followed by "Saved" when complete.

No save button needed
You never need to manually save. Just fill in the fields and move on. If you close the browser and come back, your work is there.
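The half-second idle rule described above is a classic debounce. The sketch below illustrates the mechanism only; it is not VerifyWise's actual implementation, and the `AutoSaver` class, `save_fn` callback, and `delay` parameter are all hypothetical names chosen for the example.

```python
# Illustrative debounce sketch: each keystroke restarts a short timer,
# and the save only fires once typing has paused for `delay` seconds.
import threading

class AutoSaver:
    def __init__(self, save_fn, delay=0.5):
        self.save_fn = save_fn   # called with the field's latest value
        self.delay = delay       # idle time before saving, in seconds
        self._timer = None

    def on_change(self, value):
        if self._timer is not None:
            self._timer.cancel()  # still typing: restart the countdown
        self._timer = threading.Timer(self.delay, self.save_fn, args=(value,))
        self._timer.start()
```

Typing "h", "he", "hel" in quick succession would trigger a single save of "hel" once the pause elapses, which is why rapid edits don't flood the server with requests.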

Understanding the stat cards

Four cards at the top summarize the current state of your assessment:

  • Completion: Percentage of assessment fields filled in, including whether rights have been reviewed and risk items added.
  • Risk score: A score from 0 to 100 based on flagged rights (weighted by severity and confidence) and risk items (weighted by likelihood and severity).
  • Rights flagged: How many of the 10 fundamental rights you've marked as affected.
  • Status: Current assessment status (draft or submitted) and how many snapshots have been saved.

Fundamental rights matrix

Section 4 contains 10 rights from the EU Charter of Fundamental Rights. For each right:

  1. Check the Flagged box if the AI system could affect this right.
  2. Set the Severity (how serious the impact could be) and Confidence (how certain you are).
  3. Describe the Impact pathway explaining how the system could affect this right.
  4. Document the Mitigation measures you've put in place.
| Right | Charter article | Example impact |
| --- | --- | --- |
| Human dignity | Art. 1 | System makes decisions that undermine autonomy |
| Right to privacy | Art. 7 | Processing personal data beyond stated purpose |
| Data protection | Art. 8 | Insufficient data minimization or retention |
| Non-discrimination | Art. 21 | Biased outputs across protected groups |
| Gender equality | Art. 23 | Gender-based scoring differences |
| Fair working conditions | Art. 31 | Worker surveillance or automated management |
| Consumer protection | Art. 38 | Misleading AI-generated recommendations |
| Freedom of expression | Art. 11 | Content filtering that restricts lawful speech |
| Effective remedy | Art. 47 | No way to challenge automated decisions |
| Rights of the child | Art. 24 | System processes children's data without safeguards |

Managing risk items

Section 5 lets you build a FRIA-specific risk register. You can add risks manually or import them from your project's existing risk register.

Adding risks manually

  1. Click Add risk item in Section 5.
  2. Describe the risk.
  3. Set the likelihood (Low/Medium/High) and severity (Low/Medium/High).
  4. Document existing controls and any further action needed.

Importing from project risks

  1. Click Import from project risks in Section 5.
  2. Select one or more risks from the list.
  3. Click Import selected. The risk description, likelihood, and severity are copied over.

Attaching evidence

Each section has an Attach evidence button at the bottom. You can link existing files from the evidence hub or upload new ones. Evidence is stored per section, so auditors can see exactly which documents support each part of the assessment.

Saving snapshots

Snapshots are point-in-time copies of your entire assessment. Save one before a review meeting, after completing a major section, or whenever you want a record you can compare against later.

  1. Click Save snapshot above the assessment sections.
  2. Optionally add a note (e.g., "Completed sections 1-4" or "Pre-review baseline").
  3. Click Save snapshot to confirm.
Snapshots vs auto-save
Auto-save continuously saves your latest changes. Snapshots are manual checkpoints you create when you want a permanent record of the assessment at a specific point in time.

Viewing version history

  1. Click Version history above the assessment sections.
  2. A modal shows all saved snapshots with their note, author, and date.
  3. Click any row to expand it and see what changed from the previous version.

The diff view shows changed fields side by side: the old value (with strikethrough) and the new value (in green). It also tracks rights flagging changes and risk item count changes between versions.

How the risk score is calculated

The risk score (0-100) combines two factors:

  • Flagged rights: Each flagged right contributes (severity x 15) + (confidence x 5) points.
  • Risk items: Each risk item contributes likelihood x severity x 3 points, where Low=1, Medium=2, High=3.
| Score range | Risk level | What it means |
| --- | --- | --- |
| 0-29 | Low | Minimal rights impact identified |
| 30-59 | Medium | Some rights concerns that need attention |
| 60-100 | High | Significant rights impact requiring review |
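The scoring rules above can be sketched as a small function. This is an illustration under stated assumptions, not the exact implementation: the function names are hypothetical, the Low=1/Medium=2/High=3 mapping is only stated in the guide for risk items (applying it to rights severity and confidence is an assumption), and the cap at 100 is assumed from the documented 0-100 range.

```python
# Illustrative sketch of the FRIA risk score, assuming Low=1, Medium=2,
# High=3 for all ratings and a cap at the documented maximum of 100.
LEVEL = {"Low": 1, "Medium": 2, "High": 3}

def fria_risk_score(flagged_rights, risk_items):
    """flagged_rights: list of (severity, confidence) level names.
    risk_items: list of (likelihood, severity) level names."""
    score = 0
    for severity, confidence in flagged_rights:
        score += LEVEL[severity] * 15 + LEVEL[confidence] * 5
    for likelihood, severity in risk_items:
        score += LEVEL[likelihood] * LEVEL[severity] * 3
    return min(score, 100)  # assumed cap at the documented maximum

def risk_level(score):
    """Map a 0-100 score to the band from the table above."""
    if score < 30:
        return "Low"
    if score < 60:
        return "Medium"
    return "High"
```

For example, one flagged right rated High severity and Medium confidence contributes 3×15 + 2×5 = 55 points (Medium band); adding one Medium/Medium risk item contributes 2×2×3 = 12 more, for 67 total (High band).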

Who can do what

| Action | Required role |
| --- | --- |
| View the FRIA | Any authenticated user |
| Edit assessment fields | Admin or Editor |
| Add/edit/delete risk items | Admin or Editor |
| Update rights matrix | Admin or Editor |
| Save snapshots | Admin or Editor |
| Attach evidence | Admin or Editor |
| View version history | Any authenticated user |
