AI Detection

Risk scoring

Understand the AI Governance Risk Score, LLM-enhanced analysis, and suggested risks.

Overview

The AI Governance Risk Score (AGRS) grades scan findings across multiple risk dimensions into a single score (0 to 100) and letter grade (A through F). It gives your team a quick read on the governance risk of a scanned repository.

Scores show up on the scan details page after a scan finishes. You can calculate the score manually, or turn on LLM-enhanced analysis for written summaries, recommendations, and suggested risks.

Score cards

Once the score is calculated, four cards appear across the top of the scan details page:

  • Overall score: Score from 0 to 100 with a risk label: Low risk (80+), Moderate risk (60 to 79), or High risk (below 60)
  • Grade: Letter grade from A (Excellent) to F (Critical), with the calculation timestamp
  • Dimensions at risk: Count of dimensions scoring below the 70-point threshold
  • Dimension breakdown: Horizontal progress bars showing the score for each risk dimension
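The risk-label thresholds on the overall score card can be sketched as a simple mapping (the function name is illustrative, not from the product):

```python
def risk_label(score: int) -> str:
    """Map an overall AGRS score (0-100) to its risk label.

    Thresholds follow the score card description: Low risk at 80+,
    Moderate risk from 60 to 79, High risk below 60.
    """
    if score >= 80:
        return "Low risk"
    if score >= 60:
        return "Moderate risk"
    return "High risk"
```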

Risk dimensions

The score is made up of five weighted dimensions. Each starts at 100 and accrues penalties based on what the scan finds. The engine treats inventory items (libraries, dependencies, API calls) differently from risk indicators (secrets, vulnerabilities). Inventory items only penalize when they're medium or high risk; low-risk ones are informational and don't affect the score. Vulnerability findings always count.

  • Data sovereignty: Penalized when data goes to external cloud APIs. High-risk library imports, calls to external providers, and hardcoded secrets all count.
  • Transparency: Penalized when AI usage is poorly documented or hard to audit. Undocumented model references, missing licenses, and low-confidence findings add up.
  • Security: Penalized by model file vulnerabilities, hardcoded credentials, and security findings. Severity (Critical, High, Medium, Low) determines penalty weight.
  • Autonomy: Penalized when autonomous AI agents show up. Agent frameworks, MCP servers, and tool-using agents increase this dimension's risk.
  • Supply chain: Penalized by external dependencies and third-party AI components. Libraries with restrictive licenses, many external providers, and RAG components add to it.
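The penalty model described above can be pictured as follows. The penalty values, field names, and weighting here are illustrative only, not the engine's actual numbers: each dimension starts at 100, low-risk inventory items are skipped, and the overall score is a weighted combination of dimension scores.

```python
def dimension_score(findings: list[dict]) -> int:
    """Illustrative penalty model: start at 100, subtract per finding.

    Inventory items (libraries, dependencies, API calls) only penalize
    at medium or high risk; vulnerability findings always count.
    Penalty values here are made up for illustration.
    """
    PENALTY = {"low": 2, "medium": 5, "high": 10}
    score = 100
    for f in findings:
        if f["kind"] == "inventory" and f["risk"] == "low":
            continue  # low-risk inventory is informational only
        score -= PENALTY[f["risk"]]
    return max(score, 0)


def overall_score(dim_scores: dict[str, int], weights: dict[str, float]) -> float:
    """Weighted combination of dimension scores (weights sum to 1.0)."""
    return sum(dim_scores[d] * weights[d] for d in dim_scores)
```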

Hover over any dimension bar in the breakdown card to see the top contributors to that dimension's penalties.

Grade scale

  • A (90 to 100): Excellent, minimal governance risk
  • B (75 to 89): Good, low governance risk
  • C (60 to 74): Moderate, some areas need attention
  • D (40 to 59): Poor, significant governance gaps
  • F (0 to 39): Critical, immediate action needed
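The grade scale above maps directly to score thresholds (the function name is illustrative):

```python
def letter_grade(score: int) -> tuple[str, str]:
    """Map an AGRS score (0-100) to its letter grade and label,
    following the grade scale: A 90+, B 75+, C 60+, D 40+, else F."""
    if score >= 90:
        return "A", "Excellent"
    if score >= 75:
        return "B", "Good"
    if score >= 60:
        return "C", "Moderate"
    if score >= 40:
        return "D", "Poor"
    return "F", "Critical"
```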

Calculating the score

On the scan details page, click Calculate risk score (or Recalculate score if one already exists). A progress dialog walks through each step. The score is saved with the scan and shows up on future visits.

You can recalculate whenever you want, for example after enabling LLM analysis or adjusting dimension weights. The old score gets replaced.

LLM-enhanced analysis

When enabled in AI Detection → Settings → Risk scoring, the scoring engine sends anonymized finding summaries to your configured LLM. The LLM returns:

  • Narrative summary: A written analysis of the repo's risk posture, with key concerns called out in bold
  • Recommendations: Specific steps to improve the score
  • Dimension adjustments: Score tweaks based on context that rule-based scoring alone would miss
  • Suggested risks: Risk suggestions you can add to your risk register
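The four outputs can be pictured as a structured response. The field names below are illustrative, not the product's actual schema:

```python
from dataclasses import dataclass, field


@dataclass
class SuggestedRisk:
    """One risk suggestion (illustrative shape)."""
    name: str          # short title for the risk
    dimension: str     # related AGRS dimension
    level: str         # likelihood and severity
    description: str   # what the risk is and what it could affect


@dataclass
class LlmAnalysis:
    """Illustrative shape of the LLM-enhanced analysis result."""
    narrative_summary: str
    recommendations: list[str] = field(default_factory=list)
    dimension_adjustments: dict[str, int] = field(default_factory=dict)
    suggested_risks: list[SuggestedRisk] = field(default_factory=list)
```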

The analysis section appears below the score cards. Click the chevron to expand or collapse it.

LLM analysis needs an LLM key configured in Settings → LLM keys. It uses your organization's own API key; no scan data is stored by the LLM provider.

Suggested risks

With LLM analysis on, the system may suggest specific risks based on what the scan found. These show up in a collapsible "Suggested risks" section below the analysis.

Each suggestion includes:

  • Risk name: Short title for the risk
  • Risk dimension: Which AGRS dimension it relates to
  • Risk level: Likelihood and severity
  • Description: What the risk is and what it could affect
  • Risk categories: Tags like Cybersecurity risk, Compliance risk, etc.

Adding a suggestion to the risk register

Click Add to risk register on any suggestion to open the risk form with pre-filled values (name, description, category, lifecycle phase, likelihood, severity, impact, and mitigation plan). Adjust as needed, then save.

The review notes field gets filled in automatically with a reference to the scan and the findings behind the suggestion.

Dismissing suggestions

Click Ignore on a suggestion to dismiss it. Pick a reason ("Not relevant" or "Already mitigated") from the dropdown. Dismissed suggestions are hidden but don't affect the score.

Cross-referencing findings

After a scan finishes, the system cross-references vulnerability findings with non-vulnerability findings (libraries, agents, security) in the same files. Matched findings get a teal "Cross-ref" badge in the Vulnerabilities tab, so you can see related detections together. For example, a prompt injection finding might be matched with a LangChain library finding in the same file.
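One way to picture the matching is grouping findings by file path. This is a sketch with illustrative field names, not the actual implementation:

```python
from collections import defaultdict


def cross_reference(vulns: list[dict], others: list[dict]) -> list[dict]:
    """Flag each vulnerability finding whose file also contains a
    non-vulnerability finding (library, agent, or security detection).

    Field names ("file", "cross_ref") are illustrative.
    """
    by_file = defaultdict(list)
    for f in others:
        by_file[f["file"]].append(f)
    return [{**v, "cross_ref": bool(by_file[v["file"]])} for v in vulns]
```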

Customizing dimension weights

Go to AI Detection → Settings → Risk scoring to adjust dimension weights. Use the sliders to shift weight between dimensions (they must total 100%). Click Save, then recalculate existing scores to apply the new weights.
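The constraint the sliders enforce can be sketched as a validation step (dimension names and the helper below are illustrative):

```python
def validate_weights(weights: dict[str, float]) -> None:
    """Raise if dimension weights don't total 100% (with a small
    floating-point tolerance). Illustrative helper, not the product API."""
    total = sum(weights.values())
    if abs(total - 100.0) > 1e-6:
        raise ValueError(f"weights total {total}%, must be 100%")


# Example: a valid split across the five dimensions (values are made up)
validate_weights({
    "data_sovereignty": 25, "transparency": 15,
    "security": 30, "autonomy": 15, "supply_chain": 15,
})
```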

Weight customization
Set weights to match what matters most to your organization. For example, bump Security if vulnerability management is the priority, or increase Data sovereignty if data residency is a regulatory concern.