
AI Detection settings

Configure GitHub tokens, LLM analysis, and dimension weights.

The Settings page has two tabs: GitHub integration, for repository access tokens, and Risk scoring, for LLM-enhanced analysis, dimension weights, and vulnerability detection types.

GitHub integration

To scan private repositories, you need a GitHub Personal Access Token. Without a token, AI Detection can only scan public repositories.

Creating a token

Click the Create a new token on GitHub link to open GitHub's token creation page with the recommended scopes pre-selected:

  • repo: Full access to private and public repositories. Required for scanning private repos.
  • public_repo: Access to public repositories only. Use this if you only need to scan public repos.

Saving your token

Paste your token into the Personal access token field. Optionally give it a descriptive name (e.g., "VerifyWise Scanner Token") to help identify it later. Click Test token to verify it works, then Save token to store it.

Managing your token

Once a token is configured, you'll see a status indicator showing it's active. You can update the token at any time by entering a new one and clicking Update token. To remove the token entirely, click the delete button.

Tokens are stored encrypted on the server and aren't exposed in the browser after saving.

Risk scoring

The Risk scoring tab controls how the AI Governance Risk Score (AGRS) is calculated for your scans.

LLM-enhanced analysis

Toggle LLM-enhanced analysis on to enable AI-powered scoring. The risk scoring engine will send anonymized finding summaries to your configured LLM, which produces a written analysis, recommendations, and suggested risks.

Select which LLM key to use from the dropdown. LLM keys are managed in Settings → LLM keys at the organization level. If no keys are configured, the dropdown shows a message directing you to set one up.

Without LLM enhancement, risk scores use rule-based analysis only. The score is still accurate but won't include written summaries, recommendations, or suggested risks.

Dimension weights

Use the sliders to control how much each risk dimension contributes to the overall score. The five dimensions are:

  • Data sovereignty: Weight for external data exposure and cloud API usage
  • Transparency: Weight for documentation quality and audit readiness
  • Security: Weight for vulnerabilities and credential exposure
  • Autonomy: Weight for autonomous AI agent detection
  • Supply chain: Weight for third-party dependencies and licensing

Weights must total 100%. A validation message shows if they don't. Click Reset to defaults to go back to the original distribution. After changing weights, click Save and recalculate existing scores to apply them.
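The exact AGRS formula is not documented on this page, but the weight rules above imply a weighted combination of per-dimension scores. The sketch below illustrates that idea under stated assumptions: the dimension names, the 0-100 score range, and the simple weighted average are illustrative, not VerifyWise's actual implementation.

```python
# Illustrative sketch of weight validation and a weighted overall score.
# Dimension names, score range (0-100), and the formula are assumptions.
DEFAULT_WEIGHTS = {  # percentages; the UI requires these to total 100
    "data_sovereignty": 20,
    "transparency": 20,
    "security": 20,
    "autonomy": 20,
    "supply_chain": 20,
}

def validate_weights(weights: dict[str, int]) -> bool:
    """Mirror the UI rule: dimension weights must sum to exactly 100%."""
    return sum(weights.values()) == 100

def overall_score(dimension_scores: dict[str, float],
                  weights: dict[str, int] = DEFAULT_WEIGHTS) -> float:
    """Weighted average of per-dimension scores (each assumed 0-100)."""
    if not validate_weights(weights):
        raise ValueError("Dimension weights must total 100%")
    return sum(dimension_scores[d] * w for d, w in weights.items()) / 100
```

Under this model, raising one slider shifts influence toward that dimension, which is why existing scores must be recalculated after a weight change.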

Vulnerability type toggles

When LLM-enhanced analysis is on, a Vulnerability detection section appears with toggles for each OWASP LLM Top 10 type. Turn individual types on or off based on what matters to your team:

  • Prompt injection (LLM01): Detect untrusted input concatenated into LLM prompts
  • Insecure output handling (LLM02): Detect LLM output passed to dangerous sinks
  • Training data poisoning (LLM03): Detect insecure model deserialization and untrusted sources
  • Model denial of service (LLM04): Detect missing token limits and timeouts
  • Supply chain (LLM05): Detect unpinned versions and untrusted model URLs
  • Sensitive info disclosure (LLM06): Detect PII and credentials passed to LLM context
  • Insecure plugin design (LLM07): Detect tools without input validation or schemas
  • Excessive agency (LLM08): Detect agents with overly broad access and no human oversight
  • Overreliance (LLM09): Detect missing human review and confidence thresholds
  • Model theft (LLM10): Detect exposed model files and unauthenticated endpoints

Disabled types are skipped during LLM analysis, which cuts scan time and API costs. The regex pre-filter still runs for all types regardless.
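The two-stage flow described above, where a cheap regex pre-filter runs for every type but only toggled-on types reach the metered LLM stage, can be sketched as follows. The pattern table and function names are illustrative assumptions, and the two example patterns are far simpler than real detection rules would be.

```python
# Sketch of the two-stage flow: regex pre-filter runs for ALL types,
# but only findings for enabled types are forwarded to LLM analysis.
# Patterns and names are illustrative, not the product's actual rules.
import re

PREFILTER_PATTERNS = {
    # LLM01: untrusted input interpolated directly into a prompt string
    "LLM01": re.compile(r'f".*\{user_input\}.*"'),
    # LLM06: credential-like assignments that could leak into LLM context
    "LLM06": re.compile(r"(api[_-]?key|password)\s*="),
}

def scan(source: str, enabled_types: set[str]) -> tuple[list[str], list[str]]:
    """Return (all regex hits, the subset forwarded to the LLM stage)."""
    hits = [t for t, pat in PREFILTER_PATTERNS.items() if pat.search(source)]
    forwarded = [t for t in hits if t in enabled_types]
    return hits, forwarded
```

In this model, disabling a type never hides the cheap regex signal; it only skips the slower, per-call-billed LLM confirmation step for that type.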

Reducing noise

If certain types keep producing false positives for your codebase, turn them off here so you can focus on what's actually relevant.