AI tools pillar

Know what AI is actually in your codebase

Scan GitHub repositories to detect AI/ML libraries, API calls, secrets, model references, RAG components, and AI agents—with security scanning and EU AI Act mapping.

AI detection screenshot

The challenge

You can't govern AI you don't know exists

Organizations have AI/ML libraries scattered across repositories, often introduced by individual developers without oversight. Cloud API keys get committed to code. Third-party models get downloaded without security review. Without visibility into what AI is actually deployed, governance is incomplete.

Developers add AI libraries without going through governance processes

API keys and secrets for AI services get hardcoded in repositories

No inventory of which AI technologies are actually in use

Serialized model files may contain security vulnerabilities

Regulators ask for an AI Bill of Materials and you can't produce one

100+ AI technologies
8 finding types
15+ languages
3 risk levels

Benefits

Why use AI detection?

Key advantages for your AI governance program

Scan public and private GitHub repositories

Detect 100+ AI/ML libraries and frameworks

Identify exposed secrets and security threats

Export AI Bill of Materials (AI-BOM) for compliance

Capabilities

What you can do

Core functionality of AI detection

Repository scanning

Clone and scan GitHub repositories to detect AI/ML usage across 15+ programming languages and dependency files.
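As a rough illustration of the dependency-manifest side of such a scan, the sketch below checks a requirements.txt-style file against a tiny, assumed subset of AI/ML package names; the real scanner covers 100+ technologies and manifest formats across 15+ languages.

```python
# Minimal sketch of dependency-file scanning. AI_PACKAGES is a small
# illustrative subset, not the product's actual pattern set.
import re

AI_PACKAGES = {"openai", "anthropic", "langchain", "torch", "tensorflow"}

def scan_requirements(text: str) -> list[str]:
    """Return AI/ML packages found in a requirements.txt-style manifest."""
    findings = []
    for line in text.splitlines():
        line = line.split("#")[0].strip()  # drop trailing comments
        if not line:
            continue
        # package name ends at the first version specifier, extra, or space
        name = re.split(r"[=<>!~\[ ]", line, maxsplit=1)[0].lower()
        if name in AI_PACKAGES:
            findings.append(name)
    return findings

reqs = "requests==2.31\nopenai>=1.0  # LLM client\ntorch\n"
print(scan_requirements(reqs))  # ['openai', 'torch']
```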

8 finding types

Detect libraries, dependencies, API calls, secrets, model references, RAG components, AI agents, and model security threats.
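One of those finding types, exposed secrets, can be approximated with key-prefix patterns. The regexes below reflect publicly documented formats (OpenAI keys begin with "sk-", Hugging Face tokens with "hf_") and are illustrative only, not the product's detection rules.

```python
# Hedged sketch of secret detection via key-prefix regexes.
import re

SECRET_PATTERNS = {
    "openai_key":    re.compile(r"\bsk-[A-Za-z0-9_-]{20,}\b"),
    "anthropic_key": re.compile(r"\bsk-ant-[A-Za-z0-9_-]{20,}\b"),
    "hf_token":      re.compile(r"\bhf_[A-Za-z0-9]{20,}\b"),
}

def find_secrets(text: str) -> list[str]:
    """Return the names of secret patterns matched in a source snippet."""
    return [name for name, rx in SECRET_PATTERNS.items() if rx.search(text)]

snippet = 'OPENAI_API_KEY = "sk-XXXXXXXXXXXXXXXXXXXXXXXX"'
print(find_secrets(snippet))  # ['openai_key']
```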

Security analysis

Scan serialized model files for threats with CWE and OWASP ML mapping. Identify unsafe deserialization and code injection risks.

Governance workflows

Review, approve, or flag findings. Track governance status across your AI detection results.

Enterprise example

How an organization discovered ungoverned AI across 200+ repositories

See how organizations use this capability in practice

The challenge

An organization preparing for EU AI Act compliance realized they had no comprehensive view of AI usage across their engineering teams. Different teams had adopted various AI libraries and cloud services independently. When asked to produce an AI inventory for auditors, they had no way to generate one.

The solution

They scanned their GitHub organization's repositories using the AI detection system. The scan identified OpenAI API usage in 15 repositories, exposed API keys in 3 repositories, LangChain implementations in 8 repositories, and various ML frameworks across the codebase.

The outcome

The organization now has a complete AI Bill of Materials across their codebase. Exposed secrets were rotated immediately. Previously unknown AI implementations were added to the governance registry. New scans run on a schedule to catch new AI additions before they reach production.

Why VerifyWise

Complete visibility into your AI footprint

What makes our approach different

100+ AI technologies detected

Pattern matching for cloud AI providers (OpenAI, Anthropic, Azure AI), ML frameworks (PyTorch, TensorFlow), agent frameworks (LangChain, CrewAI), and vector databases (Pinecone, Weaviate).
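This kind of pattern matching can be sketched as a table of regexes applied to source text. The patterns below are simplified assumptions for four of the technologies named above, not the actual detection rules.

```python
# Illustrative source-pattern matching for AI technologies.
import re

PATTERNS = {
    "openai":    re.compile(r"\bfrom openai import|\bimport openai\b"),
    "anthropic": re.compile(r"\bimport anthropic\b"),
    "langchain": re.compile(r"\bfrom langchain"),
    "pinecone":  re.compile(r"\bimport pinecone\b"),
}

def detect(source: str) -> set[str]:
    """Return the technologies whose patterns match the given source text."""
    return {tech for tech, rx in PATTERNS.items() if rx.search(source)}

code = "from openai import OpenAI\nfrom langchain.chains import LLMChain\n"
print(sorted(detect(code)))  # ['langchain', 'openai']
```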

8 finding types for complete coverage

Libraries, dependencies, API calls, secrets, model references, RAG components, AI agents, and model security threats. Nothing slips through.

Risk-based classification

High risk for cloud APIs and exposed secrets. Medium risk for configurable frameworks. Low risk for local processing. Prioritize what matters.
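A minimal sketch of this triage, using the finding-type identifiers the scanner emits and an assumed type-to-risk mapping (the exact mapping here is illustrative):

```python
# Sketch of risk triage: assign a level per finding type, surface high first.
RISK_BY_TYPE = {
    "api_call": "high", "secret": "high", "agent": "high",
    "library": "medium", "rag_component": "medium",
    "model_ref": "low",
}

def triage(findings: list[dict]) -> list[dict]:
    """Annotate findings with a risk level and sort high-risk items first."""
    order = {"high": 0, "medium": 1, "low": 2}
    for f in findings:
        f["risk"] = RISK_BY_TYPE.get(f["type"], "low")
    return sorted(findings, key=lambda f: order[f["risk"]])

findings = [{"type": "model_ref"}, {"type": "secret"}, {"type": "library"}]
print([f["type"] for f in triage(findings)])  # ['secret', 'library', 'model_ref']
```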

Governance workflow built-in

Review findings, approve known-good components, flag items for follow-up. Track governance status across all detected AI usage.

Regulatory context

AI discovery supports compliance documentation

AI regulations require organizations to know what AI systems they deploy. Repository scanning provides evidence of AI inventory completeness and helps identify ungoverned AI usage.

EU AI Act Article 9

Risk management requires knowing what AI systems exist. High-risk library findings map to risk management requirements.

EU AI Act Article 13

Transparency requirements need visibility into AI usage. API call findings help identify where AI interfaces with users.

EU AI Act Article 15

Cybersecurity requirements include supply chain security. Model security scanning identifies threats in serialized models.

Technical details

How it works

Implementation details and technical capabilities

100+ AI/ML technology patterns: OpenAI, Anthropic, LangChain, PyTorch, TensorFlow, HuggingFace, and more

8 finding types: library, dependency, api_call, secret, model_ref, rag_component, agent, model_security

15+ languages: Python, JavaScript, TypeScript, Java, Go, Ruby, Rust, C/C++, C#, Scala, Kotlin, Swift, R, Julia

3 risk levels: High (cloud APIs, secrets, agents), Medium (frameworks, RAG), Low (local ML libraries)

3 confidence levels: High (definitive AI/ML), Medium (likely AI/ML), Low (possibly AI/ML)

Model security scanning for .pt, .pth, .onnx, .h5, .safetensors with CWE-502, CWE-94, CWE-913 detection

EU AI Act compliance mapping: Articles 9, 10, 13, 14, 15 based on finding categories
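To make the export concrete, here is a hedged sketch of serializing findings into an AI-BOM document; the field names and structure are assumptions for illustration, not VerifyWise's actual schema.

```python
# Hypothetical AI-BOM export: field names are illustrative assumptions.
import json

def to_aibom(findings: list[dict], repo: str) -> str:
    """Serialize detection findings into a JSON AI-BOM document."""
    bom = {
        "repository": repo,
        "components": [
            {"name": f["name"], "type": f["type"], "risk": f["risk"],
             "euAiActArticles": f.get("articles", [])}
            for f in findings
        ],
    }
    return json.dumps(bom, indent=2)

findings = [{"name": "openai", "type": "api_call", "risk": "high",
             "articles": ["Art. 13"]}]
print(to_aibom(findings, "org/service"))
```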

Supported frameworks

EU AI Act

Integrations

GitHub
Model Inventory
Risk Management

FAQ

Common questions

Frequently asked questions about AI detection

Ready to get started?

See how VerifyWise can help you govern AI with confidence.

AI detection | AI Governance Platform | VerifyWise