Scan GitHub repositories to detect AI/ML libraries, API calls, secrets, model references, RAG components, and AI agents—with security scanning and EU AI Act mapping.

The challenge
Organizations have AI/ML libraries scattered across repositories, often introduced by individual developers without oversight. Cloud API keys get committed to code. Third-party models get downloaded without security review. Without visibility into what AI is actually deployed, governance is incomplete.
Developers add AI libraries without going through governance processes
API keys and secrets for AI services get hardcoded in repositories
No inventory of which AI technologies are actually in use
Serialized model files may contain security vulnerabilities
Regulators ask for an AI Bill of Materials and you can't produce one
Benefits
Key advantages for your AI governance program
Scan public and private GitHub repositories
Detect 100+ AI/ML libraries and frameworks
Identify exposed secrets and security threats
Export AI Bill of Materials (AI-BOM) for compliance
Capabilities
Core functionality of AI detection
Clone and scan GitHub repositories to detect AI/ML usage across 15+ programming languages and in common dependency files.
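A minimal sketch of how the dependency-file pass can work, in Python. The scan_dependencies helper and the short package list here are illustrative stand-ins, not the product's actual API; a production catalog covers far more patterns and ecosystems.

import json
from pathlib import Path

# Illustrative subset of AI/ML package names; a real catalog
# holds 100+ patterns across ecosystems.
AI_PACKAGES = {"openai", "anthropic", "langchain", "torch",
               "tensorflow", "transformers", "pinecone-client"}

def scan_dependencies(repo_root: str) -> list[dict]:
    """Flag AI/ML dependencies declared in a cloned repository."""
    findings = []
    root = Path(repo_root)

    # Python: requirements.txt and variants
    for req in root.rglob("requirements*.txt"):
        for raw in req.read_text(errors="ignore").splitlines():
            # Strip comments, version pins, and extras markers.
            name = (raw.split("#")[0].split("==")[0].split(">=")[0]
                       .split("[")[0].strip().lower())
            if name in AI_PACKAGES:
                findings.append({"type": "dependency", "name": name,
                                 "file": str(req.relative_to(root))})

    # JavaScript/TypeScript: package.json
    for pkg in root.rglob("package.json"):
        try:
            manifest = json.loads(pkg.read_text(errors="ignore"))
        except json.JSONDecodeError:
            continue
        deps = {**manifest.get("dependencies", {}),
                **manifest.get("devDependencies", {})}
        for name in deps:
            if name.lower() in AI_PACKAGES:
                findings.append({"type": "dependency", "name": name,
                                 "file": str(pkg.relative_to(root))})
    return findings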
Detect libraries, dependencies, API calls, secrets, model references, RAG components, AI agents, and model security threats.
Scan serialized model files for threats with CWE and OWASP ML mapping. Identify unsafe deserialization and code injection risks.
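For pickle-based formats, one common technique (used by open-source scanners such as picklescan) is to walk the pickle opcode stream and flag imports and callable invocations, which correspond to CWE-502 and CWE-94. A sketch under the assumption that the inner pickle has already been extracted (PyTorch .pt and .pth files are zip archives); the module allowlist is illustrative:

import pickletools

SAFE_MODULES = {"torch", "collections", "numpy"}  # illustrative allowlist

def scan_pickle_bytes(data: bytes) -> list[str]:
    """Flag pickle opcodes that can execute code at load time."""
    threats = []
    for opcode, arg, _pos in pickletools.genops(data):
        if opcode.name == "GLOBAL":
            # arg looks like "module qualname"
            module = str(arg).split()[0]
            if module.split(".")[0] not in SAFE_MODULES:
                threats.append(f"CWE-502: suspicious import {arg!r}")
        elif opcode.name == "STACK_GLOBAL":
            threats.append("CWE-502: dynamic import via STACK_GLOBAL")
        elif opcode.name in {"REDUCE", "INST", "OBJ"}:
            threats.append(f"CWE-94: {opcode.name} invokes a callable")
    return threats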
Review, approve, or flag findings. Track governance status across your AI detection results.
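The review workflow can be pictured as a status field on each finding. The schema and triage helper below are hypothetical, shown only to make the approve/flag loop concrete:

from dataclasses import dataclass

@dataclass
class Finding:
    """One scan result plus its governance status (hypothetical schema)."""
    finding_type: str    # e.g. "secret", "dependency"
    name: str
    file: str
    status: str = "new"  # new -> approved | flagged

def triage(findings: list[Finding], approved_components: set[str]) -> None:
    """Auto-approve known-good components; flag exposed secrets for follow-up."""
    for f in findings:
        if f.name in approved_components:
            f.status = "approved"
        elif f.finding_type == "secret":
            f.status = "flagged"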
Enterprise example
See how organizations use this capability in practice
An organization preparing for EU AI Act compliance realized they had no comprehensive view of AI usage across their engineering teams. Different teams had adopted various AI libraries and cloud services independently. When asked to produce an AI inventory for auditors, they had no way to generate one.
They scanned their GitHub organization's repositories using the AI detection system. The scan identified OpenAI API usage in 15 repositories, exposed API keys in 3 repositories, LangChain implementations in 8 repositories, and various ML frameworks across the codebase.
The organization now has a complete AI Bill of Materials across their codebase. Exposed secrets were rotated immediately. Previously unknown AI implementations were added to the governance registry. New scans run on a schedule to catch new AI additions before they reach production.
Why VerifyWise
What makes our approach different
Pattern matching for cloud AI providers (OpenAI, Anthropic, Azure AI), ML frameworks (PyTorch, TensorFlow), agent frameworks (LangChain, CrewAI), and vector databases (Pinecone, Weaviate).
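A sketch of what such pattern matching can look like. The regexes and the key-prefix rules (sk- for OpenAI, sk-ant- for Anthropic, per those providers' published key formats) are simplified illustrations; real catalogs are larger and tuned per language:

import re

# Illustrative signatures only; key prefixes may change over time.
PATTERNS = {
    "api_call": [re.compile(r"\bimport\s+openai\b"),
                 re.compile(r"\bimport\s+anthropic\b"),
                 re.compile(r"\bazure\.ai\.\w+")],
    "agent": [re.compile(r"\bfrom\s+(langchain|crewai)\b")],
    "rag_component": [re.compile(r"\bimport\s+(pinecone|weaviate)\b")],
    "secret": [re.compile(r"\bsk-(ant-)?[A-Za-z0-9_\-]{20,}")],
}

def classify_line(line: str) -> list[str]:
    """Return every finding type whose patterns match this source line."""
    return [ftype for ftype, regexes in PATTERNS.items()
            if any(rx.search(line) for rx in regexes)]

For example, classify_line("import openai") returns ["api_call"], while a line containing a hardcoded key matching the sk- prefix pattern is classified as a secret.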
Eight finding types in a single scan: libraries, dependencies, API calls, secrets, model references, RAG components, AI agents, and model security threats.
High risk for cloud APIs and exposed secrets. Medium risk for configurable frameworks. Low risk for local processing. Prioritize what matters.
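One way to realize these tiers is a default mapping from finding type to risk, downgraded for libraries known to process data locally. The names and tier assignments below are assumptions for illustration; real scoring also weighs context such as file location and match confidence:

# Illustrative defaults mirroring the tiers above.
DEFAULT_RISK = {
    "secret": "high", "api_call": "high", "agent": "high",  # cloud/exposure
    "model_security": "high",                               # assumed tier
    "library": "medium", "dependency": "medium",            # configurable
    "rag_component": "medium", "model_ref": "medium",
}
LOCAL_ONLY = {"scikit-learn", "xgboost"}  # assumed local-processing libraries

def risk_for(finding_type: str, name: str = "") -> str:
    """Return the default risk tier, downgrading local-only libraries."""
    if name.lower() in LOCAL_ONLY:
        return "low"
    return DEFAULT_RISK.get(finding_type, "low")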
Review findings, approve known-good components, flag items for follow-up. Track governance status across all detected AI usage.
Regulatory context
AI regulations require organizations to know what AI systems they deploy. Repository scanning provides evidence of AI inventory completeness and helps identify ungoverned AI usage.
Risk management (Article 9) requires knowing what AI systems exist. High-risk library findings map to risk management requirements.
Transparency obligations (Article 13) need visibility into AI usage. API call findings help identify where AI interfaces with users.
Cybersecurity requirements (Article 15) include supply chain security. Model security scanning identifies threats in serialized models.
Technical details
Implementation details and technical capabilities
100+ AI/ML technology patterns: OpenAI, Anthropic, LangChain, PyTorch, TensorFlow, HuggingFace, and more
8 finding types: library, dependency, api_call, secret, model_ref, rag_component, agent, model_security
15+ languages: Python, JavaScript, TypeScript, Java, Go, Ruby, Rust, C/C++, C#, Scala, Kotlin, Swift, R, Julia
3 risk levels: High (cloud APIs, secrets, agents), Medium (frameworks, RAG), Low (local ML libraries)
3 confidence levels: High (definitive AI/ML), Medium (likely AI/ML), Low (possibly AI/ML)
Model security scanning for .pt, .pth, .onnx, .h5, .safetensors with CWE-502, CWE-94, CWE-913 detection
EU AI Act compliance mapping: Articles 9, 10, 13, 14, 15 based on finding categories
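The article mapping can be sketched as a lookup from finding category to the articles noted above (Art. 9 risk management, Art. 10 data governance, Art. 13 transparency, Art. 14 human oversight, Art. 15 accuracy, robustness and cybersecurity). VerifyWise's exact rules are not reproduced here; this only illustrates the shape of such a mapping:

# Hypothetical category-to-article lookup for the AI-BOM export.
EU_AI_ACT_ARTICLES = {
    "library": ("Art. 9",),
    "dependency": ("Art. 9",),
    "api_call": ("Art. 13",),
    "agent": ("Art. 14",),
    "rag_component": ("Art. 10",),
    "model_ref": ("Art. 10", "Art. 15"),
    "secret": ("Art. 15",),
    "model_security": ("Art. 15",),
}

def implicated_articles(findings: list[dict]) -> set[str]:
    """Aggregate the articles touched by a scan's findings."""
    return {a for f in findings
            for a in EU_AI_ACT_ARTICLES.get(f["type"], ())}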
See how VerifyWise can help you govern AI with confidence.