One layer between your organization and every LLM provider. Cost tracking, guardrails and risk detection connected to your compliance frameworks.
The challenge
Organizations deploy LLMs across dozens of use cases, but most have no central layer for security, cost control or compliance evidence.
No visibility into which teams use which models or how much they spend
PII and sensitive data flowing to LLM providers without detection
No audit trail connecting AI usage to regulatory obligations
Budget overruns from uncontrolled API usage across departments
Track every request, every token, every dollar across every provider. Real-time dashboards show cost, error rates, guardrail detections and token usage.
PII detection catches personal data before it reaches an LLM. Content filters block harmful or off-policy prompts. Guardrails scan both requests and responses.
VerifyWise tracks spend and maps it to your EU AI Act obligations, ISO 42001 controls and NIST AI RMF requirements.
How it works
Every AI request passes through security, budget and governance checks before reaching the provider. Every response is scanned before returning to your application.
In action
PII detection and content filtering on every request
8 automated conditions evaluated daily
Every request logged with full payload and cost
Why VerifyWise
| What you get | Typical AI gateway | VerifyWise |
|---|---|---|
| LLM routing | Yes | Yes |
| Cost tracking | Yes | Yes |
| Guardrails | Some | PII + content filters with block/mask |
| Budget controls | Some | Auto-block + email alerts |
| Risk detection | — | 8 automated conditions, daily |
| Compliance mapping | — | EU AI Act, ISO 42001, NIST AI RMF |
| Risk register integration | — | Risks surface as suggestions |
| Audit evidence | — | Every request is evidence |
| Global cache | Some | Up to 30% cost reduction |
| Open source | Varies | Yes, fully open source |
Integration
Configure endpoints, test in the playground, manage guardrails and budgets. No code required.
Issue virtual keys. Developers point their existing OpenAI SDK at the gateway URL. Zero code changes.
Route backend LLM calls through the gateway. One endpoint, all providers, full logging.
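Because the gateway speaks the standard OpenAI chat-completions format, pointing an application at it is just a URL and key swap. The sketch below builds such a request with the Python standard library; the gateway URL and virtual key are placeholders, not real values.

```python
import json
import urllib.request

# Hypothetical values: your gateway URL and a virtual key issued by VerifyWise.
GATEWAY_URL = "https://gateway.example.com/v1/chat/completions"
VIRTUAL_KEY = "vk-team-marketing-001"

# The body is standard OpenAI chat-completions format, so any
# OpenAI-compatible SDK produces an equivalent payload.
payload = {
    "model": "gpt-4o-mini",  # mapped to a provider by the gateway endpoint
    "messages": [{"role": "user", "content": "Summarize our Q3 AI spend."}],
}

request = urllib.request.Request(
    GATEWAY_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": f"Bearer {VIRTUAL_KEY}",  # virtual key, not a provider key
        "Content-Type": "application/json",
    },
    method="POST",
)
# urllib.request.urlopen(request) would send it; omitted here because the
# gateway URL above is a placeholder.
```

With an SDK, the same switch is typically a single `base_url` parameter pointing at the gateway instead of the provider.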
Providers
OpenAI-compatible format. Swap providers without changing a line of code.
+ OpenRouter, Fireworks AI, DeepInfra, xAI, Together AI, and any OpenAI-compatible endpoint
Benefits
Key advantages for your AI governance program
One API for OpenAI, Anthropic, Google, Mistral, Azure, Bedrock and 100+ providers
PII detection and content filters scan every request and response
Budgets auto-block when exhausted. Email alerts fire before you hit the limit.
8 risk conditions evaluated daily, mapped to EU AI Act and ISO 42001
Capabilities
Core functionality of AI Gateway
One API for 100+ providers in OpenAI-compatible format. Swap providers without changing code. Failover chains route around downtime on their own.
PII detection powered by Microsoft Presidio. Custom content filters with regex and keyword rules. Scans both requests and responses. Block or mask, configurable per rule.
See what you spend, where and on which models. Monthly budgets auto-block when exhausted. Cost breakdowns by provider, endpoint, model and time period.
Give developers and teams scoped API keys with their own rate limits and budget caps. They use any OpenAI-compatible SDK, you keep the oversight.
Store system prompts centrally. Version every change with full diff history. Test prompts against datasets before deploying. Compare versions side by side.
8 risk conditions evaluated daily: PII exposure, endpoints without guardrails, single-provider concentration, budget exhaustion, guardrail spike trends, missing audit trails, no system prompts and unusual cost patterns.
Test any endpoint with any model before routing production traffic. Adjust temperature, max tokens, and system prompts. See responses in real time.
Every request logged with full payload, latency, token count, cost and guardrail outcome. Searchable, filterable, exportable. Your auditors get the evidence they need without asking.
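The block-or-mask behavior described above can be illustrated with a minimal sketch. VerifyWise's actual PII detection runs on Microsoft Presidio; the two regex rules and function below are simplified stand-ins, not the product's implementation.

```python
import re

# Illustrative only: real detection uses Microsoft Presidio's entity
# recognizers. Two simple regex rules stand in for them here.
PII_RULES = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "US_SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scan(text: str, mode: str = "mask") -> tuple[str, list[str]]:
    """Return (possibly redacted text, entity types found).

    mode="mask" replaces matches with a placeholder;
    mode="block" rejects the request outright.
    """
    found = []
    for entity, pattern in PII_RULES.items():
        if pattern.search(text):
            found.append(entity)
            if mode == "mask":
                text = pattern.sub(f"<{entity}>", text)
    if found and mode == "block":
        raise ValueError(f"Request blocked: PII detected ({', '.join(found)})")
    return text, found

masked, entities = scan("Contact jane@example.com about SSN 123-45-6789")
# masked == "Contact <EMAIL> about SSN <US_SSN>"
```

In the gateway, the same scan runs on both the outgoing request and the provider's response, with block-versus-mask configurable per rule.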
Why VerifyWise
What makes our approach different
Risk conditions map to EU AI Act, ISO 42001 and NIST AI RMF. VerifyWise tracks spend and connects it to your regulatory obligations.
Detected risks surface as suggestions in your existing risk register. Accept, dismiss or escalate with full justification tracking.
VerifyWise logs every request for auditors: full payload, latency, token count, cost and guardrail outcome.
Regulatory context
The AI Gateway enforces technical controls that directly address regulatory requirements across multiple frameworks.
Risk management systems must identify and mitigate risks throughout the AI lifecycle. The gateway's eight automated risk conditions provide continuous risk identification and surface findings in the risk register.
High-risk AI systems must enable automatic recording of events (logs). The gateway's complete audit trail logs every request with payload, cost, and guardrail outcomes.
Organizations must determine risks and opportunities related to AI systems. Automated risk detection identifies eight risk conditions daily, each mapped to specific ISO 42001 controls.
Technical details
Implementation details and technical capabilities
OpenAI-compatible API format: point any SDK at the gateway URL with zero code changes
Failover chains: if Provider A fails, the gateway automatically routes to Provider B
PII detection powered by Microsoft Presidio with configurable entity types
Request flow: Application → Guardrail scan → Budget check → Rate limit → Provider → Response scan → Cost log → Risk evaluation → Return
Virtual keys: scoped API keys with per-key rate limits, budget caps, and usage tracking
Prompt versioning: full diff history with side-by-side comparison and dataset testing
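The request flow above can be sketched as a chain of checks that each either pass the request along or block it. Function names, field names and limits below are illustrative assumptions, not the gateway's actual internals.

```python
# Simplified sketch of the request flow; the guardrail, budget and
# rate-limit rules here are toy stand-ins for the real checks.
class Blocked(Exception):
    pass

def handle(request: dict, key_state: dict) -> dict:
    # 1. Guardrail scan on the request (PII / content filters).
    if "ssn" in request["prompt"].lower():
        raise Blocked("guardrail: PII detected")
    # 2. Budget check: auto-block when the virtual key's budget is exhausted.
    if key_state["spent_usd"] >= key_state["budget_usd"]:
        raise Blocked("budget exhausted")
    # 3. Rate limit per virtual key.
    if key_state["requests_this_minute"] >= key_state["rate_limit"]:
        raise Blocked("rate limit exceeded")
    # 4. Forward to the provider (stubbed here).
    response = {"text": "ok", "cost_usd": 0.002}
    # 5. Response scan would run here; cost is logged against the key,
    #    and risk evaluation runs separately on the accumulated logs.
    key_state["spent_usd"] += response["cost_usd"]
    key_state["requests_this_minute"] += 1
    return response

state = {"spent_usd": 9.99, "budget_usd": 10.0,
         "requests_this_minute": 0, "rate_limit": 60}
out = handle({"prompt": "Hello"}, state)
```

Ordering matters: guardrails run before the budget check so a blocked prompt never spends money, and the provider is only reached once every check has passed.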
FAQ
Frequently asked questions about AI Gateway
Each of the 8 risk conditions maps to specific EU AI Act articles and ISO 42001 clauses. When a condition triggers, it surfaces as a suggestion in your risk register with the regulatory reference. You can accept it as a tracked risk or dismiss it with justification.
No. Developers use virtual keys with any OpenAI-compatible SDK. They point their SDK at the gateway URL and use their virtual key. All governance happens transparently at the gateway layer.
The gateway auto-blocks requests for that virtual key or endpoint. You receive an email alert before the limit is hit so you can adjust. Budget controls are configurable per virtual key, endpoint, or globally.
Yes. Any OpenAI-compatible endpoint can be added as a custom provider. The gateway routes requests to it with the same guardrails, logging and budget controls as built-in providers.
About 5 minutes. Add a provider API key in Settings, create an endpoint that maps to a model, generate a virtual key, and point your SDK at the gateway URL. You're live.
Email addresses, phone numbers (US and international), credit cards, person names (NLP-based), IBANs, Turkish national IDs, US Social Security numbers and EU phone formats. All scanning runs within your infrastructure.
Yes. Each endpoint maps a URL slug to a provider and model. Applications call the slug. Swap the underlying model in the gateway UI and all traffic routes to the new model instantly.
You can set a fallback endpoint on any primary endpoint. If the primary provider returns errors, the gateway routes traffic to the fallback with no downtime for your application.
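The fallback behavior reduces to a try-the-primary, retry-the-fallback pattern. The sketch below is a minimal illustration under that assumption; the provider functions are hypothetical stubs.

```python
# Hypothetical sketch of fallback routing: if the primary endpoint errors,
# the gateway retries the same request against the configured fallback.
def call_with_fallback(request, primary, fallback):
    try:
        return primary(request)
    except Exception:
        # Primary provider returned an error; route to the fallback endpoint.
        return fallback(request)

def flaky_primary(req):
    raise RuntimeError("provider 5xx")

def healthy_fallback(req):
    return {"text": "response from fallback", "provider": "fallback"}

result = call_with_fallback({"prompt": "hi"}, flaky_primary, healthy_fallback)
```

From the application's point of view nothing changes: the same endpoint returns a response either way.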
Prompts are versioned lists of messages with variable placeholders. The editor shows a side-by-side view: messages on the left, test chat on the right. You can test with real endpoints before publishing.
More from AI tools
Other features in the AI tools pillar
See how VerifyWise can help you govern AI with confidence.