The AI gateway built for regulated industries.

One layer between your organization and every LLM provider. Cost tracking, guardrails and risk detection connected to your compliance frameworks.

The challenge

Your AI traffic is ungoverned

Organizations deploy LLMs across dozens of use cases, but most have no central layer for security, cost control or compliance evidence.

No visibility into which teams use which models or how much they spend

PII and sensitive data flowing to LLM providers without detection

No audit trail connecting AI usage to regulatory obligations

Budget overruns from uncontrolled API usage across departments

100+ supported providers
2,500+ models in catalog
8 risk conditions

See everything

Track every request, every token, every dollar across every provider. Real-time dashboards show cost, error rates, guardrail detections and token usage.

Stop what shouldn't go through

PII detection catches personal data before it reaches an LLM. Content filters block harmful or off-policy prompts. Guardrails scan both requests and responses.

Connect AI usage to compliance

VerifyWise tracks spend and maps it to your EU AI Act obligations, ISO 42001 controls and NIST AI RMF requirements.

How it works

Request flow through the gateway

Every AI request passes through security, budget and governance checks before reaching the provider. Every response is scanned before returning to your application.

In action

Governance controls, visualized

Guardrails

PII detection and content filtering on every request

Risk detection

8 automated conditions evaluated daily

Audit trail

Every request logged with full payload and cost

Why VerifyWise

Other gateways route your AI traffic. VerifyWise governs it.

What you get              | Typical AI gateway | VerifyWise
LLM routing               | Yes                | Yes
Cost tracking             | Yes                | Yes
Guardrails                | Some               | PII + content filters with block/mask
Budget controls           | Some               | Auto-block + email alerts
Risk detection            | —                  | 8 automated conditions, daily
Compliance mapping        | —                  | EU AI Act, ISO 42001, NIST AI RMF
Risk register integration | —                  | Risks surface as suggestions
Audit evidence            | —                  | Every request is evidence
Global cache              | Some               | Up to 30% cost reduction
Open source               | Varies             | Yes, fully open source

Integration

Three ways to connect

Through the UI

Configure endpoints, test in the playground, manage guardrails and budgets. No code required.

Direct SDK access

Issue virtual keys. Developers point their existing OpenAI SDK at the gateway URL. Zero code changes.
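Pointing an SDK at the gateway amounts to changing the base URL and swapping a provider key for a virtual key, while the request body stays in OpenAI-compatible format. A minimal stdlib-only sketch; the gateway URL and virtual key below are hypothetical placeholders:

```python
import json

def build_chat_request(gateway_url: str, virtual_key: str,
                       model: str, messages: list) -> tuple:
    """Build an OpenAI-compatible chat completion request aimed at the gateway."""
    url = f"{gateway_url.rstrip('/')}/chat/completions"
    headers = {
        "Authorization": f"Bearer {virtual_key}",  # virtual key, not a provider key
        "Content-Type": "application/json",
    }
    body = json.dumps({"model": model, "messages": messages}).encode()
    return url, headers, body

url, headers, body = build_chat_request(
    "https://gateway.example.com/v1",   # hypothetical gateway URL
    "vk-team-a-123",                    # hypothetical virtual key
    "gpt-4o",
    [{"role": "user", "content": "Hello"}],
)
print(url)  # https://gateway.example.com/v1/chat/completions
```

Because only the URL and key change, any OpenAI-compatible SDK works unmodified against the same endpoint.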

Application integration

Route backend LLM calls through the gateway. One endpoint, all providers, full logging.

Providers

One API. Every major LLM provider.

OpenAI-compatible format. Swap providers without changing a line of code.

OpenAI
Anthropic
Google
Meta
Mistral
Cohere
HuggingFace
Groq
Perplexity
Stability
Claude
Gemini
Grok
Nvidia
Azure
AWS
Replicate

+ OpenRouter, Fireworks AI, DeepInfra, xAI, Together AI, and any OpenAI-compatible endpoint

Benefits

Why use AI Gateway?

Key advantages for your AI governance program

One API for OpenAI, Anthropic, Google, Mistral, Azure, Bedrock and 100+ providers

PII detection and content filters scan every request and response

Budgets auto-block when exhausted. Email alerts fire before you hit the limit.

8 risk conditions evaluated daily, mapped to EU AI Act and ISO 42001

Capabilities

What you can do

Core functionality of AI Gateway

Unified LLM access

One API for 100+ providers in OpenAI-compatible format. Swap providers without changing code. Failover chains route around downtime on their own.
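The failover behavior can be sketched as trying providers in a configured order and returning the first success; the providers below are stubs for illustration, not real SDK calls:

```python
def call_with_failover(providers, prompt):
    """Try each provider callable in order; return the first successful response."""
    errors = []
    for name, call in providers:
        try:
            return name, call(prompt)
        except Exception as exc:  # provider error or timeout
            errors.append((name, exc))
    raise RuntimeError(f"all providers failed: {errors}")

# Stub providers: the primary fails, the fallback succeeds.
def provider_a(prompt):
    raise TimeoutError("provider A is down")

def provider_b(prompt):
    return f"echo: {prompt}"

used, reply = call_with_failover([("A", provider_a), ("B", provider_b)], "hi")
print(used, reply)  # B echo: hi
```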

Guardrails that actually guard

PII detection powered by Microsoft Presidio. Custom content filters with regex and keyword rules. Scans both requests and responses. Block or mask, configurable per rule.
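Conceptually, a per-rule block-or-mask guardrail looks like the sketch below. This is a simplified regex illustration only; the product's actual PII detection is NLP-based via Microsoft Presidio, and the two rules here are assumptions:

```python
import re

# Illustrative rules only — real detection covers many more entity types.
RULES = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "US_SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def mask_pii(text: str):
    """Replace matched entities with a placeholder; report which rules fired."""
    hits = []
    for entity, pattern in RULES.items():
        if pattern.search(text):
            hits.append(entity)
            text = pattern.sub(f"<{entity}>", text)
    return text, hits

masked, hits = mask_pii("Contact jane@example.com, SSN 123-45-6789")
print(masked)  # Contact <EMAIL>, SSN <US_SSN>
```

A "block" rule would reject the request outright instead of substituting a placeholder; masking lets the request proceed with the sensitive span removed.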

Spend visibility and budget control

See what you spend, where and on which models. Monthly budgets auto-block when exhausted. Cost breakdowns by provider, endpoint, model and time period.
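The auto-block behavior reduces to a simple budget check before each request. A sketch; the 80% alert threshold is an illustrative assumption, not a documented default:

```python
from dataclasses import dataclass

@dataclass
class Budget:
    limit_usd: float
    spent_usd: float = 0.0
    alert_threshold: float = 0.8  # alert at 80% of budget (assumed value)

    def charge(self, cost_usd: float) -> str:
        """Return 'blocked', 'alert', or 'ok' for a request of the given cost."""
        if self.spent_usd + cost_usd > self.limit_usd:
            return "blocked"   # auto-block: request never reaches the provider
        self.spent_usd += cost_usd
        if self.spent_usd >= self.limit_usd * self.alert_threshold:
            return "alert"     # still under budget, but warn before the limit
        return "ok"

b = Budget(limit_usd=100.0)
print(b.charge(50.0))   # ok
print(b.charge(35.0))   # alert  (85% of budget used)
print(b.charge(20.0))   # blocked (would exceed $100)
```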

Virtual keys for teams

Give developers and teams scoped API keys with their own rate limits and budget caps. They use any OpenAI-compatible SDK, you keep the oversight.

Prompt management and versioning

Store system prompts centrally. Version every change with full diff history. Test prompts against datasets before deploying. Compare versions side by side.

Automated risk detection

8 risk conditions evaluated daily: PII exposure, endpoints without guardrails, single-provider concentration, budget exhaustion, guardrail spike trends, missing audit trails, no system prompts and unusual cost patterns.
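A daily evaluation like this can be sketched as named predicates over a snapshot of gateway state; the condition names, fields and thresholds below are illustrative, loosely following the list above:

```python
# Hypothetical conditions: each is a predicate over aggregated gateway state.
CONDITIONS = {
    "endpoint_without_guardrails": lambda s: s["endpoints_no_guardrails"] > 0,
    "single_provider_concentration": lambda s: s["top_provider_share"] > 0.9,
    "budget_exhaustion": lambda s: s["budgets_exhausted"] > 0,
}

def evaluate_risks(state: dict) -> list:
    """Return the names of conditions that trigger for the current state."""
    return [name for name, check in CONDITIONS.items() if check(state)]

state = {"endpoints_no_guardrails": 2,
         "top_provider_share": 0.95,
         "budgets_exhausted": 0}
print(evaluate_risks(state))
# ['endpoint_without_guardrails', 'single_provider_concentration']
```

Each triggered name would then surface as a suggestion in the risk register with its regulatory mapping attached.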

Interactive playground

Test any endpoint with any model before routing production traffic. Adjust temperature, max tokens, and system prompts. See responses in real time.

Complete audit trail

Every request logged with full payload, latency, token count, cost and guardrail outcome. Searchable, filterable, exportable. Your auditors get the evidence they need without asking.

Why VerifyWise


What makes our approach different

Compliance mapping built in

Risk conditions map to EU AI Act, ISO 42001 and NIST AI RMF. VerifyWise tracks spend and connects it to your regulatory obligations.

Risk register integration

Detected risks surface as suggestions in your existing risk register. Accept, dismiss or escalate with full justification tracking.

Every request is evidence

VerifyWise logs for auditors. Full payload, latency, token count, cost and guardrail outcome on every request.

Regulatory context

Gateway controls mapped to regulations

The AI Gateway enforces technical controls that directly address regulatory requirements across multiple frameworks.

EU AI Act Article 9

Risk management systems must identify and mitigate risks throughout the AI lifecycle. The gateway's eight automated risk conditions provide continuous risk identification and surface findings in the risk register.

EU AI Act Article 12

High-risk AI systems must enable automatic recording of events (logs). The gateway's complete audit trail logs every request with payload, cost, and guardrail outcomes.

ISO 42001 Clause 6.1

Organizations must determine risks and opportunities related to AI systems. Automated risk detection identifies eight risk conditions daily, each mapped to specific ISO 42001 controls.

Technical details

How it works

Implementation details and technical capabilities

OpenAI-compatible API format: point any SDK at the gateway URL with zero code changes

Failover chains: if Provider A fails, the gateway automatically routes to Provider B

PII detection powered by Microsoft Presidio with configurable entity types

Request flow: Application → Guardrail scan → Budget check → Rate limit → Provider → Response scan → Cost log → Risk evaluation → Return

Virtual keys: scoped API keys with per-key rate limits, budget caps, and usage tracking

Prompt versioning: full diff history with side-by-side comparison and dataset testing
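The request flow above can be sketched as an ordered chain of checks where the first rejection short-circuits the pipeline. All check logic and field names here are hypothetical illustrations:

```python
def run_pipeline(request, checks):
    """Run a request through ordered gateway checks; stop at the first rejection."""
    for name, check in checks:
        ok, reason = check(request)
        if not ok:
            return {"status": "rejected", "stage": name, "reason": reason}
    return {"status": "forwarded"}  # request may now go to the provider

# Hypothetical checks mirroring the flow above.
def guardrail_scan(req):
    return ("@" not in req["prompt"], "possible email address in prompt")

def budget_check(req):
    return (req["est_cost"] <= req["budget_left"], "budget exhausted")

def rate_limit(req):
    return (req["requests_this_minute"] < 60, "rate limit exceeded")

checks = [("guardrail", guardrail_scan),
          ("budget", budget_check),
          ("rate_limit", rate_limit)]
req = {"prompt": "Summarize this report", "est_cost": 0.02,
       "budget_left": 5.00, "requests_this_minute": 3}
result = run_pipeline(req, checks)
print(result)  # {'status': 'forwarded'}
```

The response path mirrors this: scan, log cost, evaluate risks, then return to the application.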

Supported frameworks

EU AI Act · ISO 42001 · NIST AI RMF

Integrations

OpenAI
Anthropic
Google Gemini
Mistral
Azure OpenAI
AWS Bedrock
Together AI
OpenRouter
Fireworks AI
DeepInfra
Cohere
Replicate
xAI

FAQ

Common questions

Frequently asked questions about AI Gateway

How do risk conditions connect to compliance frameworks?

Each of the 8 risk conditions maps to specific EU AI Act articles and ISO 42001 clauses. When a condition triggers, it surfaces as a suggestion in your risk register with the regulatory reference. You can accept it as a tracked risk or dismiss it with justification.

Do developers need to change their code to use the gateway?

No. Developers use virtual keys with any OpenAI-compatible SDK. They point their SDK at the gateway URL and use their virtual key. All governance happens transparently at the gateway layer.

What happens when a budget is exhausted?

The gateway auto-blocks requests for that virtual key or endpoint. You receive an email alert before the limit is hit so you can adjust. Budget controls are configurable per virtual key, endpoint, or globally.

Can I connect a provider that isn't in the catalog?

Yes. Any OpenAI-compatible endpoint can be added as a custom provider. The gateway routes requests to it with the same guardrails, logging and budget controls as built-in providers.

How long does setup take?

About 5 minutes. Add a provider API key in Settings, create an endpoint that maps to a model, generate a virtual key, and point your SDK at the gateway URL. You're live.

What types of PII does the gateway detect?

Email addresses, phone numbers (US and international), credit cards, person names (NLP-based), IBANs, Turkish national IDs, US Social Security numbers and EU phone formats. All scanning runs within your infrastructure.

Can I swap the underlying model without changing application code?

Yes. Each endpoint maps a URL slug to a provider and model. Applications call the slug. Swap the underlying model in the gateway UI and all traffic routes to the new model instantly.
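The slug indirection can be sketched as a lookup table the gateway owns; the slug and model names below are hypothetical:

```python
# Hypothetical endpoint table: applications call a stable slug; the gateway
# resolves it to whichever provider/model is currently configured.
ENDPOINTS = {
    "summarizer": ("openai", "gpt-4o-mini"),
}

def resolve(slug: str) -> tuple:
    """Map a stable application-facing slug to the current provider/model pair."""
    return ENDPOINTS[slug]

print(resolve("summarizer"))   # ('openai', 'gpt-4o-mini')
# Swapping the model in the gateway UI just rewrites the table entry...
ENDPOINTS["summarizer"] = ("anthropic", "claude-sonnet")
print(resolve("summarizer"))   # ...and all traffic routes to the new model
```

Because applications only ever see the slug, the swap requires no deploy on their side.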

What happens if a provider goes down?

You can set a fallback endpoint on any primary endpoint. If the primary provider returns errors, the gateway routes traffic to the fallback with no downtime for your application.

How does prompt management work?

Prompts are versioned lists of messages with variable placeholders. The editor shows a side-by-side view: messages on the left, test chat on the right. You can test with real endpoints before publishing.

Ready to get started?

See how VerifyWise can help you govern AI with confidence.

AI Gateway for Regulated Industries | VerifyWise