Route, monitor, and protect LLM requests through a unified gateway with cost tracking and guardrails.
Set up the AI Gateway from scratch: add a provider key, create an endpoint, test it, and start routing requests.
Monitor LLM usage, costs, and guardrail activity across all providers.
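For a sense of the arithmetic behind cost tracking: OpenAI-compatible responses carry a `usage` block with token counts, and multiplying those counts by per-token prices yields a per-request cost. A minimal sketch; the rates below are hypothetical placeholders, not actual provider pricing:

```python
# Illustration: deriving per-request cost from the usage block returned by
# OpenAI-compatible responses. The rates are placeholders, not real prices.
PRICE_PER_1K = {"prompt": 0.0005, "completion": 0.0015}  # hypothetical USD per 1K tokens

def request_cost(prompt_tokens: int, completion_tokens: int) -> float:
    """Cost in USD for one request, given token counts from response.usage."""
    return (prompt_tokens / 1000) * PRICE_PER_1K["prompt"] + (
        completion_tokens / 1000
    ) * PRICE_PER_1K["completion"]

print(f"${request_cost(1200, 350):.4f}")  # e.g. a 1200-in / 350-out request
```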
Configure LLM provider endpoints with model selection, API keys, and system prompts.
Test endpoints with an interactive chat interface before routing production traffic.
Configure PII detection and content filtering rules to protect AI requests.
Manage API keys, budget limits, and guardrail configuration.
Generate API keys for developers to access the gateway with any OpenAI-compatible SDK.
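Once a key is issued, any OpenAI-compatible SDK can reach the gateway by overriding the client's base URL. A minimal sketch using the official OpenAI Python SDK; the gateway URL, key, and model name below are placeholders to substitute with your own deployment's values:

```python
# Minimal sketch: routing requests through the gateway with the OpenAI Python SDK.
# The base_url, api_key, and model name are placeholders for your deployment.
from openai import OpenAI

client = OpenAI(
    base_url="https://gateway.example.com/v1",  # hypothetical gateway URL
    api_key="gw-...",                           # key generated in the gateway
)

response = client.chat.completions.create(
    model="my-endpoint",  # hypothetical endpoint name the gateway routes
    messages=[{"role": "user", "content": "Hello from the gateway!"}],
)
print(response.choices[0].message.content)
```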
View, filter, and inspect every request that flows through the AI Gateway.
Create versioned prompt templates with variables, test them with streaming responses, and bind them to endpoints.
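As a rough illustration of the variable-substitution idea (the gateway's own template syntax and endpoint binding may differ), a versioned template with named placeholders could be rendered like this before being sent to an endpoint:

```python
# Illustration only: rendering a versioned prompt template with named
# variables client-side. The gateway's actual template syntax may differ.
TEMPLATE_V2 = "Summarize the following {doc_type} in {max_words} words:\n\n{body}"

def render(template: str, **variables: str) -> str:
    """Fill a template's {placeholders} with the supplied variables."""
    return template.format(**variables)

prompt = render(
    TEMPLATE_V2,
    doc_type="incident report",
    max_words="50",
    body="...",  # placeholder document text
)
```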