AI Gateway

Getting started

Set up the AI Gateway from scratch: add a provider key, create an endpoint, test it, and start routing requests.

Overview

The AI Gateway sits between your applications and LLM providers like OpenAI, Anthropic, and Google. Every request passes through it, so you get cost tracking, guardrails, and audit logs without changing your application code.

By the end you'll have a working endpoint that your developers can hit with the standard OpenAI SDK.

What you need

  • A VerifyWise account with Admin role
  • An API key from at least one LLM provider (OpenAI, Anthropic, Google, etc.)
  • About 5 minutes

Step 1: Add a provider API key

The gateway needs your provider's API key to forward requests. Keys are encrypted at rest (AES-256-CBC) and only decrypted when proxying a request.

  1. Go to AI Gateway > Settings.
  2. Under API keys, click Add key.
  3. Pick your provider from the dropdown (e.g., OpenAI).
  4. Give it a name you'll recognize later (e.g., "Production OpenAI").
  5. Paste your provider API key and click Add key.
Multiple keys per provider
You can add several keys for the same provider. Useful if different teams have separate billing accounts, or if you want a production key and a testing key.
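As noted above, stored keys are encrypted at rest with AES-256-CBC and only decrypted at proxy time. As an illustration of that pattern (not VerifyWise's actual implementation), a minimal sketch using the third-party `cryptography` package might look like this; the function names and master-key handling here are hypothetical:

```python
# Illustrative sketch of AES-256-CBC encryption at rest, assuming the
# `cryptography` package. Not VerifyWise's actual code.
import os

from cryptography.hazmat.primitives import padding
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes


def encrypt_key(plaintext: bytes, master_key: bytes) -> bytes:
    """Encrypt a provider API key; a fresh IV is prepended to the ciphertext."""
    iv = os.urandom(16)
    padder = padding.PKCS7(128).padder()
    padded = padder.update(plaintext) + padder.finalize()
    enc = Cipher(algorithms.AES(master_key), modes.CBC(iv)).encryptor()
    return iv + enc.update(padded) + enc.finalize()


def decrypt_key(blob: bytes, master_key: bytes) -> bytes:
    """Decrypt only when proxying a request; the plaintext is never stored."""
    iv, ct = blob[:16], blob[16:]
    dec = Cipher(algorithms.AES(master_key), modes.CBC(iv)).decryptor()
    padded = dec.update(ct) + dec.finalize()
    unpadder = padding.PKCS7(128).unpadder()
    return unpadder.update(padded) + unpadder.finalize()


master = os.urandom(32)  # 256-bit master key
blob = encrypt_key(b"sk-provider-secret", master)
assert decrypt_key(blob, master) == b"sk-provider-secret"
```

The point of the pattern: the database only ever holds the IV plus ciphertext, and a round trip through the master key recovers the provider key just long enough to forward a request.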

Step 2: Create an endpoint

An endpoint maps a slug (like prod-gpt4o) to a specific provider, model, and API key. Your code references the slug. If you need to swap the model later, change the endpoint config and your application code stays the same.

  1. Go to AI Gateway > Endpoints.
  2. Click Create endpoint.
  3. Enter a slug (lowercase, hyphens allowed, e.g., prod-gpt4o).
  4. Give it a display name (e.g., "Production GPT-4o").
  5. Select the provider and model.
  6. Pick the API key you just added.
  7. Optionally add a system prompt, max tokens, temperature, or rate limit.
  8. Click Create.
What's a slug?
The slug is the identifier your code uses to route requests. When a developer sends model: "prod-gpt4o" in their API call, the gateway looks up the endpoint with that slug and forwards the request to the right provider and model.
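Conceptually, that lookup is a slug-to-configuration mapping. A minimal sketch (the field names below are illustrative, not the gateway's actual schema):

```python
# Illustrative endpoint registry: slug -> provider configuration.
ENDPOINTS = {
    "prod-gpt4o": {
        "provider": "openai",
        "model": "gpt-4o",
        "api_key_name": "Production OpenAI",
    },
}


def resolve(slug: str) -> dict:
    """Look up the endpoint for the slug a client sends in its `model` field."""
    try:
        return ENDPOINTS[slug]
    except KeyError:
        raise ValueError(f"no endpoint configured for slug {slug!r}")


cfg = resolve("prod-gpt4o")
print(cfg["model"])  # -> gpt-4o
```

Because client code only ever references the slug, swapping `gpt-4o` for a different model is a one-line config change on the gateway side.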

Step 3: Test it in the Playground

Before handing the endpoint to developers, make sure it works.

  1. Go to AI Gateway > Playground.
  2. Select your endpoint from the dropdown.
  3. Type a message and hit send.
  4. You should see a response from the LLM, plus the cost and token count.

If you get an error, check that the API key is correct and the model name matches what your provider expects.

Step 4: Use it from code

There are two ways to send requests through the gateway:

Option A: Virtual key (recommended for production)

Virtual keys let developers use the gateway without a VerifyWise account. Create one in AI Gateway > Virtual keys, copy the key, and use it like this:

```python
from openai import OpenAI

client = OpenAI(
    base_url="https://your-verifywise-host/v1",
    api_key="sk-vw-your-virtual-key",
)

response = client.chat.completions.create(
    model="prod-gpt4o",  # your endpoint slug
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```
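Since the gateway speaks the OpenAI-compatible API, any HTTP client works too. A standard-library sketch, assuming the same `/v1/chat/completions` route as above (host and key are placeholders):

```python
# Building the same request with the standard library. The urlopen call is
# commented out because the host and key here are placeholders.
import json
import urllib.request

payload = {
    "model": "prod-gpt4o",  # endpoint slug, not a provider model name
    "messages": [{"role": "user", "content": "Hello!"}],
}
req = urllib.request.Request(
    "https://your-verifywise-host/v1/chat/completions",
    data=json.dumps(payload).encode(),
    headers={
        "Authorization": "Bearer sk-vw-your-virtual-key",
        "Content-Type": "application/json",
    },
)
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```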

Option B: Playground (for testing)

The Playground page in VerifyWise uses your logged-in session. Good for testing prompts and checking costs, but not for production code.

Step 5 (optional): Set a budget

If you want to cap monthly spending:

  1. Go to AI Gateway > Settings.
  2. Under Budget, click Set budget.
  3. Enter a monthly limit in USD.
  4. Set an alert threshold (e.g., 80%) to get a warning before you reach the limit.
  5. Toggle Hard limit on if you want requests rejected when the budget runs out.

Virtual keys can also have their own per-key budgets, separate from the org-wide budget.
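How the alert threshold and hard limit interact can be sketched as follows (the function and return values are illustrative, not the gateway's API):

```python
# Illustrative sketch of alert-threshold and hard-limit logic.
def budget_status(spent: float, limit: float, alert_pct: float = 0.8,
                  hard_limit: bool = False) -> str:
    """Classify spend against a monthly budget with an 80% default alert."""
    if spent >= limit:
        # Hard limit on: reject new requests. Off: allow but flag overspend.
        return "rejected" if hard_limit else "over-budget (allowed)"
    if spent >= limit * alert_pct:
        return "alert"  # warning fires before the limit is reached
    return "ok"


print(budget_status(850.0, 1000.0))                    # -> alert
print(budget_status(1200.0, 1000.0, hard_limit=True))  # -> rejected
```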

Step 6 (optional): Add guardrails

Guardrails scan every request before it reaches the LLM. You can block or mask personal data (PII) and filter prohibited content.

  1. Go to AI Gateway > Guardrails.
  2. Click Add rule.
  3. Choose a type: PII detection (catches emails, phone numbers, credit cards, etc.) or content filter (keywords or regex patterns).
  4. Set the action: block the request entirely, or mask the detected content before forwarding.
  5. Use the Test button to try your rule against sample text before enabling it.
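A mask-style PII rule can be pictured as a regex substitution pass over the request text. A simplified sketch (real PII detection is more involved; these two patterns are illustrative only):

```python
# Simplified sketch of a mask-action PII rule: replace matches with a
# placeholder before the request is forwarded to the provider.
import re

PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}


def mask_pii(text: str) -> str:
    """Mask each detected PII match with its category label."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()}]", text)
    return text


print(mask_pii("Contact jane@example.com or 555-867-5309"))
# -> Contact [EMAIL] or [PHONE]
```

A block-action rule would instead reject the request outright when any pattern matches, which is the safer default when leaked data is worse than a failed request.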

What's next

  • Monitor usage: Check the Analytics page for cost trends, token usage, and top users
  • Review logs: The Logs page shows every request with filters for status, source, and search
  • Create more endpoints: Set up separate endpoints for different models or environments (staging vs production)
  • Distribute virtual keys: Give each team or service its own key with a budget cap