AI Gateway

Logs

View, filter, and inspect every request that flows through the AI Gateway.

Overview

The Logs page records every request through the AI Gateway, whether it came from the Playground or a virtual key. Each row shows the endpoint, model, cost, tokens, latency, status, and who sent it. Click any row to see the full prompt and response.

Filtering logs

A filter bar sits at the top of the page. All filtering happens server-side, so the total count and pagination update to match.

The search box matches against endpoint name, model, user name, and virtual key name. It's case-insensitive and matches partial strings. There's a short debounce so it doesn't hammer the server on every keystroke.
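The debounce is just a timer that resets on every keystroke. A minimal sketch in Python (the delay value and function names are illustrative; the real interval isn't documented):

```python
import threading

def debounce(wait_seconds, fn):
    """Return a wrapper that delays fn until wait_seconds pass with no new call."""
    state = {"timer": None}

    def wrapper(*args, **kwargs):
        if state["timer"] is not None:
            state["timer"].cancel()  # a newer keystroke resets the clock
        state["timer"] = threading.Timer(wait_seconds, fn, args, kwargs)
        state["timer"].start()

    return wrapper
```

Typing "g", "gp", "gpt" in quick succession then fires a single search for "gpt" rather than three round-trips.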

Status filter

Toggle between All, Success (HTTP 200), and Error (anything else). Useful when you're hunting down failed requests and don't want to scroll past hundreds of green 200s.

Source filter

Toggle between All, Playground (requests from logged-in users), and Virtual key (programmatic requests from developer API keys). Handy for separating test traffic from production.
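Taken together, the search box and the two toggles describe one server-side predicate. A sketch of the matching logic (field names are illustrative, not the actual log schema):

```python
def matches(row, search="", status="all", source="all"):
    """Return True if a log row passes the search, status, and source filters."""
    if search:
        haystack = " ".join([row["endpoint"], row["model"],
                             row["user_name"], row["virtual_key_name"]]).lower()
        if search.lower() not in haystack:  # case-insensitive partial match
            return False
    if status == "success" and row["status_code"] != 200:
        return False
    if status == "error" and row["status_code"] == 200:
        return False
    if source != "all" and row["source"] != source:  # "playground" or "virtual_key"
        return False
    return True
```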

Date grouping

Logs are grouped under day headers: "Today", "Yesterday", or a short date like "Mar 14". You can tell at a glance which day a cluster of requests belongs to without reading individual timestamps.
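The grouping rule can be sketched in a few lines (the short-date format is an assumption based on the "Mar 14" example):

```python
from datetime import date, timedelta

def day_label(d, today=None):
    """Map a log entry's date to its group header."""
    today = today or date.today()
    if d == today:
        return "Today"
    if d == today - timedelta(days=1):
        return "Yesterday"
    return f"{d.strftime('%b')} {d.day}"  # e.g. "Mar 14"
```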

Reading a log row

Each row shows, left to right:

  • Endpoint: Which endpoint handled the request, plus the model
  • Source: User name for Playground requests, or the virtual key name (with a key icon) for programmatic ones
  • Tokens: Total tokens (prompt + completion combined)
  • Cost: Dollar cost, shown to 6 decimal places
  • Status: Green chip for 200, red for errors
  • Time: When it happened
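The token and cost cells are simple derivations from the stored record. A sketch of how one row might be rendered (field names are illustrative):

```python
def render_row(record):
    """Format the token, cost, and status cells the way the table displays them."""
    total_tokens = record["prompt_tokens"] + record["completion_tokens"]
    cost = f"${record['cost']:.6f}"  # six decimal places, per the Cost column
    status = "success" if record["status_code"] == 200 else "error"
    return {"tokens": total_tokens, "cost": cost, "status": status}
```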

Expanded view

Click a row to expand it. You'll see up to four sections, depending on what was logged:


Request

Chat messages are rendered as a conversation with colored role labels (system, user, assistant) rather than a wall of JSON. If the data isn't in the standard message format, you get formatted JSON as a fallback.

Response

The LLM's output text. Long responses scroll within the panel.

Error

Shown in red when the request failed. Contains the error message from the provider or from a guardrail block.

Metadata

Custom tags the caller attached to the request (e.g., {"department": "engineering"}). These are stored as JSON and show up in search results too.
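Because the tags are stored as a JSON string, a plain text search can find them. A small sketch of that round trip (the extra tag is a hypothetical example):

```python
import json

# Tags a caller might attach to a request; "ticket" is a made-up example
metadata = {"department": "engineering", "ticket": "ENG-1042"}

# The gateway stores the tags as serialized JSON alongside the log row...
stored = json.dumps(metadata)

# ...which is why a substring search over logs can match on tag values
found = "engineering" in stored
```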

A footer row below these sections shows latency in milliseconds, prompt tokens, and completion tokens.

Request/response logging is opt-in
Prompts and responses are only stored when "Log request body" and "Log response body" are turned on in Settings. If they're off, expanded rows won't have Request or Response sections.

Auto-refresh

Hit the "Auto" button next to Refresh to poll for new logs every 10 seconds. It turns green while active. Click it again to stop. Auto-refresh keeps running as you page through results.
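Under the hood, Auto amounts to fixed-interval polling. A minimal sketch of that loop (the 10-second interval comes from the page; everything else is illustrative):

```python
import threading

def start_auto_refresh(fetch_logs, interval=10.0):
    """Poll fetch_logs every `interval` seconds until the returned event is set."""
    stop = threading.Event()

    def loop():
        while not stop.wait(interval):  # wait() is both the sleep and the stop check
            fetch_logs()

    threading.Thread(target=loop, daemon=True).start()
    return stop  # call stop.set() to toggle Auto off
```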

Pagination

Pick 10, 25, or 50 rows per page. Your choice sticks across sessions. The total count in the top right reflects whatever filters are active, not just the visible page.
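Server-side pagination over the filtered result set can be sketched as (parameter names are illustrative):

```python
def paginate(filtered_rows, page=1, page_size=25):
    """Return one page of filtered rows plus the total count across all pages."""
    assert page_size in (10, 25, 50)  # the page-size choices the UI offers
    start = (page - 1) * page_size
    return {
        "rows": filtered_rows[start:start + page_size],
        "total": len(filtered_rows),  # reflects active filters, not just this page
    }
```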

Compliance audit trail
Every gateway request is logged with timestamp, model, cost, tokens, and status. That's your evidence for EU AI Act Article 12 (record-keeping) and ISO 42001 Clause 9 (performance evaluation). The source filter is especially useful here: you can pull just the programmatic traffic and ignore test requests from the Playground.