
Managing models

View and manage the AI models used across your evaluation experiments.

The Models page is your central registry for the AI models used across your evaluation experiments. It shows which models have been tested, their providers, and how they've been accessed (via API, locally through Ollama, or through Hugging Face).

Model list

Each model entry shows the model name, provider and access method. Models are automatically added to this list when they're used in experiments. You don't need to manually register models before running evaluations.
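To make the registry behavior concrete, here is a minimal sketch in Python of how such an auto-registering model list could work. The `ModelEntry` type and `record_model_use` function are illustrative assumptions, not part of VerifyWise's actual code; the sketch only mirrors the behavior described above (entries carry name, provider, and access method, and are created on first use).

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ModelEntry:
    """One row in the Models page: model name, provider, access method."""
    name: str
    provider: str
    access: str  # e.g. "api", "ollama", or "huggingface"

# The registry is populated automatically the first time a model is used
# in an experiment; no manual registration step is needed.
registry: dict[str, ModelEntry] = {}

def record_model_use(name: str, provider: str, access: str) -> ModelEntry:
    """Add a model on first use; return the existing entry on later uses."""
    if name not in registry:
        registry[name] = ModelEntry(name, provider, access)
    return registry[name]

# Running an experiment with GPT-4 would register it automatically.
entry = record_model_use("gpt-4", "OpenAI", "api")
```

Running a second experiment with the same model returns the existing entry rather than creating a duplicate, which is why the model list stays deduplicated.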

Supported providers

VerifyWise supports a wide range of model providers:

  • OpenAI: GPT-4, GPT-4 Turbo, GPT-3.5 Turbo and newer models.
  • Anthropic: Claude 3 and 3.5 family models (Opus, Sonnet, Haiku).
  • Google Gemini: Gemini Pro and Ultra.
  • xAI: Grok models.
  • Mistral: Mistral Large and Medium.
  • OpenRouter: Aggregated access to 600+ models from many providers via a single API.
  • Ollama (self-hosted): Locally-hosted models running on your own hardware.

API key configuration

API keys for cloud providers are configured in the Settings tab of your evals project. Keys are stored securely and shared across all experiments in the project. You can add, update or remove keys at any time.

For local models (Ollama), no API key is needed. Just make sure your Ollama instance is running and accessible from the server.
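The key-handling rule above can be sketched as a small Python helper. Both functions here are hypothetical illustrations (not VerifyWise APIs): one encodes which providers need a key in Settings, and one builds the URL for Ollama's `GET /api/tags` endpoint (which lists installed models on the default port 11434), a common way to verify a local instance is reachable.

```python
from urllib.parse import urljoin

# Cloud providers listed in this guide; keys for these go in the
# Settings tab of the evals project. (Illustrative set, not an API.)
CLOUD_PROVIDERS = {"OpenAI", "Anthropic", "Google Gemini", "xAI",
                   "Mistral", "OpenRouter"}

def needs_api_key(provider: str) -> bool:
    """Cloud providers require an API key; self-hosted Ollama does not."""
    return provider in CLOUD_PROVIDERS

def ollama_health_url(base: str = "http://localhost:11434") -> str:
    """URL for Ollama's model-listing endpoint, useful as a reachability check."""
    return urljoin(base + "/", "api/tags")

print(ollama_health_url())  # http://localhost:11434/api/tags
```

A quick `curl` of that URL from the server (or an equivalent HTTP GET) confirms the Ollama instance is running before you launch an evaluation against a local model.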