
Project configuration

Set the LLM use case type for your evaluation project.

The Configuration tab sets which type of LLM application your project evaluates. That choice determines which scorers and metrics are available when you run experiments.

Use case types

Each project targets one of three use case types:

RAG

Evaluate retrieval-augmented generation. Scorers measure recall, precision, relevancy and faithfulness.

Chatbot

Evaluate single and multi-turn conversations. Scorers focus on coherence, correctness and safety.

Agent

Evaluate AI agents for planning, tool usage, task completion and step efficiency.
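The three use case types above can be summarized as a simple lookup. This is an illustrative sketch only: the scorer names paraphrase the descriptions in this section and are not VerifyWise's internal identifiers.

```python
# Hypothetical mapping of use case type to scorer families, paraphrased
# from the descriptions above; not VerifyWise's actual identifiers.
SCORERS_BY_USE_CASE = {
    "RAG": ["recall", "precision", "relevancy", "faithfulness"],
    "Chatbot": ["coherence", "correctness", "safety"],
    "Agent": ["planning", "tool usage", "task completion", "step efficiency"],
}


def available_scorers(use_case: str) -> list[str]:
    """Return the scorer families available for a project's use case type."""
    try:
        return SCORERS_BY_USE_CASE[use_case]
    except KeyError:
        raise ValueError(f"unknown use case: {use_case!r}")
```

Because each project targets exactly one use case type, the lookup is a flat dictionary rather than a per-experiment setting.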

Setting the use case

  1. Open your project and click the Configuration tab in the sidebar.
  2. Select RAG, Chatbot or Agent using the radio buttons.
  3. Click Save changes.

Use case locking

Once you create your first experiment, the use case becomes locked. This prevents inconsistencies between your experiments and the configured scorers.

If you need to evaluate a different use case, create a new project. The configuration page shows a Create new project button when the use case is locked.

Choose your use case carefully before running experiments. You can't change it once experiments exist in the project.
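The locking rule described above can be sketched as a small model. This is hypothetical code, not the VerifyWise API; the `EvalProject` class and its method names are illustrative.

```python
class UseCaseLockedError(Exception):
    """Raised when changing the use case after experiments exist."""


USE_CASES = {"RAG", "Chatbot", "Agent"}


class EvalProject:
    """Illustrative model: the use case locks once the first
    experiment is created, so scorers stay consistent across runs."""

    def __init__(self, name: str, use_case: str):
        if use_case not in USE_CASES:
            raise ValueError(f"unknown use case: {use_case!r}")
        self.name = name
        self.use_case = use_case
        self.experiments: list[str] = []

    @property
    def locked(self) -> bool:
        # Locked as soon as any experiment exists in the project.
        return len(self.experiments) > 0

    def set_use_case(self, use_case: str) -> None:
        if self.locked:
            # Evaluating a different use case requires a new project.
            raise UseCaseLockedError(
                f"{self.name}: use case is locked; create a new project"
            )
        if use_case not in USE_CASES:
            raise ValueError(f"unknown use case: {use_case!r}")
        self.use_case = use_case

    def add_experiment(self, experiment_name: str) -> None:
        self.experiments.append(experiment_name)
```

The lock is derived from the experiment list rather than stored as a flag, so it can never drift out of sync with the project's actual contents.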
Project configuration - LLM Evals - VerifyWise User Guide