Tool · Active

OpenEvals

LangChain


OpenEvals is LangChain's open-source evaluation harness, offering prebuilt and customisable evaluators for LLM and agent outputs, including trajectory matching, exact match, LLM-as-judge, and safety checks. It integrates with LangSmith or runs standalone.
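For a sense of how the library is used, the sketch below builds an LLM-as-judge correctness evaluator and scores a single output against a reference answer. It is a minimal sketch based on the usage pattern documented in the openevals README; exact import paths, prompt constants, parameter names, and the model identifier are assumptions and may differ between versions.

# Minimal OpenEvals LLM-as-judge sketch (import paths and parameters assumed)
from openevals.llm import create_llm_as_judge
from openevals.prompts import CORRECTNESS_PROMPT

# Build a judge that grades an output against a reference answer.
correctness_evaluator = create_llm_as_judge(
    prompt=CORRECTNESS_PROMPT,
    feedback_key="correctness",
    model="openai:gpt-4o-mini",  # assumed model string; any supported model should work
)

result = correctness_evaluator(
    inputs="What is the capital of France?",
    outputs="Paris is the capital of France.",
    reference_outputs="Paris",
)

print(result)  # e.g. {"key": "correctness", "score": True, "comment": "..."}

The same call pattern applies to the other prebuilt evaluators (exact match, trajectory matching, safety checks): each returns a structured score that can be logged locally or sent to LangSmith.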

Tags

agentic AI, evaluation

At a glance

Published: 2025

Jurisdiction: Global

Category: Evaluation and benchmarks

Access: Public access
