U.S. AI Safety Institute (NIST)
researchactive

Strengthening AI Agent Hijacking Evaluations

U.S. AI Safety Institute (NIST)

View original resource

US AI Safety Institute (NIST) technical blog strengthening hijacking evaluations for agents, proposing adversarial test suites that measure resistance to indirect prompt injection. Publishes methodology and early results from frontier model evaluations.

Tags

agentic AIevaluation

At a glance

Published

2025

Jurisdiction

United States

Category

Evaluation and benchmarks

Access

Public access

Build your AI governance program

VerifyWise helps you implement AI governance frameworks, track compliance, and manage risk across your AI systems.

Strengthening AI Agent Hijacking Evaluations | VerifyWise AI Governance Library