
NIST Adversarial Machine Learning Taxonomy


Summary

NIST's Adversarial Machine Learning Taxonomy is a comprehensive classification system that brings order to the chaotic landscape of AI attacks. Rather than leaving organizations to guess at potential threats, this framework systematically categorizes adversarial attacks into three primary buckets: evasion (fooling models at inference time), poisoning (corrupting training data), and privacy attacks (extracting sensitive information). What sets this taxonomy apart is its dual focus—not just cataloging attacks, but providing a structured foundation for building defenses. It's essentially NIST's answer to the question: "How do we think systematically about AI security threats?"

The Three Pillars of AI Adversity

Evasion Attacks target deployed models by crafting inputs designed to cause misclassification. Think adversarial examples that make a stop sign invisible to an autonomous vehicle's vision system, or subtle perturbations that fool fraud detection algorithms.
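To make the evasion category concrete, the snippet below is a minimal sketch of the fast gradient sign method (FGSM), one well-known way to craft adversarial examples. The PyTorch classifier, input tensor, label, and epsilon value are illustrative assumptions, not anything specified by the NIST taxonomy.

    # Minimal sketch of an FGSM-style evasion attack against a differentiable
    # PyTorch classifier; `model`, `x`, `y_true`, and `epsilon` are placeholders.
    import torch
    import torch.nn.functional as F

    def fgsm_perturb(model, x, y_true, epsilon=0.03):
        """Craft an adversarial example by stepping along the loss gradient sign."""
        x_adv = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y_true)
        loss.backward()
        # A small, sign-based perturbation nudges the input toward misclassification
        # while keeping it visually close to the original.
        return (x_adv + epsilon * x_adv.grad.sign()).clamp(0, 1).detach()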

Poisoning Attacks corrupt the training process itself by injecting malicious data or manipulating the learning algorithm. These attacks are particularly insidious because they embed vulnerabilities directly into the model's foundation, making them harder to detect post-deployment.
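As a rough illustration of the poisoning category, the sketch below flips a small fraction of training labels, one of the simplest data-poisoning variants. The array names, flip rate, and class arguments are assumptions made for the example only.

    # Illustrative label-flipping poisoning sketch; `labels` is a NumPy array of
    # class IDs, and the 5% flip rate is an arbitrary example value.
    import numpy as np

    def flip_labels(labels, target_class, poison_class, rate=0.05, seed=0):
        """Flip a small fraction of `target_class` labels to `poison_class`."""
        rng = np.random.default_rng(seed)
        poisoned = labels.copy()
        candidates = np.flatnonzero(labels == target_class)
        chosen = rng.choice(candidates, size=int(rate * len(candidates)), replace=False)
        poisoned[chosen] = poison_class
        return poisoned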

Privacy Attacks extract sensitive information from trained models, including membership inference (determining if specific data was used in training), model inversion (reconstructing training data), and model extraction (stealing the model's functionality).
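For the privacy category, a simple membership inference heuristic looks something like the sketch below: models are often more confident on records they were trained on. The `predict_proba` callable and the confidence threshold are assumptions; real attacks are considerably more sophisticated.

    # Rough confidence-thresholding sketch of membership inference;
    # `predict_proba` is assumed to return class probabilities for one input.
    import numpy as np

    def likely_training_member(predict_proba, x, threshold=0.95):
        """Guess whether `x` was in the training set from prediction confidence."""
        confidence = float(np.max(predict_proba(x)))
        return confidence >= threshold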

Why This Framework Changes the Game

Most AI security discussions happen in ad-hoc terms, with teams scrambling to understand emerging threats without a common vocabulary. NIST's taxonomy provides that missing lingua franca—a shared language for security professionals, ML engineers, and risk managers to discuss threats systematically.

The framework also bridges the gap between academic research and practical implementation. While researchers have identified hundreds of attack variants, this taxonomy groups them into actionable categories that organizations can actually defend against. Instead of playing whack-a-mole with individual attack types, teams can build defenses around the fundamental attack patterns.

Who This Resource Is For

ML Security Engineers who need to design robust defenses and understand the full threat landscape beyond basic adversarial examples.

Risk Management Teams responsible for AI governance who require a structured way to assess and communicate ML security risks to executives and stakeholders.

AI Product Managers who need to understand potential attack vectors during system design and make informed decisions about security investments.

Compliance Officers working in regulated industries who must demonstrate systematic approaches to AI risk management and align with emerging regulatory requirements.

Security Researchers looking for an authoritative framework to position their work within the broader adversarial ML landscape.

Putting the Taxonomy to Work

Start by conducting a threat modeling exercise using the three-category structure. For each ML system in your organization, systematically evaluate exposure to evasion, poisoning, and privacy attacks based on its attack surface, data sensitivity, and deployment context.
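One lightweight way to record that exercise is a per-system threat-model entry keyed to the three categories, as in the hypothetical sketch below. The field names and 1-to-5 scoring scale are assumptions, not part of the NIST framework.

    # Hypothetical per-system threat-modeling record built around the taxonomy's
    # three categories; fields and scoring scale are illustrative only.
    from dataclasses import dataclass, field

    @dataclass
    class MLThreatModel:
        system_name: str
        data_sensitivity: str        # e.g. "public", "internal", "regulated"
        deployment_context: str      # e.g. "internet-facing API", "batch job"
        evasion_exposure: int = 1    # scored 1 (low) to 5 (high)
        poisoning_exposure: int = 1
        privacy_exposure: int = 1
        notes: list[str] = field(default_factory=list)

        def highest_risk(self) -> str:
            """Return the taxonomy category with the highest exposure score."""
            scores = {
                "evasion": self.evasion_exposure,
                "poisoning": self.poisoning_exposure,
                "privacy": self.privacy_exposure,
            }
            return max(scores, key=scores.get)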

Use the taxonomy as a checklist for security reviews. When evaluating new ML systems or conducting security assessments, ensure you're considering threats across all three categories rather than focusing solely on the most obvious attack vectors.

Align your defense strategies with the taxonomic structure. Instead of implementing random security measures, build layered defenses that address each category: input validation and detection for evasion, data provenance and anomaly detection for poisoning, and differential privacy and output monitoring for privacy attacks.
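A simple way to keep that alignment honest is to map each category to the controls you actually run and check for gaps, as in the sketch below. The control names are examples drawn from the paragraph above, not a NIST-mandated checklist.

    # Illustrative mapping from taxonomy category to candidate defensive controls;
    # control names are examples, not an official list.
    DEFENSES_BY_CATEGORY = {
        "evasion": ["input validation", "adversarial-example detection"],
        "poisoning": ["data provenance tracking", "training-data anomaly detection"],
        "privacy": ["differential privacy", "output monitoring"],
    }

    def coverage_gaps(implemented_controls):
        """Return taxonomy categories with no implemented control."""
        return [
            category
            for category, controls in DEFENSES_BY_CATEGORY.items()
            if not any(control in implemented_controls for control in controls)
        ]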

Watch Out For

The taxonomy is descriptive, not prescriptive—it tells you what attacks exist but doesn't provide step-by-step implementation guidance for defenses. You'll need to combine this with other NIST resources and security frameworks for actionable guidance.

Attack categories aren't mutually exclusive in practice. Sophisticated adversaries often combine techniques across categories, so avoid treating these as isolated threat vectors.

The framework reflects the current state of adversarial ML research, which evolves rapidly. New attack variants emerge regularly, so use this as a foundation rather than a comprehensive catalog of every possible threat.

Tags

NIST, adversarial ML, attacks, security

At a glance

Published: 2024

Jurisdiction: United States

Category: Risk taxonomies

Access: Public access

