National Telecommunications and Information Administration

AI System Disclosures


Summary

The NTIA's AI System Disclosures policy framework tackles one of AI governance's thorniest challenges: how to make AI systems transparent without revealing trade secrets or creating security vulnerabilities. This resource introduces a dual-track approach to disclosure—confidential reporting to regulators and standardized public transparency mechanisms, including the innovative concept of AI "nutritional labels." Unlike other transparency frameworks that focus primarily on documentation, this policy explicitly balances public accountability with legitimate business interests, offering practical pathways for both mandatory and voluntary disclosure regimes.

The Dual-Track Disclosure Model

The framework distinguishes between two complementary disclosure streams:

Confidential Regulatory Reporting allows companies to provide sensitive technical details about AI systems directly to government authorities without public exposure. This includes proprietary training data, detailed algorithmic processes, security vulnerabilities, and competitively sensitive information that could be harmful if disclosed publicly.

Public Transparency Mechanisms focus on information that users, researchers, and civil society need to understand AI system capabilities, limitations, and potential impacts. This includes performance metrics, use case boundaries, data sources (in general terms), and known risks or biases.

This dual approach recognizes that effective AI governance requires different types of information for different stakeholders—regulators need technical depth for oversight, while the public needs accessible, standardized information for informed decision-making.
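The two-stream split described above can be sketched as a data model in which the confidential and public tracks are kept as separate objects, so that publication can only ever draw from the public track. This is an illustrative sketch, not an NTIA-specified schema; all class and field names are hypothetical.

```python
from dataclasses import dataclass, asdict

@dataclass
class ConfidentialReport:
    # Sensitive details shared only with regulators (hypothetical fields).
    training_data_sources: list[str]
    known_vulnerabilities: list[str]
    algorithmic_details: str

@dataclass
class PublicDisclosure:
    # Standardized information intended for users, researchers, civil society.
    intended_uses: list[str]
    known_limitations: list[str]
    data_sources_general: str  # described in general terms only
    performance_summary: dict[str, float]

@dataclass
class SystemDisclosure:
    """One AI system, two disclosure streams for two audiences."""
    system_name: str
    regulator_track: ConfidentialReport
    public_track: PublicDisclosure

    def public_view(self) -> dict:
        # Only the public track is serialized for publication; the
        # regulator track never leaves this path.
        return {"system": self.system_name, **asdict(self.public_track)}
```

The design choice worth noting is structural rather than procedural: by typing the two tracks separately, accidental leakage of regulator-only fields into a public report becomes a type error rather than a policy violation caught after the fact.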

AI Nutritional Labels: Standardization in Action

The policy's most innovative contribution is the concept of AI "nutritional labels"—standardized, comparable disclosure formats that present key AI system information in an accessible way. Just as food labels allow consumers to compare products across manufacturers using consistent metrics, AI nutritional labels would enable stakeholders to evaluate different AI systems using standardized categories.

These labels would include standardized sections for:

  • Intended use cases and explicit limitations
  • Performance metrics across different demographic groups
  • Training data characteristics without revealing proprietary sources
  • Known risks and mitigation strategies
  • Environmental impact of training and deployment
  • Human oversight requirements and capabilities

The standardization aspect is crucial—it prevents the current situation where each company creates its own disclosure format, making meaningful comparison nearly impossible.

Implementation Pathways

The framework acknowledges that disclosure requirements can emerge through multiple regulatory pathways. Federal agencies could implement sector-specific disclosure rules, states could require disclosures for AI systems used in certain contexts, and industry could adopt voluntary standards that later become regulatory expectations.

The policy suggests starting with high-risk applications (healthcare, criminal justice, employment) and gradually expanding to other domains as disclosure practices mature. It also emphasizes the importance of international coordination to prevent regulatory fragmentation that could undermine both innovation and accountability.

Who This Resource Is For

Policy makers developing AI transparency regulations who need practical frameworks that balance competing interests while remaining enforceable and technically feasible.

AI companies seeking proactive approaches to transparency that can satisfy stakeholder demands while protecting legitimate business interests—particularly those anticipating disclosure requirements.

Regulatory agencies at federal, state, and local levels who need structured approaches to AI oversight that can scale across different technical contexts and risk levels.

Civil society organizations and researchers advocating for AI transparency who want to understand viable policy mechanisms and contribute to standardization efforts.

International regulatory bodies looking for models that could inform global AI governance frameworks and cross-border cooperation on disclosure standards.

What Makes This Different

Unlike transparency frameworks that treat disclosure as binary (disclosed or not), this policy recognizes disclosure as a spectrum with different audiences needing different types of information. It's also uniquely practical in its recognition that purely voluntary approaches often fail to create comprehensive transparency, while purely mandatory approaches can stifle innovation or drive activity offshore.

The nutritional label concept borrows from successful precedents in other industries, making it more likely to achieve political feasibility and public understanding. Most importantly, the framework provides concrete pathways for implementation rather than just high-level principles, making it immediately actionable for policy makers.

Tags

AI governance, transparency, disclosure requirements, AI accountability, standardization, AI safety

At a glance

Published

2024

Jurisdiction

United States

Category

Transparency and documentation

Access

Public access
