Congressional Research Service
This Congressional Research Service report dissects Executive Order 14110, the most comprehensive federal AI policy framework to date. Published shortly after the Biden Administration issued the landmark order in October 2023, the analysis translates 60+ pages of executive policy into digestible insights for practitioners who need to understand what the order actually requires. Unlike the executive order itself, the CRS report organizes the sprawling policy into clear themes, identifies key implementation deadlines, and explains how different provisions interact across federal agencies. It's the authoritative government analysis of a policy that affects everything from federal AI procurement to national security applications.
The executive order establishes a multi-layered approach that fundamentally changes how AI is governed at the federal level. The report reveals how the order creates new oversight mechanisms, including mandatory safety evaluations for large AI models and standardized impact assessments for federal AI use. It introduces the concept of "dual-use foundation models" - AI systems that could pose national security risks - and subjects them to specific reporting requirements when they exceed defined computational thresholds.
The order doesn't just regulate - it accelerates AI development in strategic areas while imposing guardrails on high-risk applications. This includes fast-tracking visas for AI talent, establishing an AI safety institute, and requiring watermarking of AI-generated government content.
One of the report's most valuable contributions is mapping out the cascade of deadlines and agency assignments. Within 90 days of the order, the Department of Commerce must begin requiring developers of the most powerful dual-use foundation models to report on their training runs and safety testing. NIST has 270 days to publish guidelines and best practices for safe, secure, and trustworthy AI, including red-teaming guidance. Meanwhile, OMB has 150 days to issue government-wide policies on federal agencies' use of AI.
The report clarifies which agencies lead on which aspects: DHS handles critical infrastructure protection, while the Department of Energy oversees AI's intersection with power systems. This division of labor matters for organizations trying to determine which agency guidelines will affect their operations.
Unlike sectoral approaches or voluntary frameworks, this executive order creates government-wide mandates with teeth. The report emphasizes how the order leverages existing authorities - particularly the Defense Production Act - to compel private sector compliance with safety evaluations and reporting requirements for the largest AI models.
The order also breaks new ground by addressing AI's intersection with civil rights, worker displacement, and algorithmic discrimination in a unified framework rather than leaving these issues to individual agencies to address piecemeal.
The report helps clarify that the executive order doesn't ban or restrict most AI development - it focuses on the most capable models and highest-risk applications. Many AI systems won't trigger any new requirements.
The order also doesn't override existing sector-specific regulations. Instead, it works alongside FDA device regulations, financial services rules, and other established frameworks. The CRS analysis explains how these layers interact rather than conflict.
Additionally, while the order creates new reporting requirements for some large AI models, it doesn't establish a pre-approval process for AI development or deployment in most cases.
Published
2023
Jurisdiction
United States
Category
Regulations and laws
Access
Public access