
Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence

U.S. Government

Summary

President Biden's comprehensive executive order on AI (Executive Order 14110) represents the most ambitious federal attempt to govern artificial intelligence in U.S. history. Signed on October 30, 2023, this sweeping directive doesn't just offer guidance: it establishes binding requirements for federal agencies and creates new oversight mechanisms for AI development. The order leverages the Defense Production Act to compel AI companies to share safety test results for their most powerful models, while simultaneously addressing everything from algorithmic bias in hiring to AI-generated deepfakes in political campaigns. This isn't a framework you can choose to ignore if you're working with the federal government or developing frontier AI models.

The Federal AI Governance Revolution

This executive order marks a fundamental shift from voluntary AI principles to mandatory compliance requirements. Unlike previous efforts that relied on industry self-regulation, the order uses existing federal authorities—including the Defense Production Act—to create enforceable standards. It establishes new reporting requirements for companies training AI models with substantial computational resources (defined as using more than 10^26 integer or floating-point operations), effectively capturing the frontier AI developers building the most powerful systems.
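The reporting trigger is expressed in raw compute rather than model size or capability. As a rough illustration, the sketch below checks a training run against the 10^26-operation threshold using the common community heuristic of roughly 6 operations per parameter per training token; that heuristic is an estimate for dense transformer training and is not part of the order itself.

```python
# Rough check of whether a training run crosses the executive order's
# 10^26-operation reporting threshold. The 6 * parameters * tokens
# heuristic is a community estimate for dense transformers, not a
# definition from the order.

THRESHOLD_OPS = 1e26  # compute threshold set by the executive order


def estimated_training_ops(parameters: float, tokens: float) -> float:
    """Approximate total training operations for a dense transformer."""
    return 6 * parameters * tokens


def requires_reporting(parameters: float, tokens: float) -> bool:
    """True if estimated compute meets or exceeds the reporting threshold."""
    return estimated_training_ops(parameters, tokens) >= THRESHOLD_OPS


# Example: a 70B-parameter model trained on 15T tokens lands around
# 6.3e24 operations, well under the threshold.
ops = estimated_training_ops(70e9, 15e12)
print(f"{ops:.2e} operations -> reporting required: {requires_reporting(70e9, 15e12)}")
```

Under this heuristic, only runs one to two orders of magnitude larger than typical 2023-era frontier training runs cross the line, which is consistent with the order's focus on the very largest developers.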

The order also creates the U.S. AI Safety Institute within NIST, giving the federal government a dedicated entity to develop AI safety standards and conduct evaluations. This institutional framework represents a long-term commitment to AI oversight that extends beyond any single administration.

Who this resource is for

- Federal contractors and grantees who need to understand new AI compliance requirements in government work.
- AI developers building large-scale models who must now report safety test results to the federal government.
- Federal agency personnel implementing AI systems who face new mandatory safety and bias testing protocols.
- Chief compliance officers at companies that work with government agencies and need to understand how AI governance requirements affect their operations.
- Policy teams at technology companies who must track how federal AI policy impacts their products and services.

Implementation Timeline and Critical Deadlines

The executive order operates on an aggressive timeline with over 150 specific deadlines scattered across 18 months. Key early milestones include AI companies reporting safety test results within 90 days for models meeting the computational threshold, and federal agencies conducting AI system inventories within 60 days. The Commerce Department has 270 days to establish new safety and security standards for AI development.

Federal agencies must designate Chief AI Officers within 60 days and publish their first AI governance policies within 365 days. For contractors, the most critical deadline is the 365-day mark when new AI procurement requirements take effect, fundamentally changing how agencies can purchase AI systems and services.
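Because the order's deadlines are expressed as day counts from issuance, concrete calendar dates follow mechanically from the signing date. A minimal sketch, assuming October 30, 2023 as day zero and using only the milestone day counts summarized above:

```python
from datetime import date, timedelta

# Deadlines in the order run from its date of issuance.
# Day zero here is the signing date, October 30, 2023.
SIGNED = date(2023, 10, 30)

# Illustrative subset of the milestones discussed above
# (day counts as summarized in this article).
MILESTONES = {
    90: "Covered AI developers report safety test results",
    270: "Commerce establishes AI safety and security standards",
    365: "New AI procurement requirements take effect",
}

for days, description in sorted(MILESTONES.items()):
    deadline = SIGNED + timedelta(days=days)
    print(f"Day {days:>3} ({deadline.isoformat()}): {description}")
```

Running this places the 90-day reporting milestone in late January 2024 and the 365-day procurement milestone in late October 2024, which is why compliance teams treated the first year after signing as the critical window.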

How Federal Authority Gets Exercised

The order strategically uses existing presidential powers rather than waiting for new legislation. The Defense Production Act compels private companies to share information about their AI safety testing—the same authority used during wartime to mobilize industrial resources. Federal procurement rules become a lever to impose AI governance requirements on contractors, while immigration policy gets adjusted to attract international AI talent.

This approach means the requirements have immediate legal force. Companies cannot simply wait for Congress to modify the rules—the executive branch is implementing AI governance using authorities it already possesses.

Beyond Compliance: The Broader AI Policy Ecosystem

This executive order doesn't exist in isolation—it explicitly builds on the Blueprint for an AI Bill of Rights and the NIST AI Risk Management Framework while incorporating racial equity principles from Executive Order 14091. Understanding these connections is crucial for implementation, as agencies will interpret the new requirements through the lens of existing policy commitments.

The order also coordinates with international AI governance efforts, particularly around AI safety research and standards development. Organizations working globally need to understand how U.S. federal requirements interact with emerging international frameworks, especially as the EU AI Act and other international regulations create overlapping compliance obligations.

FAQs

Q: Does this apply to my company if we don't work directly with the federal government?
A: Possibly. If you're training AI models above the computational threshold (10^26 operations), you must report safety test results regardless of your customer base. Additionally, if you're a subcontractor on federal projects or your AI systems are used by federal agencies through third-party services, compliance requirements may apply.

Q: How does this relate to state and local AI regulations?
A: This executive order specifically governs federal activities and doesn't preempt state and local laws. However, it may influence state-level policy development and creates a federal floor for AI governance that other jurisdictions might reference or exceed.

Q: What happens if we don't comply with reporting requirements?
A: The executive order leverages existing federal authorities, meaning non-compliance could result in enforcement actions under the Defense Production Act or exclusion from federal contracting opportunities. The specific penalties depend on which authority is being exercised.

Tags

AI governance, executive order, AI safety, trustworthy AI, federal policy, AI development

At a glance

Published: 2023
Jurisdiction: United States
Category: Regulations and laws
Access: Public access
