AI Governance
Mar 20, 2026
13 min read

State of AI governance regulations in the United States: what you need to know in 2026

The US has no federal AI law, but that doesn't mean AI is unregulated. Between executive orders, FTC enforcement, and state laws taking effect across Colorado, California, Texas, and Illinois, the compliance picture is getting complicated fast. Here's what's actually happening and what your organization needs to do about it.


As of March 2026, the United States still doesn't have a comprehensive federal AI law. Congress has held hearings, floated proposals, and debated endlessly, but nothing binding has passed. If you're waiting for a single, clear rulebook, you'll be waiting a while.

That doesn't mean AI is unregulated. What's actually happened is messier: a combination of executive orders pulling in one direction, state legislatures pushing in another, federal agencies enforcing existing laws against AI companies, and voluntary frameworks like the NIST AI RMF gaining real operational weight. The result is a fragmented compliance environment where your obligations depend on where you operate, what sector you're in, and what your AI systems actually do.

We published a full white paper analyzing this regulatory environment in detail. This post walks through the key findings and what they mean for your organization.

No federal AI law, but plenty of federal activity

The absence of omnibus legislation doesn't equal a regulatory vacuum. The federal government has been active through executive orders, agency enforcement, and voluntary frameworks. The picture is just harder to read than a single statute would be.

[Image: US AI regulatory landscape showing federal and state-level governance]

The Trump administration signed two executive orders that set the federal tone. The first, in January 2025 (Executive Order 14179), revoked the Biden-era AI safety order and directed agencies to remove barriers to AI adoption. The second, in December 2025, went further: it created an AI Litigation Task Force to challenge state AI laws, directed the Commerce Department to evaluate which state laws conflict with federal objectives, and threatened to pull federal funding from states with AI laws deemed "onerous."

The December 2025 order is the more consequential one. It establishes mechanisms for the federal government to push back against state regulation, but it doesn't create any federal AI standards or requirements on its own. Existing state AI laws remain in effect unless successfully challenged in court.

There are carve-outs worth knowing about. The executive order specifically exempts three areas from preemption: child safety in AI contexts, AI compute and data center infrastructure, and state government procurement of AI systems. If you operate in any of these areas, assume continued state enforcement authority.

States are filling the gap, fast

[Image: Key US AI compliance deadlines, 2025-2027]

With federal legislation stalled, states have become the primary drivers of binding AI regulation. The pace picked up dramatically in 2025 and 2026, with multiple laws taking effect on January 1, 2026, and more coming throughout the year.

Colorado AI Act

Colorado passed the most comprehensive state-level AI governance law in the country. It targets developers and deployers of "high-risk" AI systems, defined as those making consequential decisions about education, employment, government services, healthcare, housing, insurance, or legal services.

The law requires risk management programs, consumer disclosures, and mitigation of algorithmic discrimination. Implementation, originally set for February 1, 2026, was pushed to June 30, 2026, after significant industry pushback. A legislative commission is studying the practicalities of implementation.

If you deploy AI systems that affect any of those decision categories and have users in Colorado, this law applies to you. The risk management requirements are specific: documented risk assessments, impact evaluations, and ongoing monitoring.

For organizations already using governance platforms, the Colorado AI Act's requirements map directly to structured risk management workflows. VerifyWise's project risk module tracks risks across all 7 AI lifecycle phases (problem definition through decommissioning) with automatic risk level calculation, which directly addresses the "systematic risk management" language in the Colorado law.
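VerifyWise's internal scoring rules aren't reproduced here, but the "automatic risk level calculation" idea can be illustrated with the common likelihood-times-severity approach. The phase names, thresholds, and function names below are illustrative assumptions, not the product's actual schema:

```python
# Hypothetical sketch of automatic risk level calculation using a
# likelihood x severity score. Thresholds and labels are illustrative
# assumptions, not VerifyWise's actual scoring rules.

LIFECYCLE_PHASES = [
    "problem definition", "data collection", "model development",
    "validation", "deployment", "monitoring", "decommissioning",
]

def risk_level(likelihood: int, severity: int) -> str:
    """Map 1-5 likelihood and 1-5 severity scores to a risk level."""
    if not (1 <= likelihood <= 5 and 1 <= severity <= 5):
        raise ValueError("scores must be between 1 and 5")
    score = likelihood * severity
    if score >= 15:
        return "high"
    if score >= 6:
        return "medium"
    return "low"

print(risk_level(4, 4))  # high (score 16)
print(risk_level(2, 2))  # low (score 4)
```

A matrix like this is what "systematic risk management" tends to look like in practice: the same scoring rule applied at every lifecycle phase, so risk levels are comparable across systems.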

California: the most complex compliance environment

California has gone further than any state in terms of sheer volume. Multiple laws took effect on January 1, 2026:

The Transparency in Frontier AI Act (SB 53) requires developers of large frontier models (trained using more than 10^26 floating-point operations, or FLOPs) to publish risk frameworks, report safety incidents, and implement whistleblower protections. Penalties can reach $1 million per violation for companies with annual revenue exceeding $500 million.
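To get a feel for where the 10^26 FLOPs threshold sits, a rough estimate of dense transformer training compute is about 6 times parameters times training tokens. The model sizes below are illustrative, and the 6ND heuristic is an approximation, not SB 53's measurement methodology:

```python
# Back-of-the-envelope check against SB 53's 10^26 FLOPs threshold,
# using the common ~6 * parameters * training-tokens estimate for
# dense transformer training compute. Figures are illustrative only.

SB53_THRESHOLD_FLOPS = 1e26

def estimated_training_flops(parameters: float, tokens: float) -> float:
    return 6.0 * parameters * tokens

def exceeds_sb53_threshold(parameters: float, tokens: float) -> bool:
    return estimated_training_flops(parameters, tokens) > SB53_THRESHOLD_FLOPS

# A 1-trillion-parameter model trained on 10 trillion tokens:
print(exceeds_sb53_threshold(1e12, 1e13))  # ~6e25 FLOPs -> False

# A 2-trillion-parameter model trained on 100 trillion tokens:
print(exceeds_sb53_threshold(2e12, 1e14))  # ~1.2e27 FLOPs -> True
```

The takeaway: the threshold sits above today's typical training runs but within reach of the largest frontier efforts, which is exactly the population the law targets.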

The AI Training Data Transparency Act (AB 2013) mandates that developers of generative AI systems publish summaries of their training datasets, including data sources, types, intellectual property information, and personal information details.

The AI Transparency Act (SB 942) requires AI providers to disclose when content is AI-generated, including through watermarking. California's CCPA Automated Decision-Making regulations add risk assessment requirements taking effect January 1, 2026, with full automated decision-making provisions (pre-use notices, consumer opt-outs) scheduled for January 1, 2027.

If you're building or deploying AI in California, you're dealing with a layered compliance environment where multiple laws may apply simultaneously.

Texas TRAIGA

Texas's Responsible AI Governance Act (TRAIGA) took effect on January 1, 2026. The final version was narrowed significantly during the legislative process: it eliminates most private sector obligations and limits compliance requirements primarily to government use of AI.

That said, TRAIGA still imposes categorical bans on AI systems designed for behavioral manipulation, unlawful discrimination, violence incitement, or deepfake production of child sexual abuse material. It also restricts state entities from using AI for social scoring or biometric identification without consent, and creates a Texas Artificial Intelligence Council with a regulatory sandbox program.

Illinois and New York

Illinois requires employers to notify job candidates when AI analyzes video interviews, obtain consent before AI evaluation occurs, and follow data retention rules. These provisions took effect in February 2026.

New York City's Local Law 144 continues as one of the most operationally significant local AI regulations. It requires bias audits for automated employment decision tools and remains actively enforced. New York State has expanded further with the RAISE Act, synthetic performer disclosure requirements, and broader government oversight of automated decision tools.

If your organization uses AI in hiring decisions, you're likely subject to overlapping requirements from NYC Local Law 144, Illinois's video interview rules, Maryland and New Jersey hiring restrictions, California's civil rights department regulations, and federal anti-discrimination statutes.

Key compliance deadlines at a glance

| State | Law | Effective date | Focus |
| --- | --- | --- | --- |
| Colorado | Colorado AI Act | Jun 30, 2026 | High-risk AI governance |
| California | Frontier AI Act (SB 53) | Jan 1, 2026 | Frontier model safety |
| California | Training Data Transparency (AB 2013) | Jan 1, 2026 | Training data disclosure |
| California | AI Transparency Act (SB 942) | Jan 1, 2026 | AI content disclosure |
| California | CCPA ADM Regulations | Jan 1, 2027 | Consumer rights / ADM |
| Texas | TRAIGA | Jan 1, 2026 | Prohibited AI uses / gov use |
| Illinois | AI Video Interview Act | Feb 2026 | Employment / AI hiring |
| NYC | Local Law 144 | In effect | Bias audits / hiring |
| Utah | AI Policy Act | In effect | Consumer disclosure |

The FTC isn't waiting for Congress

The Federal Trade Commission has been the most active federal agency on AI enforcement, using its existing authority under Section 5 of the FTC Act to go after unfair or deceptive AI practices. This is happening regardless of which party controls the White House.

The FTC's "Operation AI Comply" initiative, launched in September 2024, targets companies making unsubstantiated claims about AI products. Notable enforcement actions include:

Workado (Content at Scale AI) was found advertising its AI content detection tool as 98% accurate when testing showed approximately 53% accuracy. The FTC issued a consent order requiring the company to stop making unsubstantiated claims.

DoNotPay settled in January 2025 for marketing an AI chatbot as "the world's first robot lawyer" without adequate testing. Growth Cave resolved allegations in January 2026 about misrepresenting its AI software's automation capabilities.

The current FTC leadership under Chairman Andrew Ferguson has signaled a narrower enforcement philosophy compared to the previous administration, but "AI washing" enforcement continues on a bipartisan basis. The practical takeaway: every claim you make about your AI system's capabilities, accuracy, or performance must be supported by documented evidence.

Organizations need to treat AI marketing claims with the same rigor as financial disclosures. VerifyWise's evidence center provides a centralized repository for compliance evidence, with complete audit logging of all file operations, making it straightforward to maintain the documented substantiation that FTC enforcement now demands.

NIST AI RMF: the voluntary framework with growing teeth

The NIST AI Risk Management Framework, released in January 2023, has become the de facto operational standard for AI governance in the US. It's voluntary, but its influence extends well beyond optional adoption.

The framework is built on four core functions: Govern (organizational structures and accountability), Map (identify context, risks, and impacts), Measure (assess risks quantitatively and qualitatively), and Manage (implement treatment and monitoring).
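The four functions lend themselves to a simple coverage check: list the activities under each function, track which are complete, and surface the gaps. The activity lists below are illustrative summaries, not the framework's official subcategories:

```python
# Minimal sketch of a governance checklist keyed to the NIST AI RMF's
# four core functions. Activity lists are illustrative summaries,
# not the framework's official categories and subcategories.

NIST_AI_RMF = {
    "Govern":  ["assign accountability", "set risk tolerance", "define policies"],
    "Map":     ["document context of use", "identify impacted groups", "catalog risks"],
    "Measure": ["quantitative metrics", "qualitative assessments", "bias testing"],
    "Manage":  ["prioritize risks", "apply treatments", "monitor and respond"],
}

def coverage_gaps(completed: dict) -> dict:
    """Return the activities in each function not yet marked complete."""
    return {
        function: [a for a in activities if a not in completed.get(function, [])]
        for function, activities in NIST_AI_RMF.items()
    }

gaps = coverage_gaps({"Govern": ["assign accountability"]})
print(gaps["Govern"])  # ['set risk tolerance', 'define policies']
```

Even at this level of abstraction, the structure explains why auditors like the framework: every AI system can be scored against the same four buckets, and the gaps fall out mechanically.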

Federal contractors are increasingly expected to follow NIST-aligned governance. State legislatures reference the framework in their laws. International regulatory bodies use it as a technical companion for EU AI Act compliance. The Treasury Department's February 2026 financial services framework translates NIST AI RMF principles into 230 mapped control objectives for financial institutions.

Recent developments include the Generative AI Profile (NIST-AI-600-1), a Cyber AI Profile draft (NIST IR 8596), and expected updates through AI RMF 1.1 with expanded profiles and evaluation methodologies through 2026.

If you're choosing a single framework to anchor your AI governance program, NIST AI RMF is the strongest bet for US-based organizations. It aligns with current state requirements and positions you well for whatever federal legislation eventually emerges.

Get ahead of the compliance curve. VerifyWise provides built-in support for NIST AI RMF, EU AI Act, ISO 42001, and ISO 27001 with structured risk assessment workflows, automated reporting, and audit-ready evidence management. Create your free account and start governing your AI systems today.

The federal vs. state showdown

[Image: Federal vs. state AI preemption positions]

This is probably the most consequential development in US AI governance right now. The federal government and the states are heading toward a confrontation over who gets to regulate AI.

The federal position is clear: state-by-state regulation creates unworkable compliance challenges and threatens US competitiveness. The December 2025 Executive Order identifies three concerns. First, 50 different regulatory regimes make compliance disproportionately burdensome, particularly for startups. Second, certain state laws may force AI systems to produce inaccurate results to satisfy anti-discrimination requirements. Third, some state laws regulate beyond their borders, impinging on interstate commerce.

States have pushed back hard. Multiple state officials argue the executive order overreaches on states' traditional police powers and consumer protection authority. Several states have signaled intent to continue enforcing their AI laws regardless of the federal position.

Here's what matters for compliance planning: the executive order doesn't suspend or invalidate any state law. Even if specific AI statutes face federal challenges, state attorneys general retain the ability to pursue enforcement under general consumer protection and anti-competition statutes. Until courts resolve preemption disputes, organizations must maintain compliance with all existing state requirements.

Congress has also entered the debate. In May 2025, House Republicans attempted to attach a 10-year moratorium on state AI laws to a spending bill. The proposal faced strong opposition and didn't advance. The scope of any eventual federal legislation remains deeply uncertain.

Sector-specific regulations you can't ignore

[Image: Sector-specific AI regulation coverage]

Beyond the broad state laws, AI regulation is also happening through sector-specific channels. If you operate in employment, healthcare, financial services, or consumer-facing sectors, you face additional obligations.

Employment is the most heavily regulated area. Organizations using AI in hiring, promotion, or workforce decisions must navigate NYC Local Law 144's bias audit requirements, Illinois's video interview consent provisions, restrictions in Maryland and New Jersey, California's civil rights department regulations on discriminatory AI use, and federal anti-discrimination statutes (Title VII, ADA, ADEA) as applied by the EEOC to algorithmic decision-making.

Financial services faces the most mature regulatory expectations. The Treasury Department's February 2026 framework maps NIST AI RMF principles into 230 operational control objectives covering model lifecycle governance, identity resolution, data governance, and integration with SOC 2 and the NIST Cybersecurity Framework.

Healthcare has California's Health Care Services AI Act requiring providers using generative AI for patient communications to disclose that fact and provide instructions for contacting a human. Several states are considering additional regulations for clinical decision support and insurance claims processing.

Consumer protection spans multiple states with chatbot disclosure requirements, safety protocols for high-risk AI uses involving minors, and restrictions on personal data use in algorithmic pricing.

What your organization should do now

[Image: AI compliance decision framework]

Based on the current regulatory environment, here are the practical steps that matter most.

Build a complete AI inventory. Document every AI system in use, including ownership, deployment context, risk classification, and governance status. This is a baseline expectation across multiple state laws and voluntary frameworks. You can't govern what you can't see, which is why shadow AI detection should be part of your governance program from day one.
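An inventory doesn't need to start as anything fancier than a structured record per system. The field names below are assumptions drawn from the baseline expectations described above, not a schema mandated by any law or product:

```python
# Illustrative sketch of an AI inventory record. Field names and
# example values are assumptions, not a schema from any specific
# law or governance product.
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    name: str
    owner: str
    deployment_context: str       # e.g. "hiring", "customer support"
    risk_classification: str      # e.g. "high-risk" under Colorado's law
    jurisdictions: list = field(default_factory=list)
    governance_status: str = "unreviewed"

inventory = [
    AISystemRecord(
        name="resume-screener",
        owner="HR engineering",
        deployment_context="hiring",
        risk_classification="high-risk",
        jurisdictions=["CO", "NYC", "IL"],
    ),
]

# Surface ungoverned high-risk systems first.
backlog = [s for s in inventory
           if s.risk_classification == "high-risk"
           and s.governance_status == "unreviewed"]
print([s.name for s in backlog])  # ['resume-screener']
```

Capturing jurisdictions per system is what makes the next step, mapping jurisdiction-specific requirements, mechanical rather than archaeological.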

Map jurisdiction-specific requirements. For each AI system, identify which state and federal requirements apply based on where it's developed, deployed, and used. A single compliance standard won't cut it in this fragmented environment.

Adopt NIST AI RMF as your operational foundation. It aligns with current state requirements, positions you for anticipated federal standards, and provides the structured approach that auditors and regulators expect to see.

Prepare for preemption uncertainty. Build compliance programs that can adapt quickly. Continue complying with all existing state laws while monitoring DOJ Task Force activities, Commerce Department evaluations, and Congressional developments. The worst outcome is getting caught flat-footed when regulatory clarity finally arrives.

Document everything. Risk assessments, bias testing results, impact evaluations, transparency disclosures, governance decisions. Documentation is the common thread across nearly every current and proposed AI regulation. Organizations using VerifyWise's automated reporting system can generate compliance documentation across 10+ report categories in minutes rather than the days or weeks manual compilation requires.

Substantiate your AI marketing claims. The FTC's bipartisan enforcement on AI washing means every claim about capabilities, accuracy, or performance needs rigorous, documented evidence behind it.

Watch the calendar. Key dates coming up include FTC policy statements on AI application of Section 5 (March 2026), Commerce Department evaluation of state AI laws (March 2026), TAKE IT DOWN Act provisions taking effect (May 2026), Colorado AI Act implementation (June 30, 2026), and California CCPA automated decision-making provisions (January 2027).
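For the deadlines with exact dates, the calendar math is trivial to automate. The sketch below uses this post's publication date as the reference point so the arithmetic is reproducible; only the deadlines with firm dates are included:

```python
# Sketch of a compliance countdown for the firm-dated deadlines above.
# AS_OF is this article's publication date, used so the day counts
# are reproducible; March 2026 items without an exact day are omitted.
from datetime import date

AS_OF = date(2026, 3, 20)

DEADLINES = {
    "Colorado AI Act implementation": date(2026, 6, 30),
    "California CCPA ADM provisions": date(2027, 1, 1),
}

for name, due in sorted(DEADLINES.items(), key=lambda kv: kv[1]):
    print(f"{name}: {(due - AS_OF).days} days away")
# Colorado AI Act implementation: 102 days away
# California CCPA ADM provisions: 287 days away
```

Swapping `AS_OF` for `date.today()` turns this into a standing report, which is the kind of thing worth wiring into a recurring governance review.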

The bottom line

The US AI governance picture in 2026 is defined by tension. Federal deregulatory ambitions vs. aggressive state lawmaking. Voluntary frameworks vs. enforceable mandates. Innovation promotion vs. consumer protection.

For organizations operating AI systems, the absence of a comprehensive federal law doesn't mean the space is unregulated. Between state statutes, federal agency enforcement under existing authorities, and the growing influence of frameworks like NIST AI RMF, you face a complex web of obligations that will only continue to expand.

The organizations that will navigate this best are those treating AI governance as a strategic capability rather than a compliance checkbox. Building systematic risk management, maintaining comprehensive documentation, and staying adaptable to regulatory changes isn't just about avoiding penalties. It's about building the kind of trust that customers, regulators, and stakeholders are increasingly demanding.

For the complete analysis with detailed state law breakdowns, compliance timelines, and sector-specific guidance, download the full white paper here.

This analysis is current as of March 2026 and is subject to change as the regulatory environment continues to evolve. Organizations should consult qualified legal counsel regarding specific compliance obligations.


About the VerifyWise team

VerifyWise builds open-source AI governance software used by organizations to manage risk, compliance, and oversight across their AI portfolios. Our editorial team draws on hands-on experience implementing governance workflows for regulated industries and fast-growing AI teams.

Learn more about VerifyWise →

Ready to govern your AI responsibly?

Start your AI governance journey with VerifyWise today.
