Texas Responsible AI Governance Act

Texas AI Act compliance guide

Navigate Texas's AI governance law with confidence. We help you conduct impact assessments, implement transparency requirements, and maintain compliance for high-risk AI systems deployed in Texas.

What is the Texas AI Act?

The Texas Responsible AI Governance Act (TRAIGA), enacted as HB 149, establishes requirements for developing and deploying high-risk AI systems in Texas. Texas is the third state, after Colorado and Utah, to adopt comprehensive AI legislation. The law prohibits specific harmful AI practices and applies an intent-based liability framework to high-risk systems used in employment, education, healthcare, housing, insurance, financial services, and government services.

Key details: Signed by Governor Greg Abbott on June 22, 2025, effective January 1, 2026. Requires impact assessments, transparency disclosures, governance programs, and human oversight for high-risk AI. Enforced exclusively by the Texas Attorney General with civil penalties ranging from $10,000 to $200,000 per violation (which can accrue daily). NIST AI RMF compliance provides a safe harbor/affirmative defense.

Risk-based

Focuses on high-risk consequential decisions

AG enforcement

No private right of action

Complements Colorado AI Act and aligns with EU AI Act principles.

Who needs to comply?

Deployers of high-risk AI

Any organization using AI in employment, education, healthcare, housing, financial services, or government

AI developers & vendors

Companies developing or providing AI systems used in high-risk contexts

Texas-based employers

Organizations using AI for hiring, promotion, or other employment decisions

Financial institutions

Banks, lenders, and insurers using AI for underwriting, lending, or risk assessment

Healthcare providers

Hospitals and health systems deploying clinical decision support or diagnostic AI

Government agencies

Texas state and local agencies using AI for benefit determination or service delivery

How VerifyWise supports Texas AI Act compliance

VerifyWise provides a Texas TRAIGA preset operating in compliance checklist mode, tailored to the Act's emphasis on governance and disclosure. A hypothetical sketch of how such a preset could be represented follows the table below.

Texas TRAIGA requirement and VerifyWise coverage:

  • Risk management policy: structured checklist item for policy documentation and maintenance
  • AI disclosure to applicants: checklist item for documenting disclosure procedures and timing
  • Record retention: metadata fields and checklist tracking for compliance documentation
  • Protected category awareness: pre-configured categories for race, color, disability, religion, sex, national origin, and age
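
The sketch below shows one hypothetical way a checklist preset like this could be represented in code. The field names and structure are illustrative assumptions for this example only, not VerifyWise's actual schema.

  # Hypothetical representation of a TRAIGA compliance-checklist preset.
  # Structure and field names are illustrative, not VerifyWise's schema.
  TRAIGA_PRESET = [
      {"requirement": "Risk management policy",
       "evidence": ["policy document", "owner", "last review date"],
       "status": "not_started"},
      {"requirement": "AI disclosure to applicants",
       "evidence": ["disclosure template", "delivery procedure", "timing"],
       "status": "not_started"},
      {"requirement": "Record retention",
       "evidence": ["retention schedule", "storage location"],
       "status": "not_started"},
      {"requirement": "Protected category awareness",
       "evidence": ["race", "color", "disability", "religion", "sex",
                    "national origin", "age"],
       "status": "not_started"},
  ]

  def open_items(preset):
      """Return the requirements that still need documentation."""
      return [item["requirement"] for item in preset
              if item["status"] != "complete"]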

Additional compliance capabilities

Impact assessment workflows

Conduct comprehensive impact assessments for high-risk AI systems with structured templates covering all TRAIGA requirements. Document risk analysis, mitigation measures, and stakeholder impacts with automated evidence collection.

Addresses: Article 4: Impact assessment documentation and approval workflows

Consumer notice and disclosure

Generate compliant consumer notices and disclosures for AI-assisted decisions. The platform maintains templates for transparency requirements and tracks when and how notices are provided to affected individuals.

Addresses: Article 5: Transparency obligations and consumer notice requirements

AI governance program management

Establish and maintain governance programs aligned with Texas AI Act requirements. Track policies, procedures, risk mitigation practices, and human oversight mechanisms with centralized documentation.

Addresses: Article 6: Deployer governance program and oversight requirements

Developer documentation tracking

Maintain comprehensive documentation packages for AI developers including system descriptions, risk assessments, and deployment guidance. Ensure developers meet their disclosure obligations to deployers.

Addresses: Article 7: Developer documentation and disclosure duties

Ongoing monitoring and reporting

Track AI system performance, adverse outcomes, and compliance metrics over time. Generate reports for internal governance reviews and maintain audit trails for potential AG investigations.

Addresses: Article 6: Continuous monitoring and reporting obligations

Third-party risk management

Manage relationships between deployers and developers with contract tracking, documentation exchanges, and ongoing due diligence. Ensure both parties fulfill their respective obligations under the Act.

Addresses: Articles 6 & 7: Deployer-developer relationship management

All documentation is timestamped and maintains complete audit trails. This evidence demonstrates proactive compliance rather than retroactive documentation created in response to AG inquiries.
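
As one illustration of what a tamper-evident audit trail can look like, the sketch below chains each timestamped record to the previous one with a hash. This is a generic technique shown for explanation only, not a description of VerifyWise's internal implementation.

  # Illustrative append-only audit trail with hash chaining for tamper
  # evidence. Generic sketch, not VerifyWise's internal implementation.
  import hashlib
  import json
  from datetime import datetime, timezone

  def append_entry(log, actor, action, detail):
      """Append a timestamped entry whose hash also covers the prior entry."""
      entry = {
          "timestamp": datetime.now(timezone.utc).isoformat(),
          "actor": actor,
          "action": action,
          "detail": detail,
          "prev_hash": log[-1]["hash"] if log else "",
      }
      entry["hash"] = hashlib.sha256(
          json.dumps(entry, sort_keys=True).encode()
      ).hexdigest()
      log.append(entry)
      return entry

  audit_log = []
  append_entry(audit_log, "compliance-officer", "impact_assessment_approved",
               "Resume screening model v2, Article 4 assessment")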

Complete Texas AI Act requirements coverage

VerifyWise provides dedicated tooling for all TRAIGA compliance obligations

  • 26 Texas AI Act requirements
  • 26 requirements with dedicated tooling
  • 100% coverage across all obligation categories

  • Impact assessments (8/8): risk identification, mitigation, documentation
  • Transparency & disclosure (6/6): notice, explainability, consumer information
  • Deployer obligations (7/7): governance, monitoring, human oversight
  • Developer duties (5/5): documentation, risk disclosure, cooperation

Built for state-level AI compliance

Impact assessment templates

TRAIGA-specific workflows for all required elements

Consumer notice builder

Automated disclosure generation and delivery tracking

Multi-state compliance

Crosswalk to Colorado AI Act and EU AI Act requirements

AG audit readiness

Documentation packages for enforcement inquiries

Key compliance requirements

Core obligations for deployers of high-risk AI systems under TRAIGA

Article 4

Impact assessments for high-risk AI

Deployers must conduct and document comprehensive impact assessments before deploying high-risk AI systems. A sketch of how these elements might be captured as a structured record follows the list below.

Required elements:

Purpose and intended benefits of the AI system
Known or reasonably foreseeable limitations and risks
Measures to mitigate identified risks
Description of data inputs and relevance assessment
Training, testing, and performance monitoring procedures
Human review and oversight mechanisms
Assessment of potential discriminatory impacts (intent-based liability framework)
Documentation of stakeholder engagement (if applicable)
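
A minimal sketch, assuming the elements above are captured as a structured record; the field names are illustrative, not prescribed by the statute or by any particular tool.

  # Minimal sketch of an impact assessment record covering the Article 4
  # elements listed above. Field names are illustrative, not statutory.
  from dataclasses import dataclass
  from typing import List

  @dataclass
  class ImpactAssessment:
      system_name: str
      purpose_and_benefits: str
      known_limitations_and_risks: List[str]
      mitigation_measures: List[str]
      data_inputs_and_relevance: str
      training_testing_monitoring: str
      human_oversight_mechanisms: str
      discriminatory_impact_analysis: str
      stakeholder_engagement: str = "not applicable"

      def missing_elements(self) -> List[str]:
          """Return the names of required elements that are still empty."""
          return [name for name, value in vars(self).items() if not value]
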
Article 5

Transparency and consumer notice

Deployers must provide clear notice to consumers when high-risk AI is used to make consequential decisions. An illustrative example of assembling such a notice follows the list below.

Required elements:

Provide timely notice before using AI in consequential decisions
Explain the role AI played in the decision-making process
Disclose what data was used and its relevance
Inform individuals of the right to appeal or correct information
Provide statement of purpose and intended uses
Make disclosure information publicly available (when feasible)
Healthcare providers must disclose AI system use in treatment
Government agencies must disclose AI interactions to citizens
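
A minimal sketch of assembling a consumer notice that touches the elements above; the wording and function signature are illustrative assumptions, not statutory language.

  # Illustrative consumer-notice builder covering the Article 5 elements
  # above. Wording is an example only, not statutory text or legal advice.
  def build_consumer_notice(decision_type, ai_role, data_used, appeal_contact):
      return (
          f"Notice: An artificial intelligence system was used in this "
          f"{decision_type} decision.\n"
          f"Role the AI system played: {ai_role}\n"
          f"Data considered and its relevance: {data_used}\n"
          f"You may appeal this decision or correct your information by "
          f"contacting {appeal_contact}."
      )

  print(build_consumer_notice(
      decision_type="rental application",
      ai_role="produced a screening score that a human agent reviewed",
      data_used="credit history and rental payment records",
      appeal_contact="compliance@example.com",
  ))
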
Article 6

Deployer governance obligations

Organizations deploying high-risk AI must implement governance programs and maintain ongoing oversight.

Required elements:

Establish AI governance policies and procedures
Designate responsible personnel for oversight
Implement risk mitigation and management practices
Maintain human review for consequential decisions
Monitor AI system performance and adverse outcomes
Document all governance activities and decisions
Update assessments when systems change materially

High-risk AI system definitions

TRAIGA applies to AI systems making consequential decisions in these domains

Employment

AI systems that make or are a substantial factor in consequential decisions regarding:

  • Recruiting and hiring
  • Promotion and advancement
  • Termination or discipline
  • Compensation and benefits
  • Work assignment or scheduling
  • Performance evaluation and monitoring

Education

AI systems used in educational settings for:

  • Student admissions decisions
  • Academic placement or advancement
  • Financial aid or scholarship allocation
  • Disciplinary actions
  • Academic performance assessment
  • Educational opportunity access

Financial services

AI systems making decisions about:

  • Credit and lending decisions
  • Insurance underwriting and pricing
  • Risk assessment for financial products
  • Fraud detection resulting in account actions
  • Investment recommendations
  • Financial service eligibility

Healthcare

AI systems involved in:

  • Diagnosis or treatment recommendations
  • Patient risk stratification
  • Healthcare resource allocation
  • Insurance coverage determinations
  • Care pathway recommendations
  • Clinical decision support affecting treatment

Housing

AI systems used for:

  • Rental application screening
  • Tenant selection and approval
  • Mortgage lending decisions
  • Property valuation affecting access
  • Housing opportunity recommendations
  • Eviction risk assessment

Government services

AI systems deployed by government entities for:

  • Public benefit eligibility and distribution
  • Permit and license decisions
  • Law enforcement risk assessments
  • Social services allocation
  • Regulatory enforcement actions
  • Public resource allocation

Note: AI systems must make or be a substantial factor in consequential decisions to be considered high-risk. Administrative or minor uses in these domains may not trigger compliance obligations.
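
A simplified triage sketch of that two-part test (covered domain plus a substantial factor in a consequential decision) is shown below. It is illustrative screening logic only, not legal advice; the statutory definitions and legal review control the actual analysis.

  # Simplified triage helper for the two-part high-risk test described above.
  # Illustrative only; the statutory definitions and legal review control.
  COVERED_DOMAINS = {
      "employment", "education", "financial services", "insurance",
      "healthcare", "housing", "government services",
  }

  def may_be_high_risk(domain: str, consequential_decision: bool,
                       substantial_factor: bool) -> bool:
      """Flag a system for full legal review if it operates in a covered
      domain and makes, or is a substantial factor in, a consequential
      decision about an individual."""
      return (domain.lower() in COVERED_DOMAINS
              and consequential_decision
              and substantial_factor)

  # Example: a resume-screening model that ranks candidates for hiring.
  print(may_be_high_risk("Employment", consequential_decision=True,
                         substantial_factor=True))  # True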

Get help classifying your AI systems

Compliance implementation roadmap

Practical path to Texas AI Act compliance before January 1, 2026

Phase 1: Weeks 1-4

Preparation & inventory

  • Complete inventory of all AI systems in use
  • Classify systems as high-risk or not under TRAIGA
  • Identify gaps in current documentation
  • Establish governance committee and assign roles
  • Review contracts with AI developers/vendors
  • Assess current notice and disclosure practices

Phase 2: Weeks 5-10

Impact assessments

  • Conduct impact assessments for all high-risk systems
  • Document risk mitigation measures
  • Identify and address potential discriminatory impacts
  • Establish human oversight procedures
  • Create stakeholder engagement records
  • Complete documentation packages

Phase 3: Weeks 11-14

Transparency implementation

  • Draft consumer notice templates
  • Implement disclosure mechanisms
  • Create public-facing transparency materials
  • Establish appeal and correction processes
  • Train staff on notice requirements
  • Deploy notice delivery systems

Phase 4: Week 15+

Ongoing compliance

  • Implement continuous monitoring processes
  • Establish regular governance reviews
  • Create incident response procedures
  • Maintain documentation and update assessments
  • Monitor for regulatory guidance
  • Conduct periodic compliance audits

Penalties and enforcement

Understanding enforcement mechanisms and compliance incentives

Civil penalties

$10,000 to $200,000 per violation

  • Penalties range from $10,000 to $200,000 per violation
  • Violations can accrue daily for ongoing non-compliance
  • Violations are determined per incident or affected individual
  • The AG has discretion in penalty assessment (an illustrative exposure calculation follows this list)
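
As a back-of-the-envelope illustration of how exposure compounds, the sketch below assumes each day of continued non-compliance is treated as a separate violation within the statutory range. That assumption is for illustration only; actual penalties are determined by the Attorney General.

  # Illustrative exposure estimate, assuming each day of continued
  # non-compliance counts as a separate violation. Not legal advice.
  MIN_PENALTY, MAX_PENALTY = 10_000, 200_000

  def exposure_range(violations: int, days_ongoing: int):
      low = violations * days_ongoing * MIN_PENALTY
      high = violations * days_ongoing * MAX_PENALTY
      return low, high

  # Example: two ongoing violations left unremediated for 30 days.
  print(exposure_range(violations=2, days_ongoing=30))  # (600000, 12000000)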

Enforcement authority

The Texas Attorney General has exclusive enforcement authority

  • No private right of action for individuals
  • AG may investigate compliance on own initiative
  • AG can issue civil investigative demands
  • Cooperative compliance may reduce penalties

Safe harbor provisions

NIST AI RMF compliance provides affirmative defense

  • Compliance with the NIST AI Risk Management Framework provides a safe harbor/affirmative defense
  • Documented governance programs weigh in a deployer's favor
  • Prompt remediation of identified issues
  • Voluntary disclosure of violations
  • Evidence of reasonable compliance efforts

Compliance timeline

Effective January 1, 2026

  • All high-risk systems must be compliant by January 1, 2026
  • Systems deployed after the effective date must comply from launch
  • AG may issue implementation guidance
  • Monitor for regulatory clarifications

Important deadline: January 1, 2026

All deployers and developers of high-risk AI systems must be fully compliant by the effective date. Start your compliance program now to ensure adequate time for assessments, documentation, and implementation.

Start compliance assessment today
Policy templates

Texas AI Act policy templates

Ready-to-use policy templates aligned with TRAIGA requirements, Colorado AI Act, and EU AI Act

Impact assessments

  • Impact Assessment Policy
  • Risk Identification Template
  • Mitigation Documentation
  • Discriminatory Impact Analysis
  • Stakeholder Engagement Records
  • Assessment Update Procedures
  • + 3 more templates

Transparency & disclosure

  • Consumer Notice Template
  • AI Decision Disclosure Policy
  • Transparency Statement
  • Appeal & Correction Process
  • Public Disclosure Guidelines
  • Notice Delivery Tracking
  • + 4 more templates

Governance & oversight

  • AI Governance Program Policy
  • Human Oversight Procedures
  • Roles & Responsibilities Matrix
  • Continuous Monitoring Policy
  • Developer Documentation Requirements
  • Incident Response Plan
  • + 5 more templates

Frequently asked questions

Common questions about Texas AI Act compliance

When does the Texas AI Act take effect?

The Texas Responsible AI Governance Act (TRAIGA), also known as HB 149, was signed into law by Governor Greg Abbott on June 22, 2025 and becomes effective on January 1, 2026. All deployers and developers of high-risk AI systems must be compliant by this date. See the official bill text for the full legislation.

Which AI systems are considered high-risk?

An AI system is considered high-risk if it makes or is a substantial factor in consequential decisions affecting employment, education, healthcare, housing, insurance, financial services, or government services. The determination depends both on the domain of use and the significance of the decision being made. Minor or purely administrative uses typically don't qualify as high-risk.

Who enforces the Texas AI Act?

The Texas Attorney General has exclusive enforcement authority. There is no private right of action, meaning individuals cannot sue directly for violations. The AG can investigate potential violations, issue civil investigative demands, and impose civil penalties ranging from $10,000 to $200,000 per violation (which can accrue daily for ongoing violations). Visit the Texas AG website for enforcement information.

Do all AI systems require impact assessments?

No. Impact assessments are only required for AI systems classified as high-risk (those making consequential decisions in employment, education, healthcare, housing, financial services, insurance, or government services). Low-risk AI systems (chatbots, content recommendations, etc.) do not require formal impact assessments under TRAIGA, though general risk management is still advisable.

How does TRAIGA compare to the Colorado AI Act?

Both laws focus on high-risk AI systems, share similar requirements for impact assessments and transparency, and are enforced exclusively by their state Attorney General with no private right of action. The main difference is the liability standard: the Colorado AI Act imposes a duty of reasonable care to protect consumers from algorithmic discrimination, while Texas TRAIGA uses an intent-based liability framework (disparate impact alone is not sufficient to demonstrate intent to discriminate) and provides NIST AI RMF compliance as a safe harbor/affirmative defense. Each law applies within its own state's jurisdiction.

What are the obligations for developers versus deployers?

Developers must provide comprehensive documentation to deployers including system descriptions, known risks, performance limitations, and deployment guidance. Deployers must conduct impact assessments, implement governance programs, provide consumer notices, and maintain human oversight. If you both develop and deploy AI, you have obligations under both roles.

What must consumer notices include?

Before using high-risk AI to make consequential decisions, you must provide timely notice explaining: (1) that AI is being used, (2) what role AI played in the decision, (3) what data was used and why it's relevant, and (4) the right to appeal or correct information. The notice should be clear, conspicuous, and provided in a manner accessible to the affected individual.

What if my AI vendor won't provide the required documentation?

Under TRAIGA, developers have legal obligations to provide documentation to deployers. If a vendor refuses, this creates compliance risk for you as the deployer. You should: (1) document your requests for information, (2) consider switching vendors, (3) conduct your own assessments to the extent possible, and (4) consult legal counsel. Deployers cannot outsource compliance responsibility to vendors.

Do impact assessments need to be updated?

Yes. Impact assessments must be updated when there are material changes to the AI system, its intended use, or the data it processes. Material changes include: new features, expanded use cases, significant changes to training data, or deployment to new populations. We recommend reviewing assessments at least annually even without material changes.

Does TRAIGA apply to organizations based outside Texas?

The law applies to high-risk AI systems that make consequential decisions affecting Texas residents, regardless of where the deployer or developer is located. If you're using AI to make employment, lending, or other covered decisions about people in Texas, you're subject to TRAIGA even if your organization is headquartered elsewhere.

How does TRAIGA compare to the EU AI Act?

TRAIGA is similar in structure to the EU AI Act with its risk-based approach and focus on high-risk systems. Organizations operating globally will find overlap in requirements (impact assessments, transparency, human oversight). Many companies implement a unified AI governance program that satisfies multiple regulations including TRAIGA, EU AI Act, Colorado AI Act, and FTC guidelines.

What counts as meaningful human oversight?

Deployers must implement meaningful human review for consequential decisions made with high-risk AI. The human reviewer must: (1) have authority to alter or override AI recommendations, (2) understand the AI system's limitations, (3) be trained to identify potential errors or bias, and (4) document their review process. Rubber-stamping AI decisions without genuine review does not satisfy this requirement.

Are there exemptions for small businesses?

TRAIGA does not include explicit small business exemptions. The law applies to any deployer or developer of high-risk AI systems regardless of organization size. However, smaller organizations may have simpler AI deployments requiring less extensive documentation. The key is whether you're using AI for high-risk decisions, not how large your organization is.

Can VerifyWise help with TRAIGA compliance?

Yes, VerifyWise provides comprehensive tools for TRAIGA compliance including impact assessment workflows, documentation management, consumer notice templates, governance program tracking, and ongoing monitoring capabilities. Our platform helps both deployers and developers meet their respective obligations with automated workflows and evidence collection. We also provide crosswalks to EU AI Act and Colorado AI Act for multi-jurisdiction compliance.

Ready for Texas AI Act compliance?

Start your compliance journey with our comprehensive assessment and implementation platform.
