Hard law, binding or proposed.
34 resources
The EU Artificial Intelligence Act is the world's first comprehensive legal framework for AI. It establishes a risk-based approach to AI regulation, categorizing AI systems into prohibited, high-risk, limited-risk, and minimal-risk categories. The regulation sets requirements for high-risk AI systems including risk management, data governance, transparency, human oversight, and accuracy. It applies to providers and deployers of AI systems in the EU market.
Official guidelines from the European Commission providing clarity on the definition of AI systems, prohibited practices, and requirements for general-purpose AI models under the EU AI Act. These guidelines help organizations understand compliance obligations and implementation timelines.
Executive Order 14110 establishes new standards for AI safety and security, protects Americans' privacy, advances equity and civil rights, promotes innovation and competition, and advances American leadership in AI. It directs federal agencies to develop guidelines and standards for AI development and deployment.
China's regulatory framework for generative AI services, requiring providers to ensure AI-generated content adheres to socialist core values, does not contain illegal content, and maintains data security. Providers must register algorithms and implement real-name verification for users.
Colorado's Consumer Protections for Artificial Intelligence Act requires developers and deployers of high-risk AI systems to use reasonable care to avoid algorithmic discrimination. It establishes disclosure requirements, impact assessments, and creates a framework for AI governance in consequential decisions affecting consumers.
This executive order issued by the U.S. federal government establishes comprehensive requirements for the safe, secure, and trustworthy development and use of artificial intelligence across federal agencies and regulated industries. The order builds upon previous AI governance initiatives including the Blueprint for an AI Bill of Rights and the AI Risk Management Framework, while advancing racial equity considerations from Executive Order 14091. It sets forth standards for AI system development, testing, and deployment, with particular emphasis on safety measures, security protocols, and establishing trust in AI technologies. Federal agencies, AI developers, and organizations working with government contracts should reference this order as it represents the current U.S. federal approach to AI regulation and establishes binding requirements for AI governance in government operations and federally regulated sectors.
The EU Artificial Intelligence Act (Regulation (EU) 2024/1689) is the European Union's comprehensive legal framework for regulating artificial intelligence systems, published in the Official Journal of the European Union in July 2024. This landmark regulation establishes a risk-based approach to AI governance, categorizing AI systems into different risk levels and imposing corresponding obligations on developers, deployers, and users of AI technologies. The Act covers prohibited AI practices, high-risk AI systems, transparency requirements, and conformity assessments, while also establishing governance structures and enforcement mechanisms across EU member states. This resource provides access to the full official text of the regulation and is essential for AI developers, businesses deploying AI systems, legal professionals, compliance officers, and policymakers who need to understand and comply with the EU's AI regulatory requirements.
This resource provides up-to-date developments and comprehensive analyses of the European Union's Artificial Intelligence Act, the landmark regulation governing AI systems across the EU. The resource covers key implementation aspects including Article 4 on AI literacy requirements, regulatory sandboxes for AI innovation, and practical guidance for organizations navigating compliance obligations. It serves as a crucial reference for businesses, legal professionals, policymakers, and compliance officers who need to understand and implement the EU AI Act's requirements. The resource appears to offer both regulatory updates and practical training materials, making it particularly valuable for organizations seeking accessible guidance on one of the world's most comprehensive AI governance frameworks.
The EU AI Act is the world's first comprehensive legal framework regulating artificial intelligence, adopted by the European Union in June 2024. This landmark legislation establishes a risk-based approach to AI regulation, categorizing AI systems by their potential risks and imposing corresponding obligations on developers, deployers, and users. The Act covers key areas including prohibited AI practices, high-risk AI systems, transparency requirements, and conformity assessments, while promoting innovation through regulatory sandboxes and support for SMEs. Organizations developing or deploying AI systems in the EU market must understand and comply with this regulation: it becomes generally applicable 24 months after entry into force, with some provisions (such as the prohibitions) taking effect sooner and obligations for certain high-risk systems phased in later, making it essential for businesses, policymakers, and legal professionals working with AI technologies in Europe.
This is a White House policy document published in January 2025 that aims to remove regulatory barriers to American leadership in artificial intelligence. The document appears to revoke or modify previous AI governance measures, specifically referencing Executive Order 14110 from October 30, 2023, which focused on safe, secure, and trustworthy AI development. The policy involves coordination between the Assistant to the President for Science and Technology (APST) and a Special Advisor to review and potentially eliminate various directives, regulations, and orders that may be hindering American AI competitiveness. This resource is particularly relevant for AI industry stakeholders, policymakers, and researchers who need to understand the current US regulatory landscape for AI development and deployment, as it represents a significant shift in the federal approach to AI governance.
This Congressional Research Service report provides an analysis and summary of Executive Order 14110 on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, issued by the Biden Administration on October 30, 2023. The report breaks down the key provisions of this landmark federal AI policy, which establishes a comprehensive government-wide framework for responsible AI development and deployment across federal agencies. The document covers critical areas including AI safety standards, security requirements, federal AI use guidelines, and oversight mechanisms for AI systems. This resource is essential for government officials, contractors, researchers, and industry professionals who need to understand federal AI policy requirements and compliance obligations under the executive order.
This regulatory tracker report by White & Case LLP provides comprehensive analysis of China's AI governance landscape, with particular focus on the Interim AI Measures, China's first specific administrative regulation governing generative AI services. The report tracks and analyzes the evolving regulatory framework in China, offering insights into compliance requirements and regulatory developments that affect AI service providers operating in the Chinese market. Legal professionals, compliance officers, and organizations developing or deploying AI systems in China should utilize this resource to stay informed about regulatory obligations and emerging requirements. The tracker represents part of White & Case's global AI regulatory monitoring initiative, providing specialized expertise on one of the world's most significant AI markets and regulatory jurisdictions.
China has released its comprehensive 'AI Plus' plan alongside new AI labeling legislation that establishes mandatory requirements for AI developers and service providers operating within Chinese jurisdiction. The law requires all covered AI projects to undergo an ethics review, conducted either through internal organizational review or external assessment, to ensure compliance with core principles including fairness, accountability, justice, risk responsibility, and respect for life and human dignity. This regulatory framework is aimed at AI companies, developers, and service providers, who must implement labeling systems and demonstrate adherence to ethical standards in their AI deployments. The legislation represents a significant step toward structured AI governance in China, providing clear compliance pathways while establishing accountability mechanisms that align AI development with societal values and safety requirements.
This legal analysis report from Herbert Smith Freehills tracks the proposed China AI Law that was introduced by representatives from the National People's Congress on June 24, 2025. The proposed legislation aims to create a comprehensive framework that balances AI innovation support with regulatory oversight through a risk-based classification system for AI technologies. The law would establish structures for assessing AI's ethical impact and clearly define legal obligations across the AI development and deployment ecosystem. This resource is essential for multinational corporations, AI developers, legal practitioners, and policy makers who need to understand and prepare for China's evolving AI regulatory landscape, as it provides expert legal analysis of one of the world's most significant AI governance developments.
The Artificial Intelligence and Data Act (AIDA) is proposed Canadian federal legislation introduced by the Government of Canada as part of the Digital Charter Implementation Act, 2022. The Act aims to establish a comprehensive regulatory framework for the responsible design, development, and deployment of artificial intelligence systems that impact Canadians' lives. AIDA would set foundational requirements for AI governance, focusing on ensuring AI systems are developed and used in ways that protect citizens while fostering innovation. This legislation is particularly important for Canadian organizations, AI developers, and businesses that deploy AI systems, as it would establish legal obligations and standards for AI governance in Canada, making it a critical resource for understanding the emerging Canadian regulatory landscape for artificial intelligence.
This companion document provides detailed guidance on Canada's proposed Artificial Intelligence and Data Act (AIDA), which was tabled in June 2022 as part of Bill C-27, the Digital Charter Implementation Act, 2022. Published by Innovation, Science and Economic Development Canada, the document explains the key provisions and requirements of AIDA, which aims to regulate AI systems and their impact on Canadian businesses and society. The resource is designed for businesses, legal professionals, policymakers, and stakeholders who need to understand Canada's emerging AI regulatory landscape and prepare for compliance requirements. As a companion document to proposed legislation, it serves as an essential reference for interpreting the technical and legal aspects of Canada's comprehensive approach to AI governance and data protection.
This regulatory tracker report is published by White & Case LLP as part of their AI Watch series, focusing specifically on Canada's evolving AI regulatory landscape. The report provides comprehensive coverage of Canada's federal approach to AI regulation, particularly through the Artificial Intelligence and Data Act (AIDA), which forms a key component of Bill C-27 alongside consumer protection provisions. Legal professionals, compliance officers, and organizations operating in Canada should utilize this resource to stay informed about current and emerging AI regulatory requirements at the federal level. The tracker offers valuable insights into Canada's regulatory strategy and helps stakeholders understand how AIDA fits within the broader legislative framework of Bill C-27, making it an essential resource for navigating Canadian AI compliance obligations.
This policy document outlines the UK government's strategic approach to artificial intelligence regulation, emphasizing a pro-innovation regulatory framework that balances technological advancement with appropriate oversight. Published as part of the UK Science and Technology Framework, it positions AI as one of five critical technologies essential to achieving the government's vision of making the UK a science and technology superpower by 2030. The policy establishes principles for AI governance that aim to foster innovation while addressing potential risks and ensuring responsible development and deployment of AI systems. This resource is particularly valuable for UK-based organizations, policymakers, researchers, and industry stakeholders who need to understand the regulatory landscape and compliance expectations for AI development and implementation within the UK jurisdiction.
The UK Government has established a comprehensive framework for AI regulation that outlines key principles for governing artificial intelligence systems across sectors. The framework centers on five core principles: safety, security and robustness, appropriate transparency and explainability, fairness, accountability and governance, and contestability and redress. This resource, analyzed by Deloitte UK, provides insights into how the UK's approach to AI regulation differs from other jurisdictions and offers practical guidance for organizations operating AI systems within the UK. The framework is designed to be flexible and principles-based rather than prescriptive, allowing for adaptation across different industries while ensuring responsible AI development and deployment that protects citizens and maintains public trust in AI technologies.
This regulatory tracker report by White & Case LLP provides comprehensive analysis of the United Kingdom's approach to AI governance and regulation as part of their global AI regulatory monitoring series. The report examines the UK's distinctive strategy of prioritizing flexible regulatory frameworks over comprehensive omnibus legislation, with particular emphasis on sector-specific laws and regulations rather than broad horizontal AI governance. The resource is designed for legal practitioners, compliance professionals, and organizations operating in the UK market who need to understand the current regulatory landscape and anticipate compliance requirements. The tracker offers valuable insights into how the UK's pragmatic, principles-based approach differs from more prescriptive regulatory models like the EU AI Act, making it essential reading for multinational organizations developing AI governance strategies across different jurisdictions.
This legal analysis from Skadden, Arps, Slate, Meagher & Flom LLP provides comprehensive insights into Colorado's pioneering AI Act (CAIA), which was enacted on May 17, 2024. The resource examines the landmark state legislation that specifically targets the development and deployment of 'high-risk' AI systems and addresses their potential to cause 'algorithmic discrimination.' The analysis is designed for companies and legal professionals who need to understand the compliance requirements and implications of this groundbreaking state-level AI regulation. As one of the first comprehensive AI governance laws at the U.S. state level, the Colorado AI Act represents a significant development in AI regulation, making this legal analysis particularly valuable for organizations operating AI systems in Colorado or those seeking to understand emerging regulatory trends in AI governance across different jurisdictions.
This report by the National Association of Attorneys General provides an in-depth analysis of Colorado's groundbreaking Artificial Intelligence Act, formally titled 'An Act Concerning Consumer Protections for Interactions with Artificial Intelligence,' which was signed into law by Governor Jared Polis on May 17, 2024. The analysis examines the key provisions of this pioneering state-level AI legislation, focusing on consumer protection measures and regulatory requirements for AI systems operating within Colorado. Legal practitioners, compliance professionals, state attorneys general, and organizations deploying AI systems should utilize this resource to understand the legal landscape and compliance obligations under Colorado's AI Act. This report represents one of the first comprehensive legal analyses of state-level AI legislation in the United States, making it a crucial reference for understanding the emerging patchwork of AI regulations at the state level.
This resource from the International Association of Privacy Professionals (IAPP) provides a comprehensive analysis of Singapore's approach to AI governance law and policy. The report highlights Singapore's pioneering role in sectoral AI regulation, particularly through the Monetary Authority of Singapore, which became the first sectoral authority globally to implement AI governance regulation in the financial services sector. The analysis covers Singapore's regulatory framework, policy initiatives, and governance approaches that organizations operating in Singapore should understand for compliance purposes. This resource is essential for legal professionals, compliance officers, and organizations seeking to understand Singapore's AI regulatory landscape and how it compares to global AI governance trends, making it particularly valuable for multinational companies and policy researchers studying international AI regulation approaches.
Singapore's Personal Data Protection Commission (PDPC) has developed a comprehensive AI governance approach that includes the Model AI Governance Framework and AI Verify testing toolkit. The framework is built around 11 internationally recognized AI ethics principles that align with major global frameworks from the EU, OECD, and other leading jurisdictions. AI Verify serves as both a testing framework and software toolkit designed to help organizations assess and verify their AI systems against these established ethical principles. This resource is particularly valuable for organizations operating in Singapore or those seeking to align with international AI governance standards, as it provides practical tools and guidance for implementing responsible AI practices while maintaining consistency with global regulatory trends.
This comprehensive guide examines Singapore's AI governance framework, building upon the AI Verify testing framework that validates AI system performance against internationally recognized principles through standardized tests. The resource covers how organizations can leverage automated compliance monitoring to maintain alignment with Singapore's evolving sector-specific AI requirements. It provides practical guidance for businesses operating in Singapore's regulatory environment, explaining the implementation of AI Verify's systematic approach to AI governance testing. The guide is particularly valuable for compliance officers, AI developers, and organizations seeking to understand and implement Singapore's distinctive approach to AI regulation, which emphasizes practical testing and validation over prescriptive rules.
This regulatory tracker from White & Case LLP provides comprehensive analysis of Brazil's emerging AI governance landscape, focusing primarily on Bill No. 2,338/2023, which represents Brazil's proposed framework for AI regulation. The resource examines Brazil's current regulatory environment, noting the absence of specific codified AI laws while tracking the development of proposed legislation that would establish comprehensive AI governance requirements. Legal professionals, compliance officers, and organizations operating in or considering expansion to Brazil should use this tracker to stay informed about evolving regulatory requirements and prepare for potential compliance obligations. The analysis offers valuable insights into how Brazil intends to structure its AI regulatory approach, making it an essential resource for understanding the Latin American AI governance landscape and preparing for future regulatory developments in one of the region's largest markets.
The Brazil AI Act is proposed legislation that aims to establish comprehensive operational guidelines and requirements for artificial intelligence systems in Brazil. The bill adopts a risk-based approach to AI regulation, focusing on protecting human rights while providing clear penalties for non-compliance with the established requirements. This legislation is designed for AI developers, deployers, and organizations operating AI systems within Brazilian jurisdiction, offering them structured guidance on regulatory compliance and operational standards. The act represents Brazil's commitment to responsible AI governance and positions the country among nations taking proactive steps to regulate AI technology while balancing innovation with protection of citizens' rights and interests.
Brazil's AI Act is comprehensive proposed artificial intelligence legislation that would establish rules and requirements for AI development, deployment, and use within Brazilian jurisdiction. The Act focuses on key principles including accountability, ethics, and the protection of fundamental rights, while mandating transparency requirements for AI systems. This legislation is essential reading for organizations developing or deploying AI systems in Brazil, as it would create legal obligations and compliance requirements for operating within the Brazilian market. The Act reflects Brazil's commitment to responsible AI governance, joins the growing international movement toward comprehensive AI regulation, and positions Brazil as a leader in AI policy within Latin America.
This report by the Future of Privacy Forum provides an in-depth analysis of Japan's AI Promotion Act, which represents a distinctive "innovation-first" approach to AI governance. The Act is structured as a fundamental law that establishes high-level principles and national policy direction rather than prescriptive regulatory rules, reflecting Japan's strategy to promote AI development while maintaining governance oversight. The analysis covers the Act's framework, its implications for businesses and developers operating in Japan, and how it contrasts with other international AI regulatory approaches. This resource is particularly valuable for policymakers, legal practitioners, and organizations seeking to understand Japan's unique position in the global AI governance landscape and how it balances innovation promotion with responsible AI development.
This regulatory tracker report by White & Case LLP analyzes Japan's groundbreaking AI Bill, which became Japan's first law expressly regulating artificial intelligence when passed by the National Diet on May 28, 2025. The report provides detailed analysis of the 'Act on Securing Safe and Reliable Use of Artificial Intelligence-Related Technologies' and its regulatory framework for AI governance in Japan. Legal practitioners, compliance professionals, and organizations operating AI systems in Japan should use this resource to understand the new regulatory requirements and ensure compliance with Japan's AI legislation. This tracker covers a significant milestone in AI regulation, as the law marks Japan's entry into comprehensive AI governance alongside other major jurisdictions, providing essential insights for multinational organizations navigating the evolving global AI regulatory landscape.
Japan has passed comprehensive AI governance legislation that takes an innovation-focused approach to regulating artificial intelligence technologies. The bill establishes an AI strategy headquarters and empowers a task force led by Prime Minister Shigeru Ishiba to create operational guidelines for businesses and companies operating in the AI space. The legislation addresses key concerns such as copyright infringement through social consensus mechanisms, building on Japan's 2019 copyright law amendments that included exceptions for AI data training. The law emphasizes resolving disputes between content creators and technology companies through collaborative measures like licensing agreements, positioning Japan as taking a business-friendly approach to AI regulation that balances innovation promotion with risk mitigation.
The South Korea AI Basic Act is comprehensive legislation passed by South Korea's National Assembly in December 2024 to establish a foundational legal framework for artificial intelligence governance in the country. The act aims to balance advancing Korea's national competitiveness in AI development and deployment while ensuring ethical standards and maintaining public trust in AI systems. Key provisions include establishing legal grounds for creating a national AI control tower to coordinate AI policy across government agencies and an AI safety institute to oversee AI risk management and safety standards. This legislation is particularly significant for AI developers, businesses operating in South Korea, and policymakers as it represents one of the first comprehensive national AI laws in Asia, providing clear regulatory guidelines for AI innovation while addressing safety and ethical concerns in the rapidly evolving AI landscape.
The South Korean AI Basic Law, also known as the AI Basic Act or South Korean AI Act (SKAIA), is comprehensive AI legislation approved and adopted by the South Korean National Assembly on December 26, 2024. This landmark law establishes a foundational regulatory framework for artificial intelligence governance in South Korea and will take effect in January 2026. The legislation addresses key aspects of AI development, deployment, and oversight within South Korean jurisdiction, providing legal structure for AI governance similar to other major AI regulatory initiatives globally. This law is essential for organizations operating AI systems in South Korea, legal professionals working in AI compliance, and policymakers studying national approaches to AI regulation, as it represents South Korea's comprehensive approach to governing artificial intelligence technologies at the national level.
South Korea's comprehensive Framework Act on the Development of Artificial Intelligence and Establishment of Trust represents the country's overarching national AI legislation, passed by the National Assembly in December 2024, promulgated in January 2025, and scheduled to take effect in January 2026. This landmark law establishes a regulatory framework for AI development while emphasizing the establishment of public trust in AI systems, positioning South Korea among the leading nations in AI governance alongside the EU AI Act and other major regulatory initiatives. The law is designed to govern AI development practices, ensure responsible AI deployment, and create standards for trustworthy AI systems across various sectors in South Korea. Organizations operating in or with South Korea, AI developers, and policymakers should pay close attention to this legislation as it will significantly impact AI governance practices in one of the world's most technologically advanced nations and may influence AI regulatory approaches in other Asian markets.