
Global AI regulations and frameworks: a country-by-country guide for legal, policy, and GRC teams
AI is reshaping industries, societies, and the global economy at a pace that demands clear and effective regulation. Governments, international organizations, and industry bodies are responding by building frameworks that ensure the safe, ethical, and lawful development and use of AI.
Understanding these regulatory approaches is critical for AI lawyers, policymakers, and GRC professionals aiming to guide companies through compliance and risk management.
AI governance will only grow more complex as new risks emerge and jurisdictions refine their positions. This blog post presents a detailed overview of key AI regulations and frameworks worldwide, organized by country and region. It also provides a comparison table and a global timeline of major regulatory milestones to support professionals who need to align organizational practices with evolving global standards.
Global overview: Key AI frameworks and regulations
The international community has produced several foundational frameworks to promote responsible AI. These frameworks influence national laws and offer guidance for both public and private sectors.
OECD AI Principles (2019)
Scope: International
Classification: Non-binding principles
Regulator/Body: OECD AI Observatory
Requirements: Encourage fairness, transparency, accountability, human-centric AI
Enforcement: Voluntary adoption by member and partner countries
UNESCO Recommendation on the Ethics of Artificial Intelligence (2021)
Scope: Global
Classification: Ethical guidelines, non-binding
Regulator/Body: UNESCO
Requirements: Protect human rights, promote fairness, ensure AI transparency
Enforcement: Voluntary national policy alignment
G7 Hiroshima AI Process (2023)
Scope: G7 countries and partners
Classification: Voluntary guiding principles
Regulator/Body: G7 Leaders’ process
Requirements: Conduct risk analysis, encourage best practices for trustworthy AI
Enforcement: Political commitment only
Council of Europe Framework Convention on AI (2024)
Scope: Council of Europe member states and non-member signatories
Classification: Legally binding treaty focused on human rights, democracy, and the rule of law
Regulator/Body: Council of Europe
Requirements: Risk and impact assessments for AI affecting human rights, safeguards against harmful AI uses
Enforcement: Treaty obligations once ratified
Global AI governance comparison table
Framework | Scope | Classification | Regulator/Body | Enforcement |
---|---|---|---|---|
OECD AI Principles | International | Non-binding principles | OECD AI Observatory | Voluntary |
UNESCO AI Ethics Recommendation | Global | Ethical guidelines | UNESCO | Voluntary |
G7 Hiroshima AI Process | G7 countries and partners | Voluntary guiding principles | G7 Process | Voluntary |
Council of Europe Framework Convention on AI | Member states and other signatories | Treaty on human rights protections | Council of Europe | Binding upon ratification |
North America: United States and Canada
The United States is taking a multi-layered approach to AI regulation, combining federal executive actions, voluntary frameworks, and emerging state-level initiatives. While there is no single federal AI law yet, the government has issued important directives such as Executive Order 14110, focusing on safe and trustworthy AI practices. Agencies like NIST, the FTC, and the Department of Defense are leading efforts to set standards and encourage ethical AI development. Several states, including New York and California, have started to introduce their own rules, especially concerning AI use in hiring and consumer protection.
Executive Order 14110 (2023)
Scope: Federal agencies and government-funded AI development
Classification: Strategic directive (not a law)
Regulator/Body: White House, NIST, FTC, DoD, OSTP
Requirements: Risk management, responsible development, AI content labeling, workforce training
Enforcement: Agency-level implementation through updated procurement and operational guidelines
NIST AI Risk Management Framework (2023)
Scope: Voluntary guidance for developers, organizations, government bodies
Classification: Voluntary best practices framework
Regulator/Body: National Institute of Standards and Technology (NIST)
Requirements: Promote trustworthy AI through risk assessments, documentation, continuous monitoring
Enforcement: Voluntary adoption, encouraged in federal procurement
State-level initiatives
New York City: Bias audit requirements for automated employment decision tools under Local Law 144
California: Draft regulations for automated decision-making technology under the California Consumer Privacy Act
Canada has taken a different route by focusing first on regulating automated decision systems within the public sector. The Directive on Automated Decision-Making introduced mandatory Algorithmic Impact Assessments, which require risk evaluations and transparency for AI tools used by government bodies. Additionally, Canada is moving toward a broader national AI oversight strategy, which could result in the creation of a dedicated AI regulatory office.
Directive on Automated Decision-Making (2019)
Scope: Federal government agencies
Classification: Mandatory policy for automated decision systems
Regulator/Body: Treasury Board of Canada Secretariat
Requirements: Conduct Algorithmic Impact Assessments (AIA), transparency notifications to individuals, quality assurance
Enforcement: Compliance monitoring through internal audits and oversight by the Chief Information Officer
Proposed National AI Oversight Office (2023)
Scope: National coordination of AI regulation across sectors (pending)
Classification: Proposed strategic entity, not yet law
Regulator/Body: Innovation, Science and Economic Development Canada (ISED)
Requirements: Policy alignment, funding ethical AI research, cross-sector oversight
Enforcement: Under discussion
Key AI regulation milestones in North America
Country | Regulation | Scope | Status |
---|---|---|---|
United States | Executive Order 14110 | Federal agencies and AI R&D | In effect |
United States | NIST AI Risk Management Framework | Voluntary for developers | Available |
Canada | Directive on Automated Decision-Making | Federal government | Mandatory |
Canada | National AI Oversight Office | National, cross-sector | In development |
Europe: European Union and member states
Europe is leading the global effort to regulate artificial intelligence with the adoption of the EU AI Act, the world’s first legally binding AI law. This regulation creates a structured risk-based framework that governs the development, deployment, and use of AI systems across the European Union. Complementary guidelines such as the EU Trustworthy AI Guidelines and various digital regulations also play a critical role. Many member states have released national strategies or sectoral policies to align with or complement the EU’s overarching approach.
EU AI Act (Regulation 2024/1689)
Scope: European Union, 27 member states
Classification: Binding regulation with four risk levels (unacceptable, high, limited, minimal)
Regulator/Body: European Commission, European AI Office, national supervisory authorities
Requirements: Risk classification, conformity assessments, post-market monitoring, public database registration for high-risk AI
Enforcement: Penalties up to 7 percent of global annual turnover for serious violations
EU Ethics Guidelines for Trustworthy AI (2019)
Scope: European stakeholders and organizations
Classification: Voluntary ethical guidelines
Regulator/Body: EU High-Level Expert Group on AI
Requirements: Transparency, accountability, non-discrimination, human agency, societal well-being
Enforcement: None, promotes voluntary adoption and self-assessment
EU Coordinated Plan on AI (2021 update)
Scope: Coordinated strategy across member states
Classification: Strategic plan (non-binding)
Regulator/Body: European Commission
Requirements: Increase investment, build data spaces, promote trustworthy AI
Enforcement: Monitoring progress through national AI strategies
Selected national strategies in Europe
Several European countries have published their own AI frameworks, often aligning with EU law but adding national priorities. Below are a few notable examples.
Germany: AI Strategy (Updated 2020)
Scope: Federal innovation and regulation strategy
Classification: Strategic plan
Regulator/Body: Federal Ministry of Education and Research, Federal Ministry for Economic Affairs and Energy
Requirements: Ethical AI R&D, public sector AI deployment, cross-sector AI standards
Enforcement: Policy coordination through federal programs
France: National AI Strategy (AI for Humanity, 2018)
Scope: Public investment and ethical guidelines
Classification: Strategic initiative
Regulator/Body: Ministry for the Economy and Finance, Ministry of Higher Education
Requirements: Promote ethical AI, fund research, ensure European digital sovereignty
Enforcement: Public funding with conditional requirements
Spain: National AI Strategy (ENIA 2020)
Scope: Promote trustworthy AI and innovation
Classification: National strategy aligned with EU AI Act principles
Regulator/Body: Secretariat of State for Digitalization and Artificial Intelligence
Requirements: AI innovation hubs, ethical guidelines, focus on SMEs
Enforcement: Voluntary adoption through public-private partnerships
United Kingdom: National AI Strategy (2021)
Scope: UK-wide post-Brexit AI vision
Classification: Strategic framework
Regulator/Body: Office for Artificial Intelligence
Requirements: Develop skills, promote innovation, ensure responsible AI governance through sectoral regulators
Enforcement: Sectoral oversight; the AI Safety Institute, announced in November 2023, supports frontier model evaluation
European AI regulation comparison table
Country | Regulation | Scope | Status |
---|---|---|---|
European Union | EU AI Act | All member states | Binding, phased application |
Germany | AI Strategy | National innovation and regulation | Strategic plan |
France | AI for Humanity Strategy | Public investment and ethics | Strategic plan |
Spain | ENIA 2020 | Ethical innovation and SME focus | Strategic plan |
United Kingdom | National AI Strategy | Post-Brexit national strategy | Strategic plan |
Asia-Pacific: China, Japan, South Korea, Singapore, and others
Asia-Pacific countries have been developing their own regulatory and policy frameworks for artificial intelligence, with approaches that vary widely depending on national priorities. Some countries, like China and South Korea, have adopted binding regulations, while others, like Singapore and Japan, focus more on voluntary ethical guidelines and standards. Regional trends show an increasing focus on risk-based regulation, human rights protections, and responsible AI deployment.
China has implemented strict mandatory rules targeting specific AI applications, especially in the areas of recommender systems and generative AI. These regulations reflect the government’s emphasis on social stability, content control, and national security. China’s regulatory bodies are actively enforcing these rules through audits, penalties, and operational requirements for AI providers.
Algorithm Recommendation Regulation (2022)
Scope: Internet platforms offering algorithmic recommendations
Classification: Binding regulation
Regulator/Body: Cyberspace Administration of China (CAC)
Requirements: Registration, content controls, user opt-out options for algorithmic feeds
Enforcement: Audits, penalties, possible license suspension
Generative AI Interim Measures (2023)
Scope: Providers of generative AI services such as large language models
Classification: Binding interim regulation
Regulator/Body: CAC with supporting ministries
Requirements: Registration, content labeling, data security reviews
Enforcement: Administrative penalties, service bans for non-compliance
Japan promotes a human-centric vision for AI, emphasizing ethical values and voluntary compliance. Its Social Principles of Human-Centric AI were released to guide industry and government on fairness, transparency, and accountability. Japan's approach aligns closely with international frameworks such as the OECD AI Principles.
Social Principles of Human-Centric AI (2019)
Scope: National advisory guidelines
Classification: Voluntary
Regulator/Body: Cabinet Office of Japan, Ministry of Economy, Trade and Industry (METI)
Requirements: Promote human dignity, transparency, security, and fairness in AI systems
Enforcement: Voluntary; indirectly reinforced through sectoral regulation
South Korea has passed one of the region's first dedicated AI laws. The AI Framework Act takes a risk-based approach, focusing on high-impact sectors such as healthcare, energy, and transportation. It also requires providers of foundation models and generative AI tools to meet transparency and labeling obligations.
AI Framework Act (2025)
Scope: Civilian sectors excluding defense
Classification: Binding national law
Regulator/Body: Ministry of Science and ICT (MSIT), new AI Agency
Requirements: High-impact AI registration, transparency standards, content labeling
Enforcement: Business suspension orders, administrative fines up to KRW 30 million
Singapore takes a proactive but non-binding approach through voluntary model frameworks for AI governance. The Model AI Governance Framework and the AI Verify toolkit promote transparency, fairness, and human oversight in AI deployment across sectors.
Model AI Governance Framework (2020 update)
Scope: Organizations operating in Singapore
Classification: Voluntary best practices
Regulator/Body: Personal Data Protection Commission (PDPC)
Requirements: Fairness, transparency, human oversight, accountability
Enforcement: Encouraged through sectoral initiatives and public-private partnerships
AI Verify (2022)
Scope: Voluntary AI model testing framework
Classification: Industry-led self-assessment
Regulator/Body: Infocomm Media Development Authority (IMDA)
Requirements: Validate AI systems against fairness and transparency criteria
Enforcement: Voluntary adoption
Other countries in the region are moving at different speeds. Australia has updated its AI Ethics Principles and introduced AI assurance frameworks at the state level. India passed its Digital Personal Data Protection Act in 2023, which affects AI data governance, and is working on a broader AI strategy. Indonesia, Malaysia, and the Philippines are also actively developing ethical guidelines and draft national AI strategies.
Asia-Pacific AI regulation comparison table
Country | Regulation | Scope | Status |
---|---|---|---|
China | Algorithm Recommendation Regulation | Internet platforms | In force |
China | Generative AI Interim Measures | Generative AI providers | In force |
Japan | Human-Centric AI Principles | National advisory | Voluntary |
South Korea | AI Framework Act | Civilian sectors | Effective Jan 2026 |
Singapore | Model AI Governance Framework | Organizational guidance | Voluntary |
Singapore | AI Verify Toolkit | AI model testing | Voluntary |
Middle East and Africa: Emerging AI regulatory initiatives
AI regulation efforts across the Middle East and Africa are gaining momentum, although most countries are still in the early stages of formalizing policies. National strategies tend to emphasize ethical AI development, data protection, and the use of AI for economic growth and public sector modernization. Binding AI-specific laws remain rare, but frameworks for responsible AI use and draft national strategies are becoming more common.
In the Middle East, the United Arab Emirates has established NORA, a federal office tasked with overseeing the national AI and automation strategy. While no binding AI law is in place yet, the UAE government is actively drafting its first AI regulation, alongside ethical guidelines. Individual emirates, such as Dubai and Abu Dhabi, are also building their own AI governance structures.
NORA Office (2024)
Scope: Federal coordination of AI, robotics, and automation policies
Classification: Policy coordination body, pending regulations
Regulator/Body: UAE Prime Minister’s Office
Requirements: National AI strategy drafting, ethical frameworks, standards development
Enforcement: Pending issuance of formal regulations
In Africa, Kenya has integrated AI principles into its data protection and digital economy policies. The country requires justification for automated decision-making under its Data Protection Act, offering some basic protections. A national AI strategy is under development, aiming to align AI growth with privacy and ethical standards.
Data Protection Act (2019)
Scope: Nationwide personal data governance, including AI-generated decisions
Classification: Binding law
Regulator/Body: Office of the Data Protection Commissioner
Requirements: Notification and explanation rights for individuals subject to automated decisions
Enforcement: Regulatory fines and compliance orders
National AI Strategy (Draft 2023)
Scope: Development of a national AI policy framework (in progress)
Classification: Strategic draft
Regulator/Body: Ministry of ICT
Requirements: Ethical AI use, innovation promotion, protection of fundamental rights
Enforcement: Pending final approval
Saudi Arabia is positioning itself as a regional AI leader with its National Strategy for Data and AI (NSDAI), launched under the Kingdom's Vision 2030 agenda. The strategy focuses on building infrastructure, establishing data governance, and encouraging the development of AI ethics guidelines.
National Strategy for Data and AI (Vision 2030)
Scope: Nationwide data and AI development strategy
Classification: Strategic plan
Regulator/Body: Saudi Data and Artificial Intelligence Authority (SDAIA)
Requirements: Promote ethical AI use, national data protection law compliance
Enforcement: Sectoral oversight through SDAIA initiatives
South Africa is developing a National AI Policy Framework that aims to balance innovation with ethics and inclusivity. Although still at the draft stage, it envisions the expansion of existing legal protections and ethical guidelines to cover AI applications.
Draft National AI Policy Framework (2023)
Scope: National AI governance vision
Classification: Policy draft
Regulator/Body: Department of Science and Innovation
Requirements: Ethical deployment, risk mitigation, skills development
Enforcement: No binding mechanism yet; anticipated through sectoral legislation
Protection of Personal Information Act (2013)
Scope: Data privacy law applicable to AI use of personal data
Classification: Binding legislation
Regulator/Body: Information Regulator South Africa
Requirements: Transparency in automated decisions, data subject rights
Enforcement: Fines and penalties for non-compliance
Middle East and Africa AI regulation comparison table
Country | Regulation | Scope | Status |
---|---|---|---|
United Arab Emirates | NORA Office | Federal coordination | Active, drafting regulations |
Kenya | Data Protection Act | Personal data and AI decisions | In force |
Saudi Arabia | National Strategy for Data and AI | Infrastructure, ethics, innovation | Active strategy |
South Africa | Draft National AI Policy Framework | National AI vision | Draft stage |
Global timeline of key AI regulation milestones
Tracking the major milestones in AI regulation provides essential context for understanding how quickly the global regulatory landscape has evolved. Below is a timeline of pivotal developments.
April 2019: The European Commission publishes the Ethics Guidelines for Trustworthy AI.
May 2019: The OECD adopts the first intergovernmental AI standard through the OECD AI Principles.
September 2021: The United Kingdom launches its National AI Strategy.
November 2021: UNESCO member states unanimously adopt the Recommendation on the Ethics of Artificial Intelligence.
October 2023: The United States issues Executive Order 14110 on Safe, Secure, and Trustworthy AI.
October 2023: G7 leaders endorse the Hiroshima AI Process principles.
March 2024: The European Parliament passes the final text of the EU AI Act.
May 2024: The European Council gives formal approval to the EU AI Act, finalizing the world's first comprehensive AI regulation.
May 2024: The Council of Europe adopts the Framework Convention on Artificial Intelligence, the first legally binding international AI treaty.
August 2024: The EU AI Act enters into force, with obligations phasing in from February 2025 through 2027.
January 2026: South Korea's AI Framework Act comes into effect, marking the first comprehensive AI law in the Asia-Pacific region.
Preparing for the future of AI regulation
Artificial intelligence governance is no longer a theoretical discussion. It is an active and expanding area of law and policy that organizations must treat with serious attention. Companies operating internationally will need to navigate a complex matrix of binding regulations, voluntary frameworks, and sector-specific requirements.
For AI lawyers, policymakers, and GRC professionals, staying updated on national strategies and enforcement trends is crucial. Building internal processes that reflect risk-based thinking, ethical principles, and regulatory foresight will help ensure that AI deployments are legally compliant and trusted by users and regulators alike.
Monitoring the rapid development of AI laws and frameworks is now a permanent responsibility for all stakeholders involved in AI governance.