Based on our analysis of regulatory frameworks across multiple jurisdictions (work that directly informs how we build VerifyWise), we've observed sharply different national approaches to AI governance. This post compares how the European Union, China, and the United States regulate AI, and weighs the strengths and weaknesses of each approach.

The European Union: A risk-based framework
The European Union is taking the most direct approach with its AI Act (sometimes abbreviated AIA). The law is the first of its kind and aims to regulate AI comprehensively.
Key features
- AI systems are categorized by risk levels
- High-risk AI applications face strict rules
- There are transparency requirements for all AI systems
- A European AI Office will oversee compliance
Strengths
- Covers many AI applications
- Provides clear guidelines for developers and users
- Focuses on protecting people's rights
Weaknesses
- May slow down innovation due to heavy regulations
- Could create tensions with other countries because of its extraterritorial reach, though we consider this unlikely
China: Data and algorithmic governance
China focuses primarily on data protection and algorithmic governance, prioritizing national security and social stability in its approach.
Key features
- Requires data to be stored within China
- Regulates how algorithms recommend content
- Links AI governance to its social credit system
Strengths
- Fast implementation of rules
- Strong government support for AI growth
- Focuses on keeping data within the country
Weaknesses
- Risks government overreach and surveillance
- Less emphasis on individual privacy rights
United States: Sector-specific and voluntary guidelines
The United States takes a different route. It focuses on specific industries and uses voluntary guidelines for AI development.
Key features
- An Executive Order promotes safe and trustworthy AI
- Regulations vary by sector (like healthcare and finance)
- Encourages industry standards that are not mandatory
Strengths
- Allows flexibility for innovation and quick tech development
- Regulations can be tailored to fit different industries
- Encourages collaboration between the government and businesses
Weaknesses
- May lead to gaps in regulations and inconsistent rules
- Lacks a comprehensive approach to protect against AI risks
Comparative analysis
The EU's approach is the most thorough, offering clear rules and strong rights protections, though its strict requirements may slow innovation.
China can implement rules quickly and backs AI development strongly, but its model raises concerns about privacy and surveillance. The U.S. approach favors flexibility and innovation but risks leaving protection gaps across sectors.
Global implications
These different approaches will likely shape global standards for AI. The EU's AI Act could become a global benchmark.
China's focus on data control might influence international data policies. The U.S. approach may encourage fast innovation but could struggle with cross-border issues.
What this means for your organization
If you operate AI systems across borders, start with these steps:
- Map your jurisdictional exposure. Identify which regulations apply based on where your AI systems are deployed, where your users are located, and where your data is processed.
- Build on the EU AI Act as a baseline. Its requirements are the most prescriptive, so compliance with the EU AI Act often covers a significant portion of other frameworks' requirements.
- Watch US state-level developments. With Colorado, Texas, and NYC leading, expect more jurisdictions to adopt AI-specific rules. Build flexibility into your governance program now.
- Don't wait for harmonization. Global alignment on AI governance is years away. Organizations that build adaptable compliance infrastructure today will have a structural advantage.
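To make the jurisdictional-mapping step above concrete, here is a minimal sketch of how an organization might track which frameworks its AI systems could trigger. The region codes, framework names, and trigger rules are simplified illustrative assumptions, not legal guidance:

```python
# Illustrative sketch only. The region-to-framework mapping below is a
# simplified assumption for demonstration; real applicability analysis
# requires legal review of each regulation's scope.

# Hypothetical trigger rules: region code -> frameworks that may apply there
FRAMEWORKS_BY_REGION = {
    "EU": ["EU AI Act", "GDPR"],
    "China": ["PIPL", "Algorithmic Recommendation Provisions"],
    "US-CO": ["Colorado AI Act"],
    "US-NYC": ["NYC Local Law 144"],
}


def map_exposure(deployment_regions, user_regions, data_regions):
    """Collect frameworks potentially triggered by where a system is
    deployed, where its users are, and where its data is processed."""
    exposure = set()
    all_regions = set(deployment_regions) | set(user_regions) | set(data_regions)
    for region in all_regions:
        exposure.update(FRAMEWORKS_BY_REGION.get(region, []))
    return sorted(exposure)


# Example: a system deployed in Colorado with EU users and EU data processing
print(map_exposure(["US-CO"], ["EU"], ["EU"]))
```

Because the EU triggers appear whenever users or data touch the EU, a run like the example above surfaces the EU AI Act and GDPR alongside the Colorado rules, which is why the post suggests treating the EU AI Act as a compliance baseline.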