There's no single AI act, but the regulation is already here.
If you're waiting for a Middle East equivalent of the EU AI Act, you'll be waiting a while. The region is building AI governance through three separate channels: national AI strategies and ethical charters (mostly nonbinding but gaining teeth), data protection laws (binding and increasingly enforced), and sector regulators (especially central banks) issuing operational AI guidance.
We researched 14 jurisdictions, scored them across 10 governance dimensions, and organized the results into three tiers.
The three channels of AI governance

The first channel is national AI strategies and ethical charters. Every Tier 1 country and most Tier 2 countries have published them. They set expectations for transparency, fairness, human oversight, and accountability. They're usually nonbinding. But they're gaining real weight through what we call "soft law hardening": government procurement teams reference them in RFPs, regulators fold them into examination frameworks, and large enterprises embed them in vendor contracts. Calling these documents "aspirational only" understates the compliance risk.
The second channel is data protection law. This is the binding core. The UAE, Saudi Arabia, Qatar, Oman, Egypt, Bahrain, Jordan, and Kuwait have all enacted or are operationalizing personal data protection statutes. For AI systems that touch personal data, these laws impose concrete obligations: lawful processing basis, data minimization, security controls, cross-border transfer restrictions, and in some cases automated decision-making safeguards.
The third channel is sector regulators. Financial regulators are out front. The UAE Central Bank's 2026 guidance note on responsible AI/ML use, Qatar Central Bank's AI guideline, and ADGM's rulebook provisions on AI and big data analytics are the most compliance-specific instruments we found. They create supervisory expectations that go beyond national charters: governance frameworks, approval processes, monitoring, transparency, and consumer complaint handling.
Maturity scores across 14 countries
We assessed each country across 10 dimensions on a 0-5 scale (50 points max):
- National AI strategy presence and specificity
- Published AI-specific guidelines or ethical frameworks
- Binding AI-specific regulations
- Privacy regime maturity and enforceability
- Sector regulator AI guidance
- Public-sector AI procurement standards
- Dedicated AI governance institutions
- Demonstrated enforcement capability
- Cross-border data governance rules
- Availability of operational tooling and templates

The results split into three tiers:
Tier 1 (Advanced/Operational, 27-36 points): UAE (36), Saudi Arabia (35), Israel (31), Qatar (30), Oman (28), Egypt (27). These countries have binding data protection regimes, published AI governance instruments, and sector-specific regulatory activity. They differ in posture: the UAE operates a layered multi-regulator ecosystem, Saudi Arabia centralizes through SDAIA, Qatar channels AI controls through its central bank, and Israel favors soft law and public-sector risk management.
Tier 2 (Emerging, 18-26 points): Bahrain (26), Jordan (22), Kuwait (18). Credible privacy foundations, fewer AI-specific operational instruments. Bahrain is closest to Tier 1 with a dedicated PDPL and procurement-facing AI guidance developed with the World Economic Forum.
Tier 3 (Early, 1-12 points): Lebanon (12), Iraq (7), Palestine (5), Syria (4), Yemen (1). Partial digital frameworks, political instability in some cases, and limited accessible primary sources.
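The scoring model above reduces to a simple sum-and-band calculation. Here's a minimal sketch in Python: the dimension names follow the list in this report, but the per-dimension scores in the example are illustrative placeholders, not the report's actual data.

```python
# Tier bands as described in this report: each tier is (label, floor score).
TIERS = [
    ("Tier 1 (Advanced/Operational)", 27),
    ("Tier 2 (Emerging)", 18),
    ("Tier 3 (Early)", 1),
]

def total_score(dimension_scores):
    """Sum ten 0-5 dimension scores into a 0-50 total."""
    assert all(0 <= s <= 5 for s in dimension_scores.values())
    return sum(dimension_scores.values())

def classify(total):
    """Map a total score to the first tier whose floor it reaches."""
    for label, floor in TIERS:
        if total >= floor:
            return label
    return "Unscored"

# Hypothetical scores for a Tier 1-profile country (not real report data).
example = {
    "strategy": 4, "ai_guidelines": 4, "binding_ai_regulation": 2,
    "privacy_regime": 4, "sector_guidance": 4, "procurement": 3,
    "institutions": 4, "enforcement": 3, "cross_border": 3, "tooling": 4,
}

print(total_score(example), classify(total_score(example)))
```

Note that the low `binding_ai_regulation` score in the example mirrors the regional pattern discussed below: even high-scoring countries earn most of their points from privacy, strategy, and sector guidance.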
How the top three countries compare

The UAE and Saudi Arabia lead, but their approaches look different in practice. The UAE's strength is breadth: it scores consistently across strategy, AI guidance, sector rules, public-sector AI, and tooling. Saudi Arabia concentrates power in SDAIA and scores higher on institutional structure. Qatar's scores are tighter: strong on strategy, privacy, and sector rules (driven by its central bank), but thinner on tooling and cross-border governance.
Regulatory coverage heatmap

The heatmap shows where governance is concentrated and where gaps remain.
- Privacy is the most consistent dimension. Tier 1 countries all score 3-4 on privacy, making data protection law the most reliable compliance anchor across the region.
- Binding AI-specific regulation barely exists. No country scores above 2 on this dimension. The region governs AI through privacy law and sector guidance, not dedicated AI statutes.
- Tooling and operational guidance vary widely. The UAE and Israel score 4 on tooling (templates, self-assessment tools, implementation guides). Most other countries score 1-2.
- Tier 3 is sparse across the board. Iraq and Palestine have institutional activity (Supreme AI Committee, cybercrime law) but almost nothing on enforcement, sector rules, or operational tooling.
Tier 1 score breakdown

The stacked breakdown shows how each Tier 1 country builds its total score. Privacy and strategy are consistently strong. The variance comes from sector rules, tooling, and institutional structure. Only the UAE and Saudi Arabia reach the advanced threshold (35 points).
Enforcement is arriving through data protection, not AI law
Regulators aren't waiting for AI-specific legislation to act. Enforcement is coming through data protection channels first:
- Free zone enforcement. DIFC's Commissioner of Data Protection has issued enforcement decisions. ADGM's Office of Data Protection has published compliance guidance and conducted supervisory reviews. These are the most structured enforcement capabilities for data-related AI obligations in the region.
- Central bank supervision. Financial regulators can fold AI expectations into standard examination cycles. The UAE Central Bank's 2026 guidance creates consumer protection and responsible AI expectations for every licensed financial institution.
- PDPL activation. Saudi Arabia's SDAIA has begun operationalizing PDPL enforcement. Egypt's executive regulations (Decree 816/2025) are creating a compliance timeline. Bahrain's Personal Data Protection Authority has enforcement powers under Law 30/2018.
If your AI system handles personal data in any of these jurisdictions, you're already within scope of an active regulator.
Regional milestones

The timeline shows acceleration since 2021. Before that, the region was mostly publishing strategies and charters. Since 2021: the UAE enacted its federal PDPL, Saudi Arabia operationalized its PDPL and implementing regulations, Oman enacted its PDPL, Jordan passed a new PDPL, Egypt issued executive regulations, and the UAE Central Bank published its AI guidance note. The pace of binding instruments is picking up.
What this means for compliance teams
If you're running AI governance across the Middle East:
- Start with data protection. Privacy law is the binding layer everywhere. Map your AI systems to the applicable data protection regime in each jurisdiction (and watch for fragmentation: the UAE alone has three regimes, federal, DIFC, and ADGM).
- Layer sector-specific requirements on top. If you're in financial services, guidance from the UAE Central Bank and Qatar Central Bank creates real supervisory expectations. Don't treat it as optional.
- Don't ignore soft law. National AI charters and ethical principles are becoming mandatory in practice through procurement, supervision, and contractual flow-down.
- Map to NIST AI RMF for cross-jurisdictional coherence. The GOVERN, MAP, MEASURE, MANAGE functions provide a baseline that works across the region and also maps to the EU AI Act's risk-based structure.
- Build regulator-ready evidence now. Data protection enforcement comes first, sector-specific AI oversight follows. Having documentation ready beats scrambling after a supervisory inquiry.
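One way to operationalize the NIST mapping step is a crosswalk keyed by RMF function. The function names (GOVERN, MAP, MEASURE, MANAGE) come from the framework itself; every obligation listed in this sketch is an illustrative assumption for a hypothetical program, not the report's actual mapping or legal advice.

```python
# Hypothetical crosswalk from NIST AI RMF functions to regional obligations.
# Obligation strings are illustrative placeholders only.
CROSSWALK = {
    "GOVERN":  ["Board-approved AI policy", "PDPL accountability roles"],
    "MAP":     ["AI system inventory", "Personal-data flow map per regime"],
    "MEASURE": ["Model monitoring metrics", "Bias testing evidence"],
    "MANAGE":  ["Incident response plan", "Consumer complaint handling"],
}

def open_items(completed):
    """Return, per RMF function, obligations not yet evidenced."""
    return {
        fn: [o for o in obligations if o not in completed]
        for fn, obligations in CROSSWALK.items()
        if any(o not in completed for o in obligations)
    }

done = {"Board-approved AI policy", "AI system inventory"}
for fn, gaps in open_items(done).items():
    print(fn, gaps)
```

A structure like this makes the "regulator-ready evidence" point concrete: each obligation becomes a row you can attach documentation to, and the gap report doubles as a pre-inquiry checklist.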
Download the full report
The complete research covers all 14 country dossiers, sector deep dives (financial services, public sector, healthcare), regulatory comparison tables, framework mapping against the EU AI Act and NIST AI RMF, and enforcement case studies.