EU AI Act omnibus: what changed on 7 May 2026 and what to do about it
The Council and Parliament reached provisional political agreement on a Digital Omnibus that delays the EU AI Act's high-risk obligations and reshapes how the Act interacts with sectoral law. Here is the new timeline, the new prohibitions, and where compliance teams should still keep their foot on the gas.
On 7 May 2026, the Council and the European Parliament reached provisional political agreement on the Digital Omnibus on AI. High-risk obligations under Annex III now apply from 2 December 2027, and high-risk AI embedded in regulated Annex I products from 2 August 2028. Formal adoption is still pending, but the AI Office, the Council press release and the major law firms covering the deal are all working from these dates. Treat them as your planning baseline.
Here's what moved, what didn't, and where the work this quarter still belongs.
The new timeline
The Act's overall architecture didn't move. Risk-based classification, the 4 tiers, the conformity assessment regime, the GPAI track, the AI Office's oversight role. All of it stays. What changed is the application dates for high-risk obligations, plus the grace period for Article 50(2) transparency.
| Provision | Previous date | New date | Status |
|---|---|---|---|
| Article 5 prohibitions | 2 February 2025 | 2 February 2025 | In force |
| Article 4 AI literacy | 2 February 2025 | 2 February 2025 | In force |
| GPAI obligations (Articles 51–55) | 2 August 2025 | 2 August 2025 | In force |
| Article 50(2) watermarking and synthetic content disclosure | 2 August 2026 | 2 December 2026 | 4-month delay (grace period compressed from 6 months to 3) |
| National regulatory sandboxes | 2 August 2026 | 2 August 2027 | 12-month delay |
| High-risk Annex III standalone systems | 2 August 2026 | 2 December 2027 | 16-month delay |
| High-risk AI embedded in Annex I regulated products | 2 August 2027 | 2 August 2028 | 12-month delay |
Two things about these dates that aren't obvious from the table.
The dates apply regardless of whether harmonised standards and Commission guidance are ready by then. The original argument for the delay was that businesses needed access to the technical standards the Commission is still drafting. The political compromise was to fix new dates anyway. If standards arrive late, the dates don't move with them.
And the watermarking deadline is the nearest live obligation. If you ship any generative feature into the EU market, you need UI labelling, machine-readable metadata embedding and detection capability operational by 2 December 2026. That's about 7 months of engineering. Plan accordingly.
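The machine-readable-metadata piece of Article 50(2) is the most concrete of the three. Production systems will likely converge on C2PA content credentials, but the mechanics can be illustrated with something much simpler. The sketch below uses Pillow's PNG text chunks; the key names (`ai-generated`, `generator`) are illustrative assumptions, not a mandated schema.

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def tag_as_ai_generated(in_path: str, out_path: str, model_id: str) -> None:
    """Embed machine-readable AI-provenance metadata in a PNG.

    Key names are illustrative only; a real deployment would more
    likely emit C2PA content credentials than ad-hoc text chunks.
    """
    img = Image.open(in_path)
    meta = PngInfo()
    meta.add_text("ai-generated", "true")
    meta.add_text("generator", model_id)
    img.save(out_path, pnginfo=meta)

def is_tagged_ai_generated(path: str) -> bool:
    """Detection side: read the provenance tag back out."""
    return Image.open(path).text.get("ai-generated") == "true"
```

Note the asymmetry the regulation implies: you need both the embedding path (every generated asset leaves your pipeline tagged) and the detection path (you can verify the tag survives your own processing steps). Text-chunk metadata is trivially strippable, which is exactly why the hardened standards work matters.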
A new prohibition was added to Article 5
The agreement adds a new prohibited practice under Article 5: AI systems that generate non-consensual intimate imagery (NCII) or child sexual abuse material (CSAM), including so-called nudifier apps. There's a safe harbour for systems with effective preventive safeguards.
This wasn't in the Commission's original proposal. Both Council and Parliament pushed it through during trilogue. Two practical implications:
- If you provide a general-purpose image generation model, the safe-harbour design needs to be part of your risk management documentation. There is no "we'll add it later" path.
- If you build downstream apps on top of someone else's generative model, the prohibition still applies to you. You're the provider of the AI system producing the output, so the safeguards have to live somewhere in your stack too.
Sectoral law and the Annex I compromise
The hardest part of the negotiation was how the AI Act should interact with existing product-safety legislation. Where sectoral law already imposes AI-relevant requirements (Machinery Regulation, MDR/IVDR for medical devices, toys, lifts, watercraft and others), parallel AI Act applicability creates double regulation. Nobody wanted that.
The compromise has 2 pieces.
Machinery gets a full carve-out from direct AI Act applicability. Health and safety requirements for high-risk AI in machinery products are added through delegated acts under the Machinery Regulation itself. Machinery manufacturers face one conformity assessment regime, with AI-specific requirements added on top.
Other Annex I sectors get a conditional carve-out via implementing acts. The Commission can limit AI Act application where sectoral law already covers comparable AI-specific requirements. The Commission also picks up a new obligation: publish guidance helping operators of high-risk AI systems in regulated sectors comply without doing the work twice. The actual scope decisions are deferred to 2027 implementing acts.
If you produce regulated AI-enabled products outside the machinery regulation, your exact compliance pathway is still being drafted. Plan for AI Act applicability by default. Watch the 2027 implementing-act track for sector-specific scope reductions.
What didn't change (the operationally important bits)
3 items survived the simplification round, and the headline coverage will mostly miss them.
Article 6(3) registration was retained. The Commission proposed deleting the obligation to register, in the EU database, AI systems operating in Annex III contexts that providers self-assess as not high-risk. Both Council and Parliament rejected the deletion. The obligation survives in a streamlined form under Annex VIII Section B.
Self-assessment used to be an internal memo. Now it's a public filing. If you decide your HR or credit or biometric tool isn't high-risk, that decision has to go in the EU database. National competent authorities and the AI Office get a searchable list of borderline classification calls, which is the kind of input that drives thematic enforcement sweeps. The "classify out of scope and hope nobody notices" approach was always a stretch. It now requires a public defence of the position.
Strict necessity for bias correction was preserved. The Commission had proposed loosening the standard for processing special categories of personal data for bias detection from "strictly necessary" to "necessary." Parliament refused. The strict necessity standard stays. Bias auditing programs that process race, health or sexual orientation data still need a documented strict-necessity justification.
GPAI obligations continue to apply. Articles 51–55 have applied since August 2025 and the omnibus doesn't touch them. Foundation model providers should keep working through the GPAI Code of Practice and systemic risk thresholds as before.
What to keep on the 2026 work plan
The temptation, with a 16-month delay, is to ease off on inventory and classification. That would be a mistake, for 2 reasons.
The work itself doesn't get easier with time. The hard part of AI Act compliance isn't the documentation template. It's finding every AI system in your organisation, deciding which Annex III category each falls into, and getting product and engineering to maintain the inventory as new systems ship. None of that depends on the standards being final. Start now and you have roughly 18 months to refine. Start in late 2027 and you have weeks, not months.
The underlying risk also doesn't move with the AI Act dates. AI-caused harm in 2026 is still subject to existing sectoral law: product liability, GDPR, MDR, anti-discrimination statutes, sector regulators like the FCA or BaFin. The omnibus extends formal Article 73 incident reporting and conformity assessment. It does not extend the law of negligence or the GDPR.
Three pieces of work to keep moving this year:
- Inventory and classify every AI system. Standalone, embedded, internally built, procured. Everything else depends on the inventory. The omnibus doesn't change that.
- Stand up watermarking and synthetic content disclosure. Live by 2 December 2026 if you ship any generative feature into the EU. That's 7 months of engineering time.
- Decide your Article 5 position on NCII/CSAM. Image generation, multimodal, any downstream app producing output. Either you're clearly out of scope, you have a documented safe-harbour design, or you stop offering the feature in the EU.
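The inventory work in the first bullet is, at its core, a data-modelling problem: one record per system, a classification field, and a flag for whether an Article 6(3) opt-out has been filed. A minimal sketch of that record follows; the field names and category labels are illustrative assumptions, not the Annex VIII schema.

```python
from dataclasses import dataclass
from enum import Enum

class AnnexIIICategory(Enum):
    """Illustrative subset of Annex III areas; not the full legal list."""
    BIOMETRICS = "biometrics"
    EMPLOYMENT = "employment"
    CREDIT = "essential services / credit"
    NOT_HIGH_RISK = "assessed not high-risk (Art. 6(3))"

@dataclass
class AISystemRecord:
    name: str
    owner_team: str
    provenance: str            # "built" or "procured"
    category: AnnexIIICategory
    justification: str         # the written classification rationale
    registered_in_eu_db: bool = False  # required for Art. 6(3) opt-outs

def needs_public_filing(record: AISystemRecord) -> bool:
    """An Art. 6(3) 'not high-risk' call must go in the EU database."""
    return (record.category is AnnexIIICategory.NOT_HIGH_RISK
            and not record.registered_in_eu_db)
```

The useful property of modelling it this way is that the "classify out of scope" decision stops being a memo and becomes a flagged record your compliance tooling can query, which mirrors what the retained registration obligation now requires publicly.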
And three you can ease off on:
- Notified body engagement for Annex III systems. December 2027 is far enough out that queues are real but not yet critical. Stay in dialogue, hold off on full assessment commitments.
- Final conformity documentation for high-risk systems. Draft the structure, fill in detail once standards are closer to final. The Commission still has harmonised standards under Article 40 to publish.
- CE marking workflows for AI-embedded products. Wait for the 2027 implementing acts before you decide whether your sectoral pathway absorbs the AI Act requirements or you need a parallel track.
Don't bet on a second delay
Could the AI Act be delayed again? Probably not, and planning around the possibility isn't sensible.
The political case for the first postponement leaned on harmonised standards readiness and competitiveness pressure from the Commission's Digital Networks roadmap. Both arguments have been spent. The institutions held the line on architecture and on the dates. Reopening the file again would mean conceding that the simplification flagship failed.
A regulation that gets delayed every time enforcement approaches loses its credibility as a regulation. The Brussels Effect depends on the AI Act being a binding text. The institutions know this, and the competitiveness framing of the current deal is the framing they will defend if anyone proposes a second postponement.
So plan for December 2027 and August 2028. The work that needs to happen before then is the work for this quarter.
Sources
- Council of the EU press release, 7 May 2026
- Hogan Lovells: EU legislators agree to delay for high-risk AI rules
- Travers Smith: EU agrees to delay key AI Act compliance deadlines
- Lewis Silkin: Council and Parliament agree to slim down and delay parts of the EU AI Act
For how VerifyWise maps to each AI Act obligation, see our EU AI Act compliance solution page.
About the VerifyWise team
VerifyWise builds source-available AI governance software that helps organisations manage risk, compliance and oversight across their AI portfolios. Our editorial team draws on hands-on experience implementing governance workflows for regulated industries and fast-growing AI teams.