EU releases pivotal guidelines for general-purpose AI models under the AI Act
On 18 July 2025, the European Commission took another concrete step toward operationalizing the landmark EU AI Act by publishing official guidelines for providers of general-purpose AI (GPAI) models. The guidelines arrive just over two weeks before the AI Act’s obligations for GPAI models begin to apply on 2 August 2025, and they offer long-awaited clarity for developers, researchers, and companies building foundational AI technologies.
These guidelines signal the EU’s commitment to tech sovereignty, regulatory clarity, and responsible innovation. By translating high-level legal language into practical instructions, the Commission is making it easier for both open-source communities and commercial developers to comply with confidence.
What’s inside the guidelines?
The document defines GPAI models as those:
Trained with computational power exceeding 10²³ floating point operations, or FLOPs (a rough way to gauge this threshold is sketched at the end of this section)
Capable of generating language (whether as text or audio), text-to-image, or text-to-video content
It also:
Clarifies who qualifies as a provider
Defines what constitutes placing a model on the market
Explains exemptions for open-source models, provided they meet transparency requirements
Details how aligning with the General-Purpose AI Code of Practice can support compliance
Outlines additional duties for high-impact models that may pose systemic risks, including risks to safety, fundamental rights, and autonomy
These high-impact models — the types that could power autonomous agents or major consumer applications — must undergo risk assessments, implement risk mitigation strategies, and demonstrate alignment with EU values like human dignity, democratic oversight, and safety-by-design.
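To get a feel for the 10²³ FLOP criterion mentioned above, here is a minimal back-of-envelope sketch in Python. It relies on the widely cited approximation that training a dense transformer costs roughly 6 FLOPs per parameter per training token; the model sizes and token counts are illustrative assumptions, not figures from the guidelines.

```python
# Back-of-envelope check against the AI Act's 10^23 FLOP threshold for GPAI
# models. This sketch assumes the common approximation that training a dense
# transformer costs roughly 6 FLOPs per parameter per training token; the
# model sizes and token counts below are illustrative, not official data.

GPAI_FLOP_THRESHOLD = 1e23  # compute threshold named in the guidelines


def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    """Rough training-compute estimate: ~6 FLOPs per parameter per token."""
    return 6 * n_params * n_tokens


# Hypothetical model configurations, compared against the threshold.
examples = [
    ("7B params on 2T tokens", 7e9, 2e12),
    ("70B params on 15T tokens", 70e9, 15e12),
]

for name, params, tokens in examples:
    flops = estimated_training_flops(params, tokens)
    side = "above" if flops > GPAI_FLOP_THRESHOLD else "below"
    print(f"{name}: ~{flops:.1e} FLOPs ({side} the 10^23 threshold)")
```

Under this approximation, a mid-sized model trained on a couple of trillion tokens lands below the threshold, while a frontier-scale training run clears it comfortably, which illustrates why the compute criterion works as a rough scale cut-off.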
Why this matters
This is a critical moment for Europe’s position in global AI governance.
By releasing these detailed guidelines, the Commission is:
Empowering developers across the EU to innovate without fear of regulatory ambiguity
Providing a competitive advantage to AI providers who align early with EU norms
Encouraging responsible development and deployment of foundational AI models
Strengthening the global relevance of European AI standards, especially as other jurisdictions look to the EU as a regulatory role model
Implications for developers and companies
For startups, academia, and open-source contributors, the message is clear: transparency and documentation are your compliance currency. For larger AI companies, particularly those building multi-modal or large-scale generative models, this is a wake-up call to embed governance and risk management more deeply into the development lifecycle.
The guidelines also reinforce that open-source models aren’t automatically exempt. If a model is released under a free and open license, providers still need to ensure transparency conditions are met, such as publishing documentation, disclosing training data sources (where feasible), and informing users of known limitations.
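As a purely illustrative sketch of what such a transparency record could capture, the snippet below gathers those elements in a simple structured form. The structure and field names are our own assumptions; the AI Act and the Commission’s guidelines do not prescribe this exact format.

```python
# Hypothetical transparency record for an open-source GPAI release.
# All field names are illustrative assumptions; the AI Act and the
# Commission's guidelines do not mandate this exact structure.
model_documentation = {
    "model_name": "example-gpai-7b",  # placeholder identifier
    "license": "Apache-2.0",  # a free and open license
    "training_data_sources": [  # disclosed where feasible
        "publicly available web text (filtered for quality)",
        "openly licensed code repositories",
    ],
    "known_limitations": [  # informing users of known limitations
        "may produce factually incorrect or biased output",
        "not evaluated for safety-critical or high-risk uses",
    ],
    "intended_use": "general-purpose text generation and assistance",
}
```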
Looking ahead
This move complements the ongoing AI Pact, a voluntary framework through which companies can demonstrate early compliance with the AI Act before it becomes fully enforceable. With these guidelines now public, companies participating in the AI Pact have a solid foundation to build on.