European Commission releases official GPAI guidelines clarifying provider obligations, transparency requirements, and compliance paths before August 2025 enforcement.
At VerifyWise, we track regulatory developments across multiple jurisdictions to keep our governance tooling current. On July 18, the European Commission took another step toward operationalizing the EU AI Act by publishing official guidelines for providers of general-purpose AI (GPAI) models. These guidelines come two weeks before the AI Act's obligations begin to apply on 2 August 2025, and they offer long-awaited clarity for developers, researchers, and companies building foundational AI technologies.
These guidelines signal the EU's commitment to tech sovereignty, regulatory clarity, and responsible innovation. By translating high-level legal language into practical instructions, the Commission is making it easier for both open-source communities and commercial developers to comply with confidence.

The document clarifies which models qualify as GPAI and spells out the obligations attached to them, with additional requirements for the most capable, high-impact models. These high-impact models, the types that could power autonomous agents or major consumer applications, must undergo risk assessments, implement risk mitigation strategies, and demonstrate alignment with EU values such as human dignity, democratic oversight, and safety-by-design.
By releasing these detailed guidelines, the Commission is reshaping Europe's position in global AI governance. For startups, academia, and open-source contributors, the message is clear: transparency and documentation are your compliance currency. For larger AI companies, particularly those building multi-modal or large-scale generative models, this is a wake-up call to embed governance and risk management deeper into their development lifecycle.
The guidelines also reinforce that open-source models aren't automatically exempt. If a model is released under a free and open license, providers still need to ensure transparency conditions are met.
This includes publishing documentation, disclosing training data sources (where feasible), and informing users of known limitations.
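In practice, tracking those transparency items is a record-keeping task. As a minimal sketch, the class below shows one way a team might capture them per model and flag what is still missing; the field names and `ModelTransparencyRecord` name are illustrative assumptions, not part of the guidelines or any specific tool.

```python
from dataclasses import dataclass, field


@dataclass
class ModelTransparencyRecord:
    """Illustrative per-model record of the transparency items above."""
    model_name: str
    license: str
    documentation_url: str                      # published documentation
    training_data_sources: list                 # disclosed where feasible
    known_limitations: list = field(default_factory=list)

    def missing_items(self) -> list:
        """Return the names of transparency fields still left empty."""
        gaps = []
        if not self.documentation_url:
            gaps.append("documentation_url")
        if not self.training_data_sources:
            gaps.append("training_data_sources")
        if not self.known_limitations:
            gaps.append("known_limitations")
        return gaps


record = ModelTransparencyRecord(
    model_name="example-gpai-7b",
    license="Apache-2.0",
    documentation_url="https://example.com/model-card",
    training_data_sources=["public web crawl (filtered)"],
)
print(record.missing_items())  # → ['known_limitations']
```

A check like `missing_items()` could gate a release pipeline so a model is not published until each transparency field is filled in.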
This move complements the ongoing AI Pact, a voluntary framework where companies can show early compliance with the AI Act before it becomes fully enforceable. With these guidelines now public, companies participating in the AI Pact have a solid foundation to build on.
VerifyWise builds source-available AI governance software used by organizations to manage risk, compliance, and oversight across their AI portfolios. Our editorial team draws on hands-on experience implementing governance workflows for regulated industries and fast-scaling AI teams.
Learn more about VerifyWise → Start your AI governance journey with VerifyWise today.