Featured resources
EU AI Act - Official Full Text
The EU Artificial Intelligence Act is the world's first comprehensive legal framework for AI. It takes a risk-based approach to regulation, classifying AI systems as prohibited, high-risk, limited-risk, or minimal-risk. The regulation sets requirements for high-risk AI systems, including risk management, data governance, transparency, human oversight, and accuracy, and it applies to both providers and deployers of AI systems in the EU market.
OECD Principles on AI
The OECD Principles on AI were the first intergovernmental standard on AI. They promote innovative and trustworthy AI that respects human rights and democratic values. The five principles cover inclusive growth, human-centered values, transparency, robustness, and accountability.
Recently added
California SB 243: Companion AI Guardrails Act
California Senate Bill 243, signed into law by Governor Gavin Newsom on October 13, 2025, makes California the first state to mandate specific safety safeguards for AI companion chatbots. The law takes effect January 1, 2026, and requires companion chatbot operators to implement safeguards for users' interactions with AI, particularly for minors. Key requirements include disclosure that users are interacting with AI, content guardrails that prevent sexually explicit material from reaching minors, suicide prevention protocols with crisis resources, and annual reporting to California's Office of Suicide Prevention. The law also creates a private right of action for injured individuals.
Practices for Governing Agentic AI Systems
OpenAI's white paper defines agentic AI systems, identifies the parties in the agentic AI system lifecycle, and proposes a set of baseline responsibilities and safety best practices for each party. It offers seven practices for keeping autonomous AI agents' operations safe and accountable, addressing systems that can pursue complex goals with limited direct supervision.
OWASP AI Bill of Materials (AIBOM)
OWASP's AI Bill of Materials (AIBOM) project establishes a standard format for documenting AI system components, training data sources, model provenance, and security configurations. Just as SBOMs transformed software supply chain transparency, AIBOMs aim to bring clarity to AI system composition, enabling organizations to track data lineage and model dependencies throughout the AI lifecycle.
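To make the idea concrete, here is a minimal sketch of the kind of record an AIBOM could capture for a single model, written as a Python dictionary. The field names and structure are illustrative assumptions for this sketch only; they do not reproduce the OWASP AIBOM schema or any other published BOM format.

```python
# Illustrative only: a hypothetical AIBOM-style record for one model.
# Field names are invented for this sketch and do not follow the
# OWASP AIBOM project's actual schema.
import json

aibom_entry = {
    "component": "sentiment-classifier",          # the AI system component being described
    "version": "2.3.1",
    "model": {
        "base_model": "distilbert-base-uncased",  # upstream dependency (model provenance)
        "fine_tuned_by": "internal-ml-team",
        "training_completed": "2025-06-30",
    },
    "training_data": [                            # data sources and their lineage metadata
        {"source": "customer-feedback-2024", "license": "internal", "pii_reviewed": True},
        {"source": "public-reviews-corpus", "license": "CC-BY-4.0", "pii_reviewed": False},
    ],
    "security": {                                 # security-relevant configuration
        "weights_hash": "sha256:<digest of model artifact>",
        "serving_endpoint_auth": "oauth2",
    },
}

print(json.dumps(aibom_entry, indent=2))
```

In practice, a record like this would be generated and validated as part of the build pipeline, alongside the conventional SBOM, so that data lineage and model dependencies stay current as the system evolves.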
C2PA Content Credentials Specification
The C2PA (Coalition for Content Provenance and Authenticity) Content Credentials specification establishes a technical standard for cryptographically binding provenance information to digital content. Led by Adobe, Microsoft, Intel, BBC, Truepic, Sony, OpenAI, Google, Meta, and Amazon, the standard enables verification of where a piece of content originated, how it has been modified, and whether AI was involved in producing it, across the media ecosystem.
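At its core, the approach binds a signed provenance claim to the exact bytes of a content item. The sketch below illustrates only that hash-and-sign idea; it is not the C2PA format, which embeds COSE-signed manifests backed by X.509 certificate chains, and the Ed25519 key pair and manifest fields here are assumptions made for the example.

```python
# Conceptual sketch of binding provenance to content via hash-and-sign.
# Not the C2PA format: real Content Credentials use COSE signatures,
# X.509 certificate chains, and a manifest embedded in the asset itself.
import hashlib
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

content = b"...image or video bytes..."  # placeholder payload

# Provenance claim bound to the exact bytes of the content.
manifest = {
    "generator": "example-camera-app/1.0",   # hypothetical producing tool
    "ai_generated": False,
    "content_sha256": hashlib.sha256(content).hexdigest(),
}
manifest_bytes = json.dumps(manifest, sort_keys=True).encode()

# Sign the manifest; the signature vouches for both the claim and the content hash.
private_key = Ed25519PrivateKey.generate()
signature = private_key.sign(manifest_bytes)

# Verification: check the signature, then confirm the content still matches the hash.
public_key = private_key.public_key()
public_key.verify(signature, manifest_bytes)  # raises InvalidSignature if tampered
assert hashlib.sha256(content).hexdigest() == manifest["content_sha256"]
print("provenance verified for this content")
```

Verification fails if either the content bytes or the manifest is altered after signing, which is what makes the provenance claim tamper-evident.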
IAPP AIGP: AI Governance Professional Certification
The AIGP (AI Governance Professional) is the IAPP's premier global credential for professionals who design, implement, and oversee responsible AI governance programs. Updated in February 2025 with a new Body of Knowledge, the certification validates competency in AI governance across the full lifecycle, including policy, risk, controls, documentation, oversight, and continuous monitoring, aligned with frameworks such as the EU AI Act, the NIST AI RMF, and ISO/IEC 42001.