Governance frameworks
Governance practices, policy papers, and frameworks specific to AI agents and autonomous systems.
19 resources
Practices for Governing Agentic AI Systems
Shavit et al. at OpenAI propose seven practices for governing agentic systems: evaluating suitability, constraining action space, setting default behaviours, ensuring legibility, automatic monitoring, attributability, and interruptibility. Foundational reference for later governance frameworks.
Infrastructure for AI Agents
Chan et al. (Centre for the Governance of AI) argue agents need shared infrastructure - identity, credentials, reputation, action logs, dispute resolution - analogous to financial and web plumbing. Maps building blocks to governance goals.
AI Agents: Governing Autonomy in the Digital Age
Joe Kwon (Center for AI Policy) policy paper on governing autonomous AI, examining levers like capability evaluations, pre-deployment approvals, liability rules, and compute governance. Focused on US federal policy options for increasingly autonomous systems.
Scalable Runtime Governance for Agentic AI in Financial Services
Szpruch et al. propose a scalable runtime-governance architecture for agents in financial services, with policy engines, audit trails, and real-time guardrails tied to regulatory obligations. Worked examples cover algorithmic trading and customer-facing agents.
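The runtime-governance pattern described here - a policy engine gating agent actions, backed by an audit trail - can be sketched minimally. This is an illustrative reduction, not the paper's architecture; all names, rules, and thresholds below are hypothetical:

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class AuditRecord:
    action: str
    allowed: bool
    reason: str

@dataclass
class PolicyEngine:
    # Each rule inspects a proposed action and returns (allowed, reason);
    # every rule must pass before the agent may execute the action.
    rules: list[Callable[[dict], tuple[bool, str]]]
    audit_log: list[AuditRecord] = field(default_factory=list)

    def authorise(self, action: dict) -> bool:
        for rule in self.rules:
            ok, reason = rule(action)
            if not ok:
                self.audit_log.append(AuditRecord(action["name"], False, reason))
                return False
        self.audit_log.append(AuditRecord(action["name"], True, "all rules passed"))
        return True

# Hypothetical rule: cap order size for a trading agent at runtime.
def order_size_limit(action: dict) -> tuple[bool, str]:
    if action.get("name") == "place_order" and action.get("size", 0) > 10_000:
        return False, "order size exceeds runtime limit"
    return True, "within size limit"

engine = PolicyEngine(rules=[order_size_limit])
engine.authorise({"name": "place_order", "size": 50_000})  # blocked and logged
```

Because every decision (allow or block) lands in the audit trail, the log doubles as the evidence record regulators would inspect.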
Model AI Governance Framework for Agentic AI
Singapore IMDA framework dedicated to agentic AI, translating its general Model AI Governance Framework into agent-specific practices across five dimensions: oversight, traceability, reliability, interaction, and ecosystem. Aligned with the country's Companion Guide.
AI Agents in Action: Foundations for Evaluation and Governance
World Economic Forum white paper offering foundations for evaluating and governing enterprise agents, covering definitions, capability assessment, risk taxonomies, and oversight mechanisms. Synthesises practitioner input from WEF's AI Governance Alliance.
AI Agent Governance: A Field Guide
Institute for AI Policy and Strategy field guide cataloguing governance interventions across the agent lifecycle - alignment research, evaluations, deployment constraints, and incident response. Written for policymakers and deployers navigating overlapping proposals.
AI Agent Governance: A Field Guide (arXiv)
Kraprayoon et al.'s arXiv version of the IAPS field guide, systematising governance levers for AI agents across technical, organisational, and policy layers. Provides a taxonomy of interventions with maturity ratings and open research questions.
Characterising AI Agents for Alignment and Governance
Kasirzadeh and Gabriel propose a four-dimensional characterisation of AI agents - autonomy, goal complexity, generality, and sociality - and use it to structure alignment and governance questions across today's and future systems.
The State of Agentic AI Security and Governance
OWASP v1.0 state-of-the-field report on agentic AI security and governance in 2025, summarising threat models, vendor controls, standards activity, and practitioner surveys. Identifies gaps between attacker capabilities and defender tooling.
The regulation of delegation: Are AI advisers, agents and companions regulated in the UK?
Julia Smakman (Ada Lovelace) policy briefing assessing whether UK law currently regulates AI advisers, agents, and companions. Maps data protection, consumer, financial, and professional-services regimes against agent use cases and identifies gaps.
Legal analysis on harms from advanced AI assistants (AWO for Ada Lovelace Institute)
Ada Lovelace Institute press briefing summarising AWO's legal analysis of advanced AI assistants, which concludes existing UK consumer, equality, and data-protection laws do not adequately cover agent-specific harms and calls for targeted legislative reform.
Agentic AI: The liability gap your contracts may not cover
Clifford Chance analysis of contractual and tort liability gaps created by agents acting autonomously across systems. Examines principal-agent doctrine, software supplier liability, and data-processing contracts, with drafting suggestions for enterprise agent deployments.
Intelligent AI Delegation
Tomasev et al. (Google DeepMind) formalise delegation as a governance mechanism for autonomous AI, modelling principal-agent dynamics and proposing protocol designs that keep humans in control as agents handle more decisions on their behalf.
Levels of Autonomy for AI Agents
Feng et al. propose a five-level scheme for classifying AI agent autonomy, adapted from SAE driving levels, tied to oversight requirements and evaluation tests at each level. Intended to standardise how deployers describe autonomy claims.
Governing AI Agents
Noam Kolt's legal scholarship applies principal-agent theory from law and economics to AI agents, identifying information asymmetry, incentives, and loyalty problems. Proposes governance levers including disclosure duties, fiduciary analogues, and liability allocation.
The Principal-Agent Alignment Problem in AI
Hadfield-Menell's Berkeley technical report formalises AI alignment as a principal-agent problem with incomplete contracts, drawing on mechanism design. Introduces inverse reward design and cooperative inverse reinforcement learning as alignment approaches.
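The cooperative inverse reinforcement learning setup mentioned above can be stated compactly; this is a standard gloss of the formulation, and the notation may differ from the report's:

```latex
% CIRL: a two-player cooperative game between human H and robot R.
% Both maximise the human's reward, but only H observes the preference
% parameter \theta; R holds a prior belief P(\theta) and plans against it.
\max_{\pi^R} \; \mathbb{E}_{\theta \sim P(\theta)}
  \left[ \sum_{t=0}^{\infty} \gamma^{t} \,
  r\!\left(s_t, a^{H}_{t}, a^{R}_{t}; \theta\right) \right]
```

The incomplete-contract framing follows: the robot cannot be given the true reward up front, so it must infer θ from the human's behaviour while acting.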
AI and Human Oversight: A Risk-Based Framework for Alignment
Kandikatla and Radeljic propose a risk-based framework for calibrating human oversight of AI, tying oversight intensity to capability, context, and consequence. Maps oversight patterns (in-the-loop, on-the-loop, out-of-the-loop) to concrete risk tiers.
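A risk-tiered mapping like the one described can be sketched as a small decision function. The three axes follow the summary above, but the scoring scale and thresholds here are hypothetical:

```python
def oversight_pattern(capability: int, context: int, consequence: int) -> str:
    """Pick an oversight pattern from 1-5 risk scores on three axes
    (capability, context, consequence). Thresholds are illustrative."""
    # Conservative aggregation: the worst single axis drives the tier.
    risk = max(capability, context, consequence)
    if risk >= 4:
        return "human-in-the-loop"      # approval before each consequential action
    if risk >= 2:
        return "human-on-the-loop"      # live monitoring with power to intervene
    return "human-out-of-the-loop"      # post-hoc audit only
```

Taking the maximum rather than an average encodes the framework's risk-based intent: a single high-consequence axis should escalate oversight even if the other axes score low.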
The dilemmas of delegation: policy challenges of Advanced AI Assistants
Harry Farmer (Ada Lovelace) report on policy dilemmas raised by advanced AI assistants: concentration of power, undermining of user agency, and regulatory fragmentation. Proposes UK policy responses across competition, consumer, and data protection.