Industry-focused governance materials.
23 resources
FDA's action plan for AI/ML-based Software as a Medical Device (SaMD). It addresses the regulatory framework for AI-enabled medical devices, including predetermined change control plans and good machine learning practices.
ECB supervisory guidance on using AI/ML in credit institutions. Covers model risk management, governance, validation, and supervisory expectations for AI applications in banking.
Framework for responsible AI adoption in the public sector. Addresses procurement, deployment, transparency, and accountability considerations specific to government use of AI systems.
UNESCO's guidance on AI in education, including ethical considerations, pedagogical implications, and governance recommendations for educational AI systems. Addresses student privacy, algorithmic fairness, and teacher autonomy.
EEOC technical guidance on assessing AI systems used in employment decisions for potential discrimination. Covers adverse impact analysis, validation requirements, and employer responsibilities when using AI hiring tools.
This issue brief examines the FDA's role as the primary federal agency responsible for regulating AI-powered medical devices and health tools. It provides guidance on understanding the regulatory framework for health AI technologies, including wearable devices and diagnostic tools for various medical conditions.
A research case study examining the establishment of AI governance frameworks within healthcare organizations in Canada. The study provides insights into practical implementation of AI governance in medical settings, with context on FDA-authorized AI medical devices and government AI research investments.
FDA guidance document providing recommendations to sponsors and stakeholders on using artificial intelligence to generate information and data for regulatory decision-making. The guidance covers AI applications in supporting determinations of safety, effectiveness, and quality for drug and biological products.
A GAO report examining how financial institutions use artificial intelligence and how federal financial regulators oversee AI implementation in the financial services sector. The report analyzes current regulatory approaches using existing laws, regulations, guidance, and risk-based examinations to govern AI use in financial services.
This report analyzes the current state of AI regulation in the financial services sector, with a focus on recent developments in US federal and state-level regulatory approaches. It provides updates on legislative changes, including the removal of proposed federal AI moratorium provisions, and their implications for financial institutions navigating the evolving regulatory environment.
This report examines AI regulation in financial services, highlighting the UK Financial Conduct Authority's approach of not introducing AI-specific rules due to the technology's rapid evolution. Instead, the FCA is focusing on applying existing regulatory principles to AI applications in financial services.
OECD's Observatory of Public Sector Innovation framework for artificial intelligence implementation in government. Provides guidance on how public sector organizations can set national AI priorities, investments, and regulations while using AI to transform policy creation and service delivery.
This report examines how federal and state governments are implementing artificial intelligence across agencies to enhance efficiency, decision-making, and service delivery. It focuses on governance frameworks, ethical practices, and collaborative approaches between government entities in AI adoption.
This research paper examines the deployment of automated decision-making systems in public sector contexts and their relationship to existing data governance regimes. The study analyzes how AI implementation in government settings may intensify existing power asymmetries and explores the unique challenges posed by algorithmic systems in democratic governance.
UNESCO's comprehensive guidance document developed within the framework of the Beijing Consensus to help education policy-makers prepare for and implement artificial intelligence in educational settings. The resource aims to ensure equitable access to AI technologies and their benefits in terms of innovation and knowledge across educational systems globally.
UNESCO's comprehensive guidance document for policy-makers on integrating artificial intelligence in education systems. The resource provides policy recommendations, governance frameworks, and strategic approaches for equitable and ethical AI implementation in educational contexts globally.
The U.S. Department of Education has issued official guidance on the use of artificial intelligence in educational settings. The guidance is part of Secretary McMahon's broader policy priorities including evidence-based literacy, education choice expansion, and returning education authority to states.
The US Department of Defense formally adopted five core principles for ethical AI development: responsible, equitable, traceable, reliable, and governable. These principles guide the Defense Department's approach to developing and deploying artificial intelligence capabilities in military and defense contexts.
The US Department of Defense officially adopted a comprehensive set of ethical principles to guide the development and deployment of artificial intelligence systems within military operations. These principles establish standards for responsible AI use in defense applications and critical infrastructure protection.
This academic paper examines ethical principles for AI systems used in national defense contexts, addressing both offensive and defensive capabilities. It explores key ethical challenges and requirements for military AI applications, emphasizing accountability, proportionality, and alignment with just war theory principles.
This research examines the expanding role of AI in human resources, focusing on how data-backed candidate recommendations must be translated into nuanced human judgments about culture fit and strategic alignment. The report emphasizes the need for upskilling HR teams in data literacy and change management to effectively integrate AI tools into talent management processes.
Legal guidance addressing emerging AI governance issues in hiring practices, focusing on anti-discrimination compliance and algorithmic bias risks. The resource examines how AI hiring tools may inadvertently perpetuate discrimination through biased data and proxy variables, potentially violating federal employment laws including Title VII, ADA, and ADEA.
A resource examining the application of artificial intelligence in human resources functions, with emphasis on regulatory compliance and legal considerations. The guide addresses specific regulations like GDPR that restrict the use of machine intelligence in hiring, promotion, and salary decisions.