Fiduciary duties in AI
Fiduciary duties in AI refer to the ethical and legal obligations that those managing AI systems owe to the people affected by their decisions. These duties include acting in users' best interests, maintaining transparency, avoiding conflicts of interest, and ensuring accountability. As AI takes on a larger role in healthcare, finance, education, and public services, the case for applying fiduciary principles to its governance has grown stronger.
Why it matters
AI systems often make or inform decisions with lasting consequences for individuals who have little power to understand or challenge those decisions. The resulting power imbalance between AI operators and affected people is exactly the kind of relationship that fiduciary law was designed to address.
When those designing or deploying AI do not prioritize the well-being of the people affected, the risk of harm goes up. Fiduciary duties help fill the ethical gaps that technical standards alone leave open, and they support compliance with laws like the GDPR and frameworks such as ISO/IEC 42001 for AI management systems.
The stakes are high because AI can embed conflicts of interest structurally. A recommendation algorithm may favor products that generate higher commissions. A diagnostic AI may be optimized for throughput rather than accuracy. Without fiduciary-like obligations, these conflicts can persist undetected.
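As a minimal illustration of how such a conflict gets baked into the objective itself, consider the sketch below. The weights and field names are hypothetical, not drawn from any real system:

```python
# Hypothetical sketch: a blended optimization objective embeds a structural
# conflict of interest into every ranking the system produces.

def recommendation_score(product, commission_weight=0.5):
    """Score products by a blend of predicted user benefit and platform commission."""
    user_benefit = product["predicted_user_return"]   # what serves the user
    platform_gain = product["commission_rate"]        # what serves the platform
    # Any nonzero commission_weight tilts rankings away from the user's interest:
    # the conflict lives in the objective itself, not in any single decision.
    return (1 - commission_weight) * user_benefit + commission_weight * platform_gain

products = [
    {"name": "low-fee index fund", "predicted_user_return": 0.9, "commission_rate": 0.1},
    {"name": "partner annuity",    "predicted_user_return": 0.4, "commission_rate": 0.9},
]

# With commission_weight=0.5, the partner product outranks the option that is
# better for the user, a structural conflict of interest.
ranked = sorted(products, key=recommendation_score, reverse=True)
print([p["name"] for p in ranked])  # ['partner annuity', 'low-fee index fund']
```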
The three core fiduciary duties applied to AI
Fiduciary law rests on three foundational duties. Each one translates into the AI context with meaningful modifications.
Duty of care
In AI, the duty of care requires that systems be rigorously validated before deployment, that their assumptions be tested and documented, and that human overseers maintain enough technical literacy to meaningfully review algorithmic outputs. Relying on an unexplainable AI model without validation is legally analogous to relying on an unverified analyst.
Delegating decisions to a machine does not absolve the human fiduciary from oversight. A financial advisor who blindly follows an AI recommendation without independent judgment may be breaching their duty of care, just as they would if they followed an unqualified human assistant's advice without question.
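A hedged sketch of what such pre-deployment diligence might look like in code follows; the metric names, thresholds, and sign-off fields are assumptions for illustration, not a prescribed standard:

```python
# Hypothetical pre-deployment gate reflecting the duty of care described above;
# thresholds, metric names, and the sign-off requirement are illustrative only.

def ready_to_deploy(validation_report):
    """Block deployment until validation, documentation, and human sign-off exist."""
    checks = {
        "validated_on_holdout": validation_report.get("holdout_auc", 0) >= 0.80,
        "assumptions_documented": bool(validation_report.get("assumptions_doc")),
        "subgroup_performance_reviewed": bool(validation_report.get("subgroup_report")),
        "human_reviewer_signed_off": bool(validation_report.get("reviewer")),
    }
    failed = [name for name, passed in checks.items() if not passed]
    return (len(failed) == 0, failed)

ok, missing = ready_to_deploy({"holdout_auc": 0.86, "assumptions_doc": "docs/assumptions.md"})
print(ok, missing)  # False ['subgroup_performance_reviewed', 'human_reviewer_signed_off']
```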
Duty of loyalty
The duty of loyalty is the most contested in AI contexts. AI systems can embed conflicts of interest not through intentional malfeasance but through optimization objectives that diverge from user interests. A system can be technically competent (satisfying a narrow duty of care) while still being structurally disloyal, optimizing for metrics that do not align with user welfare.
Those deploying AI must prioritize user interests over commercial or institutional incentives. How models are trained, what objectives they optimize for, and how their outputs are presented to users all fall within the scope of loyalty.
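One way to operationalize this is a loyalty audit that compares the deployed ranking against a counterfactual ranking driven purely by user welfare. The sketch below assumes illustrative data and scoring functions rather than any established auditing standard:

```python
# Illustrative loyalty audit: measure how much recommendations change once
# business incentives are stripped out of the scoring function.

def rank(items, score_fn, top_k=3):
    return [i["name"] for i in sorted(items, key=score_fn, reverse=True)][:top_k]

def loyalty_divergence(items, deployed_score, user_welfare_score, top_k=3):
    """Fraction of top-k positions that change when business incentives are removed."""
    deployed = rank(items, deployed_score, top_k)
    loyal = rank(items, user_welfare_score, top_k)
    return sum(a != b for a, b in zip(deployed, loyal)) / top_k

items = [
    {"name": "fund A", "user_return": 0.9, "commission": 0.1},
    {"name": "fund B", "user_return": 0.6, "commission": 0.8},
    {"name": "fund C", "user_return": 0.5, "commission": 0.2},
]

divergence = loyalty_divergence(
    items,
    deployed_score=lambda p: 0.5 * p["user_return"] + 0.5 * p["commission"],
    user_welfare_score=lambda p: p["user_return"],
)
# A divergence well above zero suggests the system is optimizing for something
# other than the user's interest and warrants a loyalty review.
print(divergence)  # 0.666...
```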
Duty of disclosure
Fiduciaries must inform principals of material facts. Applied to AI, that means disclosing how a system makes decisions, what data trained it, what biases it may carry, its known limitations, and whether it influences a user's interaction in ways they would not expect.
The duty of disclosure connects directly to explainability. A system whose decisions cannot be explained cannot be audited for loyalty. And a user who does not understand how a system works cannot exercise informed consent.
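A machine-readable disclosure record is one way to make these obligations concrete. The following is a minimal sketch in a model-card spirit; the field names and example values are illustrative, not mandated by any regulation:

```python
# Minimal sketch of a disclosure record mirroring the items listed above.
from dataclasses import dataclass, field

@dataclass
class AIDisclosure:
    system_name: str
    decision_logic_summary: str      # how the system makes decisions, in plain language
    training_data_summary: str       # what data trained it
    known_biases: list[str] = field(default_factory=list)
    known_limitations: list[str] = field(default_factory=list)
    user_facing_influences: list[str] = field(default_factory=list)  # unexpected ways it shapes interaction

disclosure = AIDisclosure(
    system_name="loan pre-screening model",
    decision_logic_summary="Gradient-boosted model scoring applications on income, debt, and history.",
    training_data_summary="Historical applications from 2015-2022, anonymized.",
    known_biases=["Under-represents thin-file applicants"],
    known_limitations=["Not validated for self-employed income"],
    user_facing_influences=["Pre-fills recommended loan amounts based on predicted approval odds"],
)
```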
The fiduciary gap problem
A particularly important structural challenge identified in recent scholarship is the fiduciary gap.
Traditional fiduciary protection depends on a learned intermediary: a professional who interprets tools and applies judgment for the client's benefit. When AI systems become autonomous agents rather than tools, that intermediary effectively disappears.
A human supervisor overseeing thousands of AI-managed accounts or medical interactions cannot exercise meaningful individual judgment on each decision. The fiction of human oversight collapses at scale. If the services traditionally provided by a lawyer, doctor, or financial advisor start coming directly from software, the question becomes: where does the duty to act in the individual's best interest go?
Two solutions have been proposed. The first is a digital fiduciary model that extends fiduciary duties directly to developers and deployers of AI, requiring them to embed loyalty as a hard design constraint. The second is an expanded product liability approach that redefines product defects to include design choices that foreseeably produce disloyal outcomes.
The information fiduciary framework
Yale Law Professor Jack Balkin's information fiduciary framework is the most influential scholarly proposal for applying fiduciary law to the digital information economy. Balkin argues that platforms and digital services collecting and exploiting user data occupy positions of trust analogous to traditional fiduciaries.
His framework identifies three duties: confidentiality (not weaponizing data against users), care (protecting users from harms flowing from data exploitation), and loyalty (not acting against users' basic interests).
The framework has shaped US legislative discussions and has been explicitly extended to AI systems in recent scholarship. Critics, however, argue it may be too weak. It imports a professional relationship model into a context where the power asymmetry is far greater, and it may end up legitimizing data practices rather than constraining them.
Fiduciary duties in the EU AI Act
The EU AI Act does not use the word "fiduciary," but it creates a dense structure of obligations that are functionally fiduciary in character.
Article 16 (Provider obligations) requires providers to implement quality management systems, maintain technical documentation, enable human oversight, and conduct post-market monitoring. These mirror the fiduciary duty of care.
Article 50 (Transparency obligations) requires that people be informed when they are interacting with an AI system, and the Act separately requires providers of general-purpose AI models to publish summaries of their training data. These parallel the duty of disclosure.
Legal analysis has argued that the EU AI Act effectively creates two novel director-level fiduciary duties: AI due care (directors must be able to question, understand, and monitor AI systems shaping corporate decisions) and AI loyalty oversight (directors must ensure that delegated AI systems remain impartial and do not serve vendor interests or marginalize stakeholders).
Fiduciary duties by sector
Finance and robo-advisors
Finance is the most legally developed sector for AI fiduciary analysis. Robo-advisors are registered investment advisers under the Investment Advisers Act of 1940, which imposes fiduciary duties as a statutory matter. The SEC has explicitly warned against AI-washing, where firms claim AI capabilities that do not exist or are not properly governed, treating it as a form of fiduciary breach.
The EU's ESMA has stated that financial institutions must take full responsibility for the actions of AI systems they deploy, rejecting any possibility of delegating accountability to vendors. Best practices in this area include AI governance committees, ongoing model validation programs, vendor oversight protocols, human-in-the-loop controls, and transparent disclosures to clients about AI's role.
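A human-in-the-loop control of the kind listed above might look like the following sketch; the thresholds, field names, and routing labels are assumptions for illustration:

```python
# Sketch of a human-in-the-loop gate: AI recommendations are executed
# automatically only within pre-approved bounds; everything else is routed
# to a named human adviser for independent review.

def route_recommendation(rec, max_auto_trade_value=10_000, max_risk_score=0.6):
    """Return 'auto-execute' only for low-stakes, low-risk recommendations."""
    if rec["trade_value"] <= max_auto_trade_value and rec["risk_score"] <= max_risk_score:
        return "auto-execute"
    # Escalation preserves the adviser's independent judgment (duty of care)
    # instead of rubber-stamping the model's output.
    return "escalate-to-human-adviser"

print(route_recommendation({"trade_value": 50_000, "risk_score": 0.8}))  # escalate-to-human-adviser
```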
Healthcare
Healthcare AI raises the most acute fiduciary questions because the stakes are highest and the information asymmetry most severe. Some scholars have proposed imposing fiduciary duties directly on healthcare AI developers, arguing that these developers exercise meaningful control over clinical decisions affecting vulnerable patients.
California's Senate Bill 1120, effective January 2025, prohibits healthcare plans from denying or modifying care based solely on AI algorithms and requires physician review. The provision amounts to a statutory loyalty duty. The EU AI Act classifies most health AI applications as high-risk, requiring conformity assessments and post-market monitoring.
The standard of care is itself shifting. As accurate AI becomes widely available, physicians may face negligence liability for failing to use established AI tools, creating new fiduciary-like obligations in the process.
Public services
For AI deployed by governments and public institutions, fiduciary-like obligations arise from constitutional and administrative law. Government agencies using AI to make decisions about benefits, immigration, housing, and criminal justice hold considerable power over affected citizens. That power imbalance creates relationships of trust that warrant fiduciary-style protections.
Corporate board responsibilities
Corporate directors have fiduciary duties of care and loyalty, and these increasingly extend to AI governance.
Boards that over-rely on AI tools without adequate independent verification may exceed business judgment rule protection and breach the duty of care. When AI systems pursue goals misaligned with shareholder and stakeholder interests, directors may breach the duty of loyalty if they failed to maintain adequate oversight.
The Business Judgment Rule, the traditional shield protecting directors from personal liability for good-faith decisions, is being reconsidered in AI contexts. Judicial deference now depends on directors' ability to demonstrate informed stewardship: the capacity to explain how AI-assisted decisions were reached, not merely that they were made. Board-level AI literacy carries a new premium.
Real-world example
A financial advisory platform uses a recommendation algorithm to suggest investment products. Although the platform presents the tool as objective, it prioritizes higher-commission products from partner institutions. The platform does not disclose the conflict, misleading users who assume the system is working in their best interest.
The case leads to a regulatory inquiry and new disclosure requirements. The firm responds by updating its governance model: increasing transparency, commissioning an independent audit of its algorithms, and disclosing to clients how recommendations are generated and what conflicts of interest exist.
Best practices
- Align system objectives with user welfare, not just performance metrics. Optimization targets should not create structural conflicts of interest.
- Treat the duty of loyalty as a hard design constraint. Review optimization objectives for potential conflicts between user interests and business interests, rather than treating loyalty as an aspiration.
- Involve external stakeholders and ethicists in key design and deployment decisions. An independent ethics board provides an additional layer of accountability.
- Make it easy for users to report harms, request explanations, and challenge AI decisions. Effective redress requires both technical infrastructure and real institutional commitment.
- Evaluate potential conflicts of interest and unintended consequences before deployment, including fiduciary-specific analysis alongside standard risk assessment.
- Document decision logic. Traceability of model inputs, weights, and outcomes is necessary for fiduciary accountability. If you cannot explain how a specific decision was reached, accountability breaks down (see the sketch after this list).
- Designate specific individuals who bear fiduciary-like responsibility for AI systems, analogous to a compliance officer or data protection officer.
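The traceability practice above can be made concrete with a decision log. The sketch below assumes illustrative field names and a simple append-only file; a production system would need tamper-evident storage and careful data-minimization choices:

```python
# Minimal sketch of a decision log entry supporting fiduciary traceability.
import json, hashlib
from datetime import datetime, timezone

def log_decision(model_version, inputs, output, explanation, reviewer=None):
    """Record enough context to reconstruct and explain a specific decision later."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,  # which weights produced this decision
        "input_hash": hashlib.sha256(json.dumps(inputs, sort_keys=True).encode()).hexdigest(),
        "inputs": inputs,                # or a pointer to them, if sensitive
        "output": output,
        "explanation": explanation,      # feature attributions, rule trace, etc.
        "human_reviewer": reviewer,      # who, if anyone, exercised oversight
    }
    with open("decision_log.jsonl", "a") as f:
        f.write(json.dumps(record) + "\n")
    return record

log_decision(
    model_version="credit-risk-v3.2",
    inputs={"income": 52000, "debt_ratio": 0.31},
    output={"decision": "approve", "score": 0.74},
    explanation={"top_features": ["debt_ratio", "income"]},
    reviewer="analyst_042",
)
```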
FAQ
Do fiduciary duties apply to all AI systems?
Not legally, in most jurisdictions. But they are strongly recommended in high-stakes domains like finance, healthcare, criminal justice, and public administration where user vulnerability is high. The EU AI Act creates functionally fiduciary obligations for high-risk AI systems without using fiduciary language.
Who should enforce fiduciary duties in AI?
Governments, professional associations, sector-specific regulators, and internal governance bodies each have a role. Independent third-party audits and certification schemes can support enforcement as well. In finance, the SEC and ESMA are already enforcing fiduciary-like obligations for AI-driven advisory services.
How are fiduciary duties different from privacy or security regulations?
Privacy and security laws address specific technical risks. Fiduciary duties are broader: they focus on the fundamental obligation to act in the interests of those who depend on you, encompassing ethical alignment, transparency, loyalty, and accountability in decision-making.
Can open-source AI projects be held to fiduciary standards?
Yes, especially when they are widely adopted or integrated into high-impact systems. The challenge is that once model weights are released, the developer loses direct control over downstream use. But maintaining good documentation, publishing known limitations, and providing responsible-use guidance help fulfill fiduciary-like responsibilities.
Who bears liability when an AI system violates fiduciary-like duties?
The question remains unresolved. Liability could fall on the original developer, the fine-tuner who modified the model, the deployer who integrated it, or the end user. Most governance frameworks recommend clear contractual allocation of responsibilities, but no jurisdiction has established definitive precedent for the full AI value chain.
How does explainability relate to fiduciary duties?
Explainability is a prerequisite for fiduciary accountability. A system whose decisions cannot be explained cannot be audited for loyalty, and a user who does not understand how a system works cannot give informed consent. If a system cannot explain its decisions in terms a principal can evaluate, the fiduciary duty of disclosure may be structurally impossible to fulfill.