Fiduciary duties in AI

Fiduciary duties in AI refer to the ethical and legal obligations that organizations or individuals managing AI systems owe to those affected by their decisions. These duties include acting in the best interests of users, maintaining transparency, avoiding conflicts of interest, and ensuring accountability. As AI begins to influence more decisions in healthcare, finance, education, and public services, applying fiduciary principles to its governance becomes essential.

This matters because AI systems often make or inform decisions with lasting consequences. If those designing or deploying AI do not prioritize the well-being of the people affected, the risk of harm increases. Fiduciary duties help fill the ethical gaps left by technical standards alone. They support compliance with laws like the General Data Protection Regulation (GDPR) and frameworks such as the ISO/IEC 42001 standard for AI management systems.

“Nearly 60% of consumers believe companies using AI should have a legal obligation to act in their best interest.”
(Source: AI Trust Index 2023)

Understanding fiduciary relationships in the AI context

In traditional settings, fiduciary duties arise in relationships like trustee-beneficiary or doctor-patient. The person in the position of power is expected to prioritize the interests of the dependent party. In AI, the power imbalance exists between those who develop or operate AI systems and those affected by their outputs.

Key duties include:

  • Duty of care: AI systems must be tested, documented, and shown to be safe for their intended use.

  • Duty of loyalty: Operators must not prioritize profit over user well-being or manipulate outputs for advantage.

  • Duty of disclosure: Users must be informed about how AI decisions are made and what their rights are (one way to operationalize this is sketched after this list).
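
One way to make the duty of disclosure concrete is to attach a machine-readable disclosure record to every AI-generated decision, so users and auditors can see what produced it and how to contest it. The following Python sketch is a minimal illustration under assumed conventions, not a prescribed implementation; the DecisionDisclosure structure and its field names are hypothetical.

    from dataclasses import dataclass, field, asdict
    from datetime import datetime, timezone

    @dataclass
    class DecisionDisclosure:
        """Hypothetical machine-readable disclosure attached to one AI decision."""
        model_id: str                      # which model produced the decision
        purpose: str                       # what the decision is used for
        factors: list[str]                 # main inputs that influenced the output
        conflicts_of_interest: list[str]   # known conflicts, e.g. partner commissions
        appeal_channel: str                # how the user can contest the decision
        issued_at: str = field(
            default_factory=lambda: datetime.now(timezone.utc).isoformat()
        )

    # Example usage with illustrative values:
    disclosure = DecisionDisclosure(
        model_id="credit-scoring-v3",
        purpose="loan pre-approval",
        factors=["income", "repayment history", "existing debt"],
        conflicts_of_interest=[],  # must be populated whenever any exist
        appeal_channel="https://example.com/appeals",
    )

    # Serialize the record alongside the decision so it can be inspected later.
    print(asdict(disclosure))

Publishing such a record with each decision also supports the duties of care and loyalty, since undisclosed conflicts become auditable omissions rather than invisible ones.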

Real-world example of fiduciary concerns in AI

A financial advisory platform used a recommendation algorithm to suggest investment products. While the tool claimed objectivity, it prioritized higher-commission products from partner institutions. The platform did not disclose this conflict, misleading users who assumed the system was working in their best interest.

The case led to a regulatory inquiry and new disclosure requirements. It also prompted the firm to update its governance model to better reflect fiduciary responsibilities, including more transparency and an independent audit of its algorithms.
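An independent audit of the kind described above might begin with a simple statistical check: do products that pay the platform a commission appear in its recommendations more often than a neutral system would suggest? The Python sketch below is a deliberately simplified, hypothetical version of such a check; the data fields and the flagging threshold are illustrative assumptions, and a real audit would also control for product quality, fees, and suitability.

    # Hypothetical audit check: measure the share of recommendations that
    # pay the platform a commission and flag the stream for human review
    # if commissioned products dominate.

    recommendations = [
        {"product": "FundA", "pays_commission": True},
        {"product": "FundB", "pays_commission": True},
        {"product": "FundC", "pays_commission": False},
        {"product": "FundD", "pays_commission": True},
    ]

    commissioned = sum(r["pays_commission"] for r in recommendations)
    share = commissioned / len(recommendations)

    THRESHOLD = 0.5  # illustrative cutoff, not a regulatory standard
    if share > THRESHOLD:
        print(f"Flag for review: {share:.0%} of recommendations pay a commission")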

Best practices for applying fiduciary principles in AI

Integrating fiduciary duties into AI development and operation requires both technical and organizational changes. It also requires a cultural shift in how AI teams approach user relationships.

Best practices include:

  • Build trust by design: Align system objectives with user welfare, not just performance metrics.

  • Create an ethics board: Involve external stakeholders and ethicists in key design and deployment decisions.

  • Establish redress mechanisms: Make it easy for users to report harms or request explanations.

  • Conduct impact assessments: Evaluate potential conflicts of interest and unintended consequences before deployment.

  • Document decision logic: Ensure traceability of model inputs, weights, and outcomes (see the logging sketch after this list).
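
As a starting point for the "document decision logic" practice, teams sometimes write a structured record for each model decision: the inputs, the model version (a practical proxy for the exact weights in use), and the output. The Python sketch below assumes a JSON-lines log file and hypothetical field names; it is one possible shape for such a record, not a standard.

    import json
    import hashlib
    from datetime import datetime, timezone

    def log_decision(log_path: str, model_version: str, inputs: dict, output) -> None:
        """Append a traceable record of one model decision to a JSON-lines log."""
        record = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "model_version": model_version,  # identifies the exact weights deployed
            "inputs": inputs,                # the features the model actually saw
            "input_hash": hashlib.sha256(
                json.dumps(inputs, sort_keys=True).encode()
            ).hexdigest(),                   # tamper-evident fingerprint of the inputs
            "output": output,
        }
        with open(log_path, "a") as f:
            f.write(json.dumps(record) + "\n")

    # Example usage with hypothetical values:
    log_decision(
        "decisions.jsonl",
        model_version="recommender-2024-05-01",
        inputs={"age": 41, "risk_profile": "moderate"},
        output={"recommended_product": "FundC"},
    )

A log like this makes redress mechanisms workable in practice: when a user requests an explanation, the record ties their outcome to a specific model version and input set.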

These practices align with the OECD AI Principles and with the guidelines of the EU High-Level Expert Group on AI.

FAQ

Do fiduciary duties apply to all AI systems?

Not as a matter of law in most jurisdictions, but they are strongly recommended in high-stakes domains such as finance, healthcare, criminal justice, and public administration, where user vulnerability is high.

Who should enforce fiduciary duties in AI?

Governments, professional associations, and internal governance bodies should each play a role. Independent third-party audits and certification schemes can also support enforcement.

How are fiduciary duties different from privacy or security regulations?

Privacy and security laws address specific technical risks. Fiduciary duties are broader, focusing on ethical alignment, transparency, and accountability in decision-making.

Can open-source AI projects be held to fiduciary standards?

Yes, especially if they are widely adopted or integrated into high-impact systems. Maintaining good documentation, transparency, and governance helps fulfill this responsibility.

Summary

Fiduciary duties in AI represent an important step toward ethical and responsible AI governance. These duties protect individuals and groups who rely on or are impacted by AI, especially in complex and opaque decision environments.

 

Disclaimer

Please note that the contents of our website (including any legal contributions) are for non-binding informational purposes only and do not constitute legal advice. This information cannot and is not intended to replace individual, binding legal advice from, for example, a lawyer who can address your specific situation. All information is therefore provided without guarantee of correctness, completeness, or currency.
