Code of conduct for AI development
A code of conduct for AI development is a set of written principles and guidelines that shapes how developers, researchers, and organizations build artificial intelligence systems.
It outlines the ethical boundaries, safety expectations, and accountability standards that must be followed across the entire AI lifecycle, from design to deployment.
This matters because without shared norms and responsibilities, AI development can quickly drift into harmful, opaque, and biased practices.
A clear code of conduct supports trustworthy systems, minimizes risks, and helps meet legal obligations such as those in the [EU AI Act](https://artificialintelligenceact.eu/), while aligning with frameworks like ISO/IEC 42001 and the OECD AI Principles.
“Only 16% of AI professionals believe their organization has a clear ethical framework for AI development.” (Source: IBM Global AI Adoption Index, 2023)
What is a code of conduct for AI?
A code of conduct for AI defines how people and organizations should act when designing, training, testing, and using AI systems. It reflects shared values like transparency, fairness, accountability, and privacy. These codes often align with international frameworks or organizational missions.
Why codes of conduct matter for AI teams
They provide direction when legal requirements are unclear or outdated. AI governance and compliance teams rely on them to flag early risks, audit processes, and define what “responsible AI” actually means inside the company. Risk teams use them to align development choices with business values and public expectations.
A real-world example comes from the UK Information Commissioner’s Office (ICO). Its “AI auditing framework” includes expectations about fairness, accountability, and accuracy that companies must meet to avoid legal penalties. A code of conduct translates such expectations into daily practice.
Common principles in AI conduct codes
Many AI conduct codes around the world include the following principles:
- Transparency: Clear documentation of how the model was trained, what data it used, and what decisions it influences.
- Accountability: Assigning people or teams who are responsible for the model’s impact and updates.
- Non-discrimination: Ensuring data and models do not reinforce social or historical biases.
- Privacy: Applying techniques such as data minimization and differential privacy to protect user information (a minimal sketch follows this list).
- Human oversight: Keeping people involved in important decisions, especially those affecting rights or freedoms.
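To make the privacy principle concrete, here is a minimal Python sketch of the two techniques named in the list: data minimization (dropping fields a use case does not need) and a differentially private count using Laplace noise. The function names, field names, and the epsilon value are illustrative assumptions, not part of any specific framework; a production system should use a vetted differential-privacy library rather than hand-rolled noise.

```python
import random

def minimize(record: dict, allowed_fields: set) -> dict:
    """Data minimization: keep only the fields the use case actually needs."""
    return {k: v for k, v in record.items() if k in allowed_fields}

def laplace_noise(scale: float) -> float:
    """Sample Laplace(0, scale) noise.

    The difference of two iid Exponential(rate=1/scale) draws follows
    a Laplace(0, scale) distribution.
    """
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

def dp_count(records, predicate, epsilon: float) -> float:
    """Release a count with epsilon-differential privacy.

    A counting query changes by at most 1 when one record is added or
    removed (sensitivity 1), so Laplace noise with scale 1/epsilon
    satisfies epsilon-DP.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

if __name__ == "__main__":
    users = [
        {"id": 1, "age": 34, "opted_in": True},
        {"id": 2, "age": 51, "opted_in": False},
        {"id": 3, "age": 29, "opted_in": True},
    ]
    # Minimization: analytics here only needs age and opt-in status.
    slim = [minimize(u, {"age", "opted_in"}) for u in users]
    # DP release: noisy count of opted-in users with an example epsilon.
    print(round(dp_count(slim, lambda r: r["opted_in"], epsilon=1.0), 2))
```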
Best practices for implementing AI codes of conduct
Good principles are not enough—they must be applied properly. Here’s how organizations make them real.
Start by linking the code to clear roles and responsibilities. Developers should know what “compliance” looks like. Risk managers should have access to checklists or workflows that reflect the code. And executives should back it up with resources.
Best practices include:
- Build the code with your team: Co-create it with input from engineers, product managers, lawyers, and ethicists. This improves adoption.
- Train regularly: New hires, contractors, and partners should all learn what the code means and how to apply it.
- Review and revise: Update the code every year based on new laws, technologies, or incidents.
- Audit compliance: Use tools to track whether real development work aligns with the stated principles (one lightweight approach is sketched below).
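One lightweight way to audit compliance, as mentioned in the last point above, is to script a check that every model project carries the governance artifacts the code of conduct requires. The sketch below assumes a layout where each subdirectory of a root folder is one model project; the artifact file names (MODEL_CARD.md, FAIRNESS_REPORT.md, RISK_OWNER.txt) are hypothetical examples, not a standard.

```python
from pathlib import Path

# Hypothetical artifacts a code of conduct might require in each project.
REQUIRED_ARTIFACTS = ["MODEL_CARD.md", "FAIRNESS_REPORT.md", "RISK_OWNER.txt"]

def audit_projects(root: str) -> dict:
    """Map each model project to the required artifacts it is missing.

    Assumes every immediate subdirectory of `root` is one model project.
    """
    findings = {}
    for project in Path(root).iterdir():
        if not project.is_dir():
            continue
        missing = [a for a in REQUIRED_ARTIFACTS if not (project / a).exists()]
        if missing:
            findings[project.name] = missing
    return findings

if __name__ == "__main__":
    for name, missing in sorted(audit_projects("./models").items()):
        print(f"{name}: missing {', '.join(missing)}")
```

A check like this can run in CI, so a pull request fails when a project drops below the documented baseline.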
Practical use-cases
- A chatbot company writes a code of conduct to prevent its AI from generating harmful or false medical advice.
- A financial institution updates its AI risk policy to include “non-discrimination” testing on all credit-scoring models (see the sketch after this list).
- A social media firm creates guidelines to ensure AI moderation does not disproportionately remove lawful speech from marginalized groups.
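To illustrate the credit-scoring use case above, the sketch below computes the demographic parity difference, the gap in approval rates between two groups. It is deliberately the simplest possible non-discrimination test; real programs combine several metrics, and the 0.3 threshold here is an arbitrary example rather than a regulatory figure.

```python
def demographic_parity_difference(approved, group,
                                  group_a: str, group_b: str) -> float:
    """Absolute gap in approval rates between two groups (0.0 = parity)."""
    def rate(g: str) -> float:
        decisions = [a for a, grp in zip(approved, group) if grp == g]
        return sum(decisions) / len(decisions) if decisions else 0.0
    return abs(rate(group_a) - rate(group_b))

if __name__ == "__main__":
    # Toy credit decisions: True means the application was approved.
    approved = [True, False, True, True, True, False, True, False]
    group    = ["A", "A", "A", "A", "B", "B", "B", "B"]
    gap = demographic_parity_difference(approved, group, "A", "B")
    print(f"demographic parity difference: {gap:.2f}")
    # Example threshold; real policies set this per model and jurisdiction.
    assert gap <= 0.3, "approval-rate gap exceeds the agreed threshold"
```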
FAQ
What is the difference between a code of conduct and an AI policy?
A code of conduct explains the values and behaviors expected during AI work. An AI policy includes formal rules, procedures, and legal terms that enforce those behaviors. The code often supports the policy.
Is a code of conduct legally binding?
Not on its own. But it can support legal compliance by guiding ethical decision-making and showing a proactive approach to regulation.
Who should write the code of conduct?
Ideally, a cross-functional team. Include engineering, compliance, legal, and ethics roles. Organizations can also consult civil society, impacted communities, or academic experts.
How often should the code be updated?
Annually is recommended, or sooner if there’s a major legal change, audit failure, or public incident involving your AI.
Related Entries
- AI assurance
- AI incident response plan
- AI model inventory
- AI model robustness
- AI output validation
- AI red teaming