
AI Ethical Use Charter

Defines prohibited AI behaviors, acceptable use boundaries, and the ethical commitments the organization makes to its stakeholders.

1. Purpose

This charter sets the ethical boundaries for AI use at [Organization Name]. It defines what AI may and may not be used for, establishes acceptable use standards, and communicates the organization's commitments to customers, employees, and the public. It applies alongside the Responsible AI Principles and translates those principles into concrete rules.

2. Scope

This charter applies to all AI use across the organization: internal tools, customer-facing systems, third-party AI services, prototypes, and experiments. It applies to all employees, contractors, and partners regardless of role or seniority.

3. Prohibited uses

The following uses of AI are prohibited without exception:

3.1 Prohibited under EU AI Act Article 5

  • Social scoring: evaluating individuals based on social behavior or personality traits for purposes unrelated to the context in which the data was collected.
  • Subliminal manipulation: deploying AI techniques that manipulate individuals below their threshold of awareness to distort their behavior in a way that causes harm.
  • Exploitation of vulnerabilities: targeting individuals or groups based on age, disability, or social/economic situation to distort their behavior.
  • Real-time remote biometric identification in publicly accessible spaces for law enforcement, except where explicitly authorized by law.
  • Emotion inference in workplace or educational settings, except for medical or safety reasons.
  • Untargeted scraping of facial images from the internet or CCTV for facial recognition databases.

3.2 Prohibited by organizational policy

  • Using AI to make final decisions about individuals' employment, credit, insurance, or legal standing without human review.
  • Deploying AI systems that cannot be overridden, corrected, or shut down by a human operator.
  • Using personal data for AI training without a documented lawful basis.
  • Deploying AI in ways that deliberately deceive users about the nature of the interaction.
  • Using AI to generate false evidence, fabricated references, or misleading content presented as factual.
  • Circumventing AI governance processes (shadow AI) for production use cases.

4. Acceptable use standards

4.1 General use

  • AI may be used for productivity, research, analysis, and content drafting with appropriate human review.
  • AI outputs must be reviewed by a qualified person before being used in external communications, contractual documents, or decisions affecting individuals.
  • Users must not input confidential or restricted data into AI tools that have not been approved through the governance process.
  • Users must disclose AI involvement when producing work product that others will rely upon.

4.2 Customer-facing use

  • Users interacting with AI must be informed that they are interacting with an AI system (EU AI Act Article 50).
  • AI-generated content must be identifiable as such when it could be mistaken for human-produced content.
  • Customers must have access to a human alternative for decisions that materially affect them.

4.3 Development use

  • AI coding assistants may be used, but generated code must be reviewed, tested, and owned by the developer.
  • AI-generated code must not bypass security review or testing requirements.
  • AI must not be used to generate test results or compliance evidence.

5. Reporting violations

Any employee who observes or suspects a violation of this charter must report it through one of the following channels:

  • Direct report to the AI Governance Lead.
  • Report through the organization's ethics or compliance hotline.
  • Report to their direct manager, who must escalate to the AI Governance Lead within 48 hours.

Reports may be made anonymously. Retaliation against good-faith reporters is prohibited.

6. Consequences

Violations of this charter may result in:

  • Immediate suspension of the AI system involved.
  • Mandatory remediation with a defined timeline.
  • Disciplinary action up to and including termination for individuals who deliberately violate prohibited use rules.
  • Termination of vendor relationships where third-party AI violates this charter.

7. Regulatory alignment

  • EU AI Act: Article 5 (prohibited practices), Article 50 (transparency obligations), Article 14 (human oversight).
  • GDPR: Article 22 (automated decision-making rights).
  • ISO/IEC 42001: Clause 5.2 (AI policy), Annex C (AI ethical considerations).
  • OECD AI Principles: Human-centered values and fairness.

8. Review

This charter is reviewed annually or when triggered by new prohibited practices under regulation, organizational incidents, or changes in AI capabilities that create new ethical considerations.

Document control

  • Policy owner: [AI Governance Lead]
  • Approved by: [AI Governance Committee]
  • Effective date: [Date]
  • Next review date: [Date + 12 months]
  • Version: 1.0
  • Classification: Internal


AI Ethical Use Charter | VerifyWise AI Governance Templates