Policy 09 of 15

AI Transparency and User Notice Policy

Specifies when and how users and affected individuals are informed about AI involvement in the organization's products, services, and processes.

1. Purpose

This policy defines the transparency and notification obligations for AI systems at [Organization Name]. It specifies when users must be told that AI is involved, what information must be provided, how disclosures are delivered, and who is responsible for maintaining them. Transparency is both a regulatory requirement and a trust obligation.

2. Scope

This policy applies to:

  • All AI systems that interact with or produce outputs for individuals (customers, employees, partners, the public).
  • All AI systems that generate or manipulate content (text, images, audio, video).
  • All AI systems that make or influence decisions about individuals.
  • Both internal and external-facing AI systems.

3. Transparency obligations

3.1 AI interaction disclosure

When a person interacts with an AI system (chatbot, virtual assistant, automated support), they must be informed that they are interacting with AI, not a human. This applies unless it is obvious from the circumstances (EU AI Act Article 50(1)).

The disclosure must be:

  • Provided before or at the start of the interaction.
  • Clear and in plain language.
  • Accessible (not buried in terms of service).
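
The interaction-disclosure requirement above can be enforced mechanically. The sketch below gates assistant replies on the disclosure having been shown first; the function names, message wording, and session structure are illustrative assumptions, not part of this policy.

```python
# Minimal sketch: ensure the AI disclosure is the opening message of every
# chat session, before any assistant reply is sent (EU AI Act Article 50(1)).
AI_DISCLOSURE = "You are chatting with an AI assistant, not a human."

def open_session(session: dict) -> dict:
    """Start a chat session with the disclosure as the first message."""
    session.setdefault("messages", [])
    # The disclosure must come before or at the start of the interaction.
    session["messages"].insert(0, {"role": "system_notice", "text": AI_DISCLOSURE})
    session["disclosed"] = True
    return session

def assistant_reply(session: dict, text: str) -> None:
    """Refuse to reply if the user was never told they are talking to AI."""
    if not session.get("disclosed"):
        raise RuntimeError("AI interaction disclosure missing (Article 50(1))")
    session["messages"].append({"role": "assistant", "text": text})
```

A UX team would normally replace the raised error with a blocking check in the chat pipeline, so a non-compliant session can never reach production.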

3.2 Synthetic content marking

AI-generated content (text, images, audio, video) must be marked as artificially generated or manipulated when it could reasonably be mistaken for human-produced or real content (EU AI Act Article 50(2)).

  • Machine-readable metadata must be embedded in the output where technically feasible.
  • Human-readable labels must be applied when the content is published or distributed externally.
  • Content used for satire, parody, or artistic expression may be exempt if clearly identifiable as such.
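
One way to meet the machine-readable marking requirement is to attach a provenance record to each generated artifact. The sketch below builds a minimal JSON manifest; the field names loosely echo C2PA-style provenance assertions but are illustrative assumptions, not a standard schema.

```python
import hashlib
import json
from datetime import datetime, timezone

def make_provenance_manifest(content: bytes, generator: str) -> str:
    """Build a machine-readable provenance record (JSON) for AI-generated
    content. Field names are illustrative, not a standardized schema."""
    manifest = {
        "ai_generated": True,                        # explicit synthetic-content flag
        "generator": generator,                      # which system produced the content
        "created_utc": datetime.now(timezone.utc).isoformat(),
        "content_sha256": hashlib.sha256(content).hexdigest(),  # binds record to content
    }
    return json.dumps(manifest, sort_keys=True)
```

In practice the manifest would be embedded in the file's own metadata container (e.g., image metadata chunks) where technically feasible, with a human-readable label added at publication time.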

3.3 Emotion recognition and biometric systems

If the organization deploys emotion recognition or biometric categorization systems, individuals exposed to these systems must be informed of their operation (EU AI Act Article 50(3)).

3.4 Deepfake disclosure

AI-generated or manipulated image, audio, or video content that constitutes a deepfake must be disclosed as artificially generated or manipulated before or at the point of distribution (EU AI Act Article 50(4)).

4. Deployer transparency for high-risk systems

For high-risk AI systems, deployers must receive from the provider information covering (EU AI Act Article 13):

  • The intended purpose and nature of the AI system.
  • The level of accuracy, including known limitations.
  • Circumstances that may affect performance.
  • Human oversight measures in place.
  • Expected lifetime and maintenance requirements.

5. Automated decision-making transparency

When AI systems make or materially influence decisions about individuals (GDPR Articles 13, 14, and 22):

  • Individuals must be informed that automated processing is taking place.
  • The logic involved must be explained in terms the individual can understand.
  • The significance and possible consequences of the processing must be communicated.
  • Individuals must be informed of their right to request human review of the decision.
  • Contact information for exercising these rights must be provided.
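
These notice elements can be checked mechanically before a decision notification is sent. A minimal sketch, assuming illustrative field names rather than a legal schema:

```python
# Required elements of an automated-decision notice, mirroring the list above
# (GDPR Articles 13, 14, and 22). Field names are illustrative.
REQUIRED_NOTICE_FIELDS = {
    "automated_processing_statement",  # processing is automated
    "logic_explanation",               # the logic involved, in plain terms
    "consequences",                    # significance and possible consequences
    "human_review_right",              # right to request human review
    "contact_for_rights",              # how to exercise these rights
}

def missing_notice_fields(notice: dict) -> set:
    """Return the required elements that are absent or empty in the notice."""
    return {f for f in REQUIRED_NOTICE_FIELDS if not notice.get(f)}
```

A notification template that leaves any element empty would fail this check and should be blocked before dispatch.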

6. How disclosures are delivered

Disclosure method by channel:

  • Chatbot or virtual assistant: opening message before the conversation begins (e.g., "You are chatting with an AI assistant").
  • Email or written communication: footer or header note indicating AI involvement in drafting.
  • Web or mobile application: in-context label near AI-generated content or recommendations.
  • Voice interaction: audio announcement at the start of the call or interaction.
  • Generated media (images, video): watermark, metadata tag, or accompanying label.
  • Decision notification: letter or notification explaining AI involvement and how to request review.

7. Internal transparency

For AI systems used internally (employee-facing):

  • Employees must be informed when AI is used in HR processes (performance reviews, scheduling, screening).
  • Employee monitoring using AI must be disclosed in accordance with local employment law.
  • Internal AI tools must clearly indicate when output is AI-generated vs. human-produced.

8. Documentation

For each AI system, the Model Owner must document:

  • What transparency obligations apply (based on system type and risk classification).
  • How each obligation is met (disclosure method, text, placement).
  • When disclosures were implemented and last reviewed.
  • Any exemptions claimed and their justification.

This documentation is maintained in the AI inventory and available for audit.
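
The documentation requirements above map naturally onto one inventory record per system. A minimal sketch, with illustrative field names to adapt to your inventory tooling:

```python
from dataclasses import dataclass, field

@dataclass
class TransparencyRecord:
    """One AI-inventory entry per system, mirroring the documentation
    requirements above. Field names are illustrative assumptions."""
    system_name: str
    obligations: list          # applicable obligations, e.g. "50(1)", "50(2)"
    disclosure_methods: dict   # obligation -> how it is met (method, text, placement)
    implemented_on: str        # when disclosures were implemented
    last_reviewed: str         # when disclosures were last reviewed
    exemptions: dict = field(default_factory=dict)  # exemption -> justification

    def audit_gaps(self) -> list:
        """Obligations with neither a documented method nor a claimed exemption."""
        return [o for o in self.obligations
                if o not in self.disclosure_methods and o not in self.exemptions]
```

An audit pass over the inventory then reduces to calling audit_gaps() on every record and flagging any non-empty result.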

9. Roles and responsibilities

  • Model Owner: identifies applicable obligations, implements disclosures, maintains documentation.
  • Product/UX: designs disclosure interfaces that are clear, accessible, and non-disruptive.
  • Legal: advises on regulatory requirements, reviews disclosure text, assesses exemptions.
  • AI Governance Lead: audits transparency compliance across the portfolio, tracks regulatory changes.

10. Regulatory alignment

  • EU AI Act: Article 50 (transparency for providers and deployers), Article 13 (information to be provided to deployers of high-risk systems).
  • GDPR: Articles 13-14 (information obligations), Article 22 (automated decision-making).
  • ISO/IEC 42001: Clause 4.2 (interested parties), Annex C (transparency considerations).
  • NIST AI RMF: GOVERN function (GV-4: organizational transparency).

11. Review

This policy is reviewed annually or when triggered by new transparency requirements (e.g., EU AI Act Code of Practice finalization), new AI system types, or audit findings.

Document control

  • Policy owner: [AI Governance Lead]
  • Approved by: [AI Governance Committee]
  • Effective date: [Date]
  • Next review date: [Date + 12 months]
  • Version: 1.0
  • Classification: Internal

