AI Data Use Policy

Establishes rules for how data is collected, processed, stored, and shared in the context of AI systems.

1. Purpose

This policy defines how [Organization Name] collects, processes, stores, and shares data in connection with AI systems. It ensures that all AI-related data use complies with applicable data protection laws, respects individual rights, and maintains the trust of customers, employees, and partners.

2. Scope

This policy applies to:

  • All data used to train, fine-tune, validate, test, or operate AI systems.
  • All data generated by AI systems (predictions, recommendations, content, decisions).
  • All data transmitted to or received from third-party AI services.
  • All employees, contractors, and partners who handle data in connection with AI.

3. Definitions

  • Personal data: Any information relating to an identified or identifiable natural person, as defined by GDPR Article 4(1).
  • Special category data: Data revealing racial or ethnic origin, political opinions, religious beliefs, trade union membership, genetic data, biometric data, health data, or data concerning sex life or sexual orientation (GDPR Article 9).
  • Training data: Data used to build, train, or fine-tune an AI model.
  • Inference data: Data processed by an AI system at runtime to produce outputs.
  • Synthetic data: Artificially generated data that mimics real-world data patterns without containing actual personal information.
  • Lawful basis: The legal ground under which personal data is processed (consent, contract, legal obligation, vital interests, public task, or legitimate interests).

4. Lawful basis for AI data processing

Before any personal data is used in an AI system, the lawful basis must be identified and documented. Not all AI use requires consent — GDPR allows several lawful bases. However, each must be assessed case by case.

  • Consent (Art. 6(1)(a)): Applies when data subjects explicitly agree to AI processing. Consent must be freely given, specific, informed, and unambiguous, and can be withdrawn at any time.
  • Contract (Art. 6(1)(b)): Applies when AI processing is necessary to deliver a contracted service. The processing must be genuinely necessary, not just convenient.
  • Legitimate interests (Art. 6(1)(f)): Applies when AI processing serves a legitimate business interest that does not override individual rights. Requires a documented Legitimate Interest Assessment (LIA).
  • Legal obligation (Art. 6(1)(c)): Applies when AI processing is required by law (e.g., fraud detection mandates). The specific legal obligation must be identified.

Special category data requires an additional condition under GDPR Article 9(2) and must never be used for AI training without explicit legal review.

5. Data collection and sourcing

  • Data must be collected for specified, explicit, and legitimate purposes (purpose limitation).
  • Only data that is adequate, relevant, and limited to what is necessary may be collected (data minimization).
  • The source and provenance of all data used in AI must be documented.
  • Purchased or licensed data must have clear terms permitting its use in AI training and inference.
  • Web-scraped data must be reviewed for copyright, terms of service, and personal data content before use.
  • Synthetic data is preferred when real data is not necessary or when privacy risks are high.
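The provenance documentation required above can be captured as a simple structured record. This is a minimal sketch; the field names are an example of what to capture, not a prescribed schema.

```python
import json
from datetime import date

def provenance_record(name, source, licence, collected_for, contains_personal_data):
    """Illustrative provenance entry for a dataset used in AI training."""
    return {
        "dataset": name,
        "source": source,                    # origin: vendor, internal system, URL
        "licence": licence,                  # must permit AI training/inference use
        "purpose": collected_for,            # purpose limitation (Section 5)
        "contains_personal_data": contains_personal_data,
        "documented_on": date.today().isoformat(),
    }

record = provenance_record(
    "web-faq-corpus", "licensed vendor", "commercial AI-training licence",
    "customer-support fine-tuning", contains_personal_data=False)
print(json.dumps(record, indent=2))
```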

6. Data quality and accuracy

EU AI Act Article 10 requires that training, validation, and testing data for high-risk systems be "relevant, sufficiently representative, and to the best extent possible, free of errors and complete" in view of the intended purpose. All AI data must meet the following standards:

  • Data must be reviewed for accuracy, completeness, and timeliness before use.
  • Known data quality issues must be documented and their impact on model performance assessed.
  • Bias screening must be performed on training and evaluation datasets.
  • Data quality metrics must be tracked and reported to the model owner.
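Two of the checks above, completeness and bias screening, can be sketched in a few lines. The thresholds and the protected attribute are illustrative; real screening should be defined per dataset by the Data Owner.

```python
def completeness(rows, required_fields):
    """Fraction of rows with every required field present and non-empty."""
    ok = sum(1 for r in rows
             if all(r.get(f) not in (None, "") for f in required_fields))
    return ok / len(rows) if rows else 0.0

def group_shares(rows, attribute):
    """Share of each group for a given attribute, as a first-pass
    representativeness check before deeper bias analysis."""
    counts = {}
    for r in rows:
        counts[r[attribute]] = counts.get(r[attribute], 0) + 1
    total = sum(counts.values())
    return {g: c / total for g, c in counts.items()}

rows = [
    {"age": 34, "region": "north", "label": 1},
    {"age": 29, "region": "south", "label": 0},
    {"age": None, "region": "south", "label": 1},
    {"age": 51, "region": "north", "label": 0},
]
print(completeness(rows, ["age", "label"]))  # 0.75 (one row missing "age")
print(group_shares(rows, "region"))          # {'north': 0.5, 'south': 0.5}
```

These metrics can feed the reports to the model owner; documented shortfalls (here, 25% incomplete rows) become the "known data quality issues" whose impact on model performance must be assessed.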

7. Data retention and deletion

  • Data must not be retained longer than necessary for its stated purpose (storage limitation).
  • Retention periods must be defined and documented for each dataset and AI system.
  • When a model is retired, associated training and inference data must be reviewed for deletion or anonymization.
  • Individuals have the right to request deletion of their data. When a deletion request is received, the impact on active AI systems must be assessed and the request fulfilled where legally required.
  • Audit logs and compliance evidence may be retained longer per legal and regulatory requirements.
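The storage-limitation rule above lends itself to a periodic automated sweep. A minimal sketch, assuming each dataset carries a documented collection date and retention period:

```python
from datetime import date, timedelta

def due_for_review(datasets, today):
    """Names of datasets whose documented retention window has elapsed
    and which must be reviewed for deletion or anonymization."""
    return [d["name"] for d in datasets
            if today - d["collected"] > timedelta(days=d["retention_days"])]

datasets = [
    {"name": "chat-logs-2022", "collected": date(2022, 3, 1), "retention_days": 365},
    {"name": "eval-set-v3",    "collected": date(2025, 1, 10), "retention_days": 730},
]
print(due_for_review(datasets, today=date(2025, 6, 1)))  # ['chat-logs-2022']
```

Flagged datasets go to the Data Owner for a decision; the sweep itself must not delete anything, since audit logs and compliance evidence may be subject to longer legal retention.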

8. Data sharing and third-party transfers

  • Data shared with third-party AI providers must be governed by a Data Processing Agreement (DPA) that specifies the purpose, security measures, sub-processors, and data subject rights.
  • Cross-border transfers must comply with GDPR Chapter V (adequacy decisions, standard contractual clauses, or binding corporate rules).
  • Data residency requirements must be documented for each AI system.
  • Third-party AI providers must not use customer data for their own model training unless explicitly authorized.

9. Automated decision-making

When AI systems make or materially influence decisions about individuals:

  • Individuals must be informed that automated processing is taking place (GDPR Article 13/14).
  • Individuals have the right not to be subject to solely automated decisions that produce legal or similarly significant effects (GDPR Article 22).
  • Meaningful human review must be available for high-impact decisions.
  • The logic involved, significance, and envisaged consequences must be explained to the individual.

10. Data protection impact assessments

A Data Protection Impact Assessment (DPIA) must be completed before deploying AI systems that:

  • Process personal data at scale.
  • Involve systematic monitoring of individuals.
  • Process special category data.
  • Use automated decision-making with legal or significant effects.
  • Combine datasets from multiple sources in ways that individuals may not reasonably expect.
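The trigger list above can double as a pre-deployment gate. This sketch mirrors the five criteria; the flag names are illustrative and any single trigger means a DPIA is required.

```python
# One flag per DPIA trigger in Section 10 (names are illustrative).
DPIA_TRIGGERS = (
    "large_scale_personal_data",
    "systematic_monitoring",
    "special_category_data",
    "automated_decisions_with_legal_effect",
    "unexpected_dataset_combination",
)

def dpia_required(system: dict) -> bool:
    """True if any trigger applies; absent flags default to False."""
    return any(system.get(t, False) for t in DPIA_TRIGGERS)

print(dpia_required({"systematic_monitoring": True}))       # True
print(dpia_required({"large_scale_personal_data": False}))  # False
```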

11. Roles and responsibilities

  • Data Owner: Accountable for data quality, classification, access approvals, and compliance of their datasets.
  • Model Owner: Ensures AI system data use is documented, the lawful basis identified, and retention periods defined.
  • Data Privacy Officer: Reviews DPIAs, advises on lawful basis, handles data subject requests, and coordinates with regulators.
  • Legal: Reviews data licensing, vendor DPAs, and cross-border transfer mechanisms.
  • All employees: Handle data according to its classification, report data quality issues, and complete privacy training.

12. Regulatory alignment

  • GDPR: Articles 5 (principles), 6 (lawful basis), 9 (special categories), 13-14 (transparency), 17 (right to erasure), 22 (automated decisions), 25 (privacy by design), 35 (DPIAs).
  • EU AI Act: Article 10 (data governance for high-risk systems), Article 13 (transparency).
  • ISO/IEC 42001: Annex B (B.7 — data for AI systems).
  • NIST AI RMF: MAP function (MAP-3, AI risks from third-party data).

13. Review

This policy is reviewed annually or sooner when triggered by regulatory changes, data breaches, or material changes to AI data processing activities.

Document control

Policy owner: [Data Privacy Officer]
Approved by: [AI Governance Committee]
Effective date: [Date]
Next review date: [Date + 12 months]
Version: 1.0
Classification: Internal

AI Data Use Policy | VerifyWise AI Governance Templates