
AI Vendor Risk Policy

Extends third-party risk management with AI-specific due diligence, ongoing monitoring, and contractual requirements.

1. Purpose

This policy defines how [Organization Name] assesses, manages, and monitors risks from third-party AI vendors, APIs, and services. AI vendor risk differs from traditional vendor risk because models behave unpredictably, training data creates hidden compliance exposure, and vendor-side changes can alter system behavior without notice.

2. Scope

This policy applies to:

  • All third-party AI services (APIs, SaaS, hosted models, embedded AI features).
  • All pre-trained or foundation models procured from external providers.
  • All AI consulting engagements that deliver models or model hosting.
  • All open-source models adopted for production use.
  • All renewals and material changes to existing AI vendor relationships.

3. Pre-engagement assessment

Before any AI vendor is engaged, the following assessment must be completed:

3.1 AI-specific risk questionnaire

In addition to the standard vendor risk questionnaire, AI vendors must answer:

  • Model governance: What model architecture is used? How are models versioned? What change management process applies to model updates?
  • Training data: What data sources were used for training? Does training data include personal information? Is opt-out respected? Are Model Cards and Datasheets for Datasets available?
  • Bias and fairness: What bias testing has been performed? What metrics were used? What demographic groups were assessed? What remediation was applied?
  • Security: What protections exist against prompt injection, data exfiltration, and model theft? What penetration testing has been conducted?
  • Data handling: Is customer data used to train or improve the vendor's models? What data residency guarantees are provided? Who are the sub-processors?
  • Transparency: Can the vendor explain how the model produces outputs? Are confidence scores or uncertainty indicators available?
  • Incident response: What is the vendor's AI incident notification SLA? How are model failures communicated?
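The questionnaire topics above lend themselves to structured tracking so that gaps are caught before approval. The following is a hypothetical sketch (topic keys and field names are illustrative, not a real schema):

```python
# Hypothetical sketch: tracking AI-vendor questionnaire responses as
# structured data so unanswered topics can be flagged before approval.
# Topic names mirror section 3.1; the data shape is an assumption.
REQUIRED_TOPICS = [
    "model_governance", "training_data", "bias_and_fairness",
    "security", "data_handling", "transparency", "incident_response",
]

def missing_topics(responses: dict) -> list:
    """Return questionnaire topics with no substantive answer."""
    return [t for t in REQUIRED_TOPICS
            if not responses.get(t, "").strip()]

# Example: a partially completed questionnaire.
responses = {
    "model_governance": "Versioned models; 30-day change notice.",
    "security": "Annual pen test; prompt-injection filtering.",
}
print(missing_topics(responses))
```

A check like this can gate the workflow: the assessment does not move to risk scoring until `missing_topics` returns an empty list.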

3.2 Documentation requirements

Vendors must provide:

  • Model Card describing architecture, training data, intended use, and limitations.
  • Security certifications (SOC 2 Type II, ISO 27001, or equivalent).
  • Data Processing Agreement (DPA) covering AI-specific data handling.
  • Sub-processor list with data residency information.
  • Incident reporting SLA and escalation process.

3.3 Risk scoring

Each AI vendor is scored using the organization's risk matrix (per AI Risk Management Policy). Factors include:

  • Sensitivity of data processed by the vendor.
  • Criticality of the business process the vendor supports.
  • Degree of autonomy (advisory vs. decision-making).
  • Vendor's transparency and control capabilities.
  • Regulatory exposure (does the use case fall under EU AI Act high-risk categories?).
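One way the five factors above could be combined is a weighted score. This sketch assumes 1-5 factor ratings, equal-ish weights, and tier thresholds for illustration; the authoritative matrix is the one defined in the AI Risk Management Policy:

```python
# Illustrative weighted risk score over the five factors in section 3.3.
# Weights, the 1-5 rating scale, and tier cut-offs are assumptions.
FACTORS = {  # factor -> weight (weights sum to 1.0)
    "data_sensitivity": 0.25,
    "process_criticality": 0.25,
    "autonomy": 0.20,          # advisory (1) .. fully autonomous (5)
    "vendor_transparency": 0.15,  # higher rating = more opaque vendor
    "regulatory_exposure": 0.15,
}

def risk_score(ratings: dict) -> float:
    """Combine 1-5 factor ratings into a weighted score (1.0-5.0)."""
    return sum(FACTORS[f] * ratings[f] for f in FACTORS)

def risk_tier(score: float) -> str:
    """Map a weighted score onto an assumed three-tier scale."""
    if score >= 4.0:
        return "high"
    if score >= 2.5:
        return "medium"
    return "low"

# Example: sensitive data, critical process, moderate autonomy.
ratings = {"data_sensitivity": 5, "process_criticality": 4,
           "autonomy": 3, "vendor_transparency": 2,
           "regulatory_exposure": 5}
print(risk_score(ratings), risk_tier(risk_score(ratings)))
```

The tier output can then drive the approval path, e.g. "high" routes to the AI Governance Committee per section 7.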

4. Contractual requirements

AI vendor contracts must include the following clauses:

  • No training on customer data: Vendor must not use the organization's data to train, fine-tune, or improve its models unless explicitly authorized.
  • Change notification: Vendor must give the organization at least 30 days' notice before material changes to the model (version upgrade, architecture change, training data change).
  • Data portability: Organization must be able to export or delete its data without undue burden.
  • Right to audit: Organization or its designated auditor may audit the vendor's AI governance practices.
  • Incident notification: Vendor must notify the organization of AI-related incidents within 24 hours.
  • Liability allocation: Clear allocation of responsibility for AI system failures, bias, and regulatory non-compliance.
  • Termination rights: Organization may terminate if the vendor introduces unacceptable risk or fails to remediate findings.
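The mandatory clauses above can be verified mechanically during contract review. This is a hypothetical pre-signature check; the clause keys are illustrative labels, not legal language:

```python
# Hypothetical pre-signature check: verify that a draft contract record
# covers every mandatory clause from section 4. Keys are illustrative.
MANDATORY_CLAUSES = {
    "no_training_on_customer_data",
    "change_notification_30_days",
    "data_portability",
    "right_to_audit",
    "incident_notification_24h",
    "liability_allocation",
    "termination_rights",
}

def missing_clauses(contract_clauses: set) -> set:
    """Clauses still to be negotiated before the contract is signed."""
    return MANDATORY_CLAUSES - contract_clauses

# Example: a draft missing three mandatory clauses.
draft = {"no_training_on_customer_data", "data_portability",
         "right_to_audit", "termination_rights"}
print(sorted(missing_clauses(draft)))
```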

5. Ongoing monitoring

Approved AI vendors are monitored continuously. Monitoring includes performance tracking against agreed SLAs, bias testing on the organization's own data, security posture verification, and regulatory compliance status.

Re-assessment is triggered by:

  • Material model version change by the vendor.
  • Change to the vendor's sub-processor list.
  • Vendor security incident or regulatory enforcement action.
  • Change in the organization's use of the vendor (expanded scope, new data types).
  • Annual review cycle (minimum).
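The re-assessment triggers above reduce to a simple decision rule. This sketch assumes event labels and a 365-day annual cycle for illustration:

```python
# Sketch of section 5 trigger evaluation: given vendor events since the
# last review, decide whether a re-assessment is due. Event names and
# the 365-day cycle length are assumptions for illustration.
from datetime import date, timedelta

TRIGGER_EVENTS = {
    "model_version_change", "subprocessor_change",
    "security_incident", "usage_scope_change",
}

def reassessment_due(events: set, last_review: date,
                     today: date) -> bool:
    """True if any trigger event occurred or the annual cycle lapsed."""
    annual_lapsed = today - last_review >= timedelta(days=365)
    return bool(events & TRIGGER_EVENTS) or annual_lapsed

# Trigger event fires a re-assessment even mid-cycle:
print(reassessment_due({"subprocessor_change"},
                       date(2024, 1, 15), date(2024, 6, 1)))
# No events, but more than a year since last review:
print(reassessment_due(set(), date(2023, 1, 15), date(2024, 6, 1)))
```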

6. Vendor exit planning

For each AI vendor, a documented exit plan must include:

  • Data export procedure and timeline.
  • Alternative vendor or internal capability identified.
  • Transition period and migration plan.
  • Contractual obligations that survive termination (data deletion, confidentiality).

7. Roles and responsibilities

  • Requesting team: Submits vendor request with business justification and risk classification.
  • Security: Conducts vendor security assessment; reviews certifications and penetration test results.
  • Legal: Reviews contracts, DPAs, and liability allocation; advises on regulatory obligations.
  • AI Governance Lead: Coordinates AI-specific assessment, tracks vendor risk scores, manages re-assessment schedule.
  • AI Governance Committee: Approves high-risk vendor engagements, reviews vendor exit decisions.

8. Regulatory alignment

  • EU AI Act: Article 25 (responsibilities along the AI value chain), Article 16 (provider obligations).
  • GDPR: Articles 28-29 (processor obligations, DPA requirements).
  • ISO/IEC 42001: Clause 8.5 (third-party and customer relationships).
  • NIST AI RMF: GOVERN function (GV-6: policies for third-party AI).

9. Review

This policy is reviewed annually or when triggered by material vendor incidents, new regulatory requirements, or changes to the organization's AI vendor portfolio.

Document control

  • Policy owner: [AI Governance Lead / CISO]
  • Approved by: [AI Governance Committee]
  • Effective date: [Date]
  • Next review date: [Date + 12 months]
  • Version: 1.0
  • Classification: Internal
