Data and Security AI Policies

Shadow AI Detection and Reporting Policy

Requires teams to report unsanctioned AI tools and sets remediation steps.

Owner: IT Governance Lead

Objective

Detect and remediate unsanctioned AI usage (“shadow AI”) to prevent data leakage, compliance violations, and inconsistent governance practices.

Scope

Covers all AI tools, services, browser extensions, and scripts accessed using company accounts or data, whether managed by IT or procured directly by business units.

  • Third-party SaaS copilots and chatbots
  • AI-enabled plugins for productivity suites
  • Locally installed open-source AI tools
  • Cloud APIs accessed without architectural review

Definitions

  • Shadow AI: Any AI technology used without approval from IT Governance and Security.
  • Self-Report Form: Standard questionnaire employees use to disclose AI tools.
  • Remediation Plan: Actions required to either formalize the tool (security/legal review) or decommission it.

Policy

Employees must disclose all AI tools before uploading company data or integrating them into workflows. IT Governance continuously monitors network, SaaS, and browser telemetry to surface undisclosed AI usage. Detected shadow AI must be remediated within defined SLAs.
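The telemetry-based detection described above can be sketched as a simple check of proxy logs against an inventory of sanctioned tools. The domain lists, log format, and tool names below are illustrative assumptions, not part of the policy:

```python
# Hypothetical sketch: surface undisclosed AI usage from proxy logs.
# KNOWN_AI_DOMAINS and APPROVED_TOOLS are assumed example values.
KNOWN_AI_DOMAINS = {"api.openai.com", "claude.ai", "gemini.google.com"}
APPROVED_TOOLS = {"api.openai.com"}  # sanctioned inventory (example)

def flag_shadow_ai(proxy_log_lines):
    """Return (user, domain) pairs that hit an AI domain not in the inventory."""
    findings = []
    for line in proxy_log_lines:
        user, domain = line.split(",")[:2]  # assumed "user,domain,..." format
        if domain in KNOWN_AI_DOMAINS and domain not in APPROVED_TOOLS:
            findings.append((user, domain))
    return findings

log = ["alice,claude.ai", "bob,api.openai.com", "carol,example.com"]
print(flag_shadow_ai(log))  # [('alice', 'claude.ai')]
```

In practice this check would run continuously against network, SaaS, and browser telemetry feeds rather than a static log sample.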

Roles and Responsibilities

IT Governance owns detection tooling, reporting dashboards, and remediation tracking. Security evaluates risk and blocks high-risk tools. Legal reviews terms and DPAs. Business Owners either sponsor a formal onboarding or enforce decommissioning.

Procedures

Shadow AI management includes the following activities:

  • Deploy network/SaaS discovery tools to flag AI domains and APIs.
  • Provide self-report form and awareness training for employees.
  • Classify tools by data sensitivity and usage patterns.
  • Trigger remediation workflow (onboard securely or decommission) with deadlines.
  • Maintain inventory of approved vs. blocked AI tools.
  • Escalate repeated non-compliance to HR or Legal.
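The classification and remediation steps above can be sketched as a small workflow that assigns each discovered tool an action and a deadline. The sensitivity tiers, SLA day counts, and tool name are assumptions for illustration only:

```python
# Illustrative remediation workflow: classify by sensitivity, assign an
# action and deadline. Tier names and SLA values are assumed, not policy.
from datetime import date, timedelta

SLA_DAYS = {"high": 7, "medium": 30, "low": 90}  # hypothetical SLAs

def remediation_plan(tool, sensitivity, discovered):
    """High-sensitivity tools are decommissioned; others get a security review."""
    action = "decommission" if sensitivity == "high" else "security_review"
    return {
        "tool": tool,
        "action": action,
        "deadline": discovered + timedelta(days=SLA_DAYS[sensitivity]),
    }

plan = remediation_plan("UnvettedChatbot", "high", date(2024, 6, 1))
print(plan["action"], plan["deadline"])  # decommission 2024-06-08
```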

Exceptions

Temporary access may be granted for evaluations (≤ 14 days) with read-only data and no production credentials. Evaluation results must include a go/no-go decision and security checklist.
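A request for a temporary evaluation can be checked against the limits above with a simple gate. The function name and parameters are hypothetical:

```python
# Sketch: validate an evaluation exception request against the stated
# limits (<= 14 days, read-only data, no production credentials).
from datetime import date

def exception_allowed(start, end, read_only, uses_prod_creds):
    return (end - start).days <= 14 and read_only and not uses_prod_creds

print(exception_allowed(date(2024, 6, 1), date(2024, 6, 10), True, False))  # True
print(exception_allowed(date(2024, 6, 1), date(2024, 7, 1), True, False))   # False
```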

Review Frequency

Shadow AI metrics (discoveries, remediation SLA breaches, blocked tools) are reviewed monthly. Policy effectiveness and tooling coverage are reassessed semi-annually.
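The monthly metrics review can be sketched as a rollup over tracked findings. The findings structure and field names are assumptions for illustration:

```python
# Hypothetical monthly rollup: count discoveries and remediation SLA
# breaches (closed late, or still open past deadline as of the review date).
from datetime import date

findings = [
    {"tool": "A", "deadline": date(2024, 5, 1), "closed": date(2024, 5, 3)},
    {"tool": "B", "deadline": date(2024, 5, 10), "closed": None},
]

def monthly_metrics(findings, as_of):
    breaches = sum(1 for f in findings if (f["closed"] or as_of) > f["deadline"])
    return {"discoveries": len(findings), "sla_breaches": breaches}

print(monthly_metrics(findings, date(2024, 6, 1)))
# {'discoveries': 2, 'sla_breaches': 2}
```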

References

  • ISO/IEC 42001:2023 Clause 7 (Support, awareness, communication)
  • NIST AI RMF Govern function
  • Internal documents: SaaS Security Standard, Vendor Onboarding Checklist, Acceptable Use Policy

