
Post-Market Monitoring Policy

Defines how deployed AI systems are monitored after release, as required by EU AI Act Article 72.

1. Purpose

This policy establishes how [Organization Name] monitors AI systems after they are deployed to production. Post-market monitoring detects performance degradation, emerging risks, and compliance gaps that were not apparent during pre-deployment testing. For high-risk AI systems, this monitoring is a legal requirement under the EU AI Act.

2. Scope

This policy applies to:

  • All AI systems deployed to production, regardless of risk classification.
  • Both internally developed and third-party AI systems.

The monitoring period begins at deployment and continues until the system is retired. The depth of monitoring is proportional to risk classification: high-risk systems have the most frequent and detailed monitoring requirements.

3. Monitoring plan

Each AI system must have a documented monitoring plan that specifies:

  • What metrics are tracked and their acceptable thresholds.
  • How data is collected (automated monitoring, user feedback, deployer reports).
  • How often metrics are reviewed (continuous, daily, weekly, monthly, quarterly).
  • Who is responsible for reviewing results and taking action.
  • What triggers a revalidation or incident escalation.

For high-risk systems, the monitoring plan is part of the technical documentation required by EU AI Act Annex IV.
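A monitoring plan of this kind can be made machine-readable. The sketch below is one possible in-house schema, not a structure mandated by the EU AI Act; the field names, cadence values, and the example system name are illustrative assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class MetricSpec:
    name: str            # e.g. "accuracy"
    threshold: float     # minimum acceptable value
    review_cadence: str  # "continuous" | "daily" | "weekly" | "monthly" | "quarterly"

@dataclass
class MonitoringPlan:
    system_name: str
    risk_level: str                                        # "high" | "medium" | "low"
    metrics: list[MetricSpec] = field(default_factory=list)
    data_sources: list[str] = field(default_factory=list)  # e.g. "automated", "user feedback"
    owner: str = ""                                        # responsible reviewer
    escalation_triggers: list[str] = field(default_factory=list)

    def breached(self, observed: dict[str, float]) -> list[str]:
        """Names of metrics whose observed value falls below its threshold."""
        return [m.name for m in self.metrics
                if observed.get(m.name, float("inf")) < m.threshold]

# Hypothetical system used for illustration only.
plan = MonitoringPlan(
    system_name="loan-scoring-v2",
    risk_level="high",
    metrics=[MetricSpec("accuracy", 0.90, "daily")],
    owner="Model Owner",
)
print(plan.breached({"accuracy": 0.87}))  # ['accuracy']
```

Keeping the plan as structured data lets the same record drive automated alerting and serve as the Annex IV documentation artifact.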

4. What to monitor

Monitoring covers five dimensions: performance, drift, bias and fairness, safety and security, and regulatory or contextual change. Tailor the specific metrics and thresholds to each system's function and risk profile.

4.1 Performance metrics

  • Accuracy, precision, recall, or equivalent metrics tracked against deployment baselines.
  • Error rates, failure rates, and availability.
  • Latency and throughput under actual production load.
  • Output quality assessments (for generative AI: hallucination rate, relevance scores).
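Tracking against deployment baselines can be sketched as a simple relative-drop check. This is a minimal illustration, assuming a 5% relative tolerance; the actual tolerance per metric belongs in the monitoring plan.

```python
def check_against_baseline(baseline: dict[str, float],
                           observed: dict[str, float],
                           max_relative_drop: float = 0.05) -> dict[str, float]:
    """Return metrics whose observed value dropped more than
    max_relative_drop (e.g. 5%) below the deployment baseline."""
    breaches = {}
    for name, base in baseline.items():
        obs = observed.get(name)
        if obs is not None and base > 0 and (base - obs) / base > max_relative_drop:
            breaches[name] = obs
    return breaches

# Illustrative values, not real deployment figures.
baseline = {"accuracy": 0.92, "availability": 0.999}
observed = {"accuracy": 0.85, "availability": 0.999}
print(check_against_baseline(baseline, observed))  # {'accuracy': 0.85}
```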

4.2 Drift detection

  • Input data distribution compared to training data distribution (feature drift).
  • Output distribution changes that may indicate model behavior change (concept drift).
  • Changes in user interaction patterns that may indicate the operating context has changed.
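One common way to quantify feature drift is the Population Stability Index (PSI) between the training distribution and production inputs. The sketch below uses equal-width bins and the rule-of-thumb alert level of PSI > 0.2, which is a convention, not a regulatory threshold.

```python
import math

def psi(reference: list[float], production: list[float], bins: int = 10) -> float:
    """Population Stability Index between a reference (training) sample
    and a production sample over equal-width bins."""
    lo, hi = min(reference), max(reference)
    width = (hi - lo) / bins or 1.0
    def bin_fractions(sample: list[float]) -> list[float]:
        counts = [0] * bins
        for x in sample:
            i = min(max(int((x - lo) / width), 0), bins - 1)
            counts[i] += 1
        n = len(sample)
        # small floor avoids log(0) for empty bins
        return [max(c / n, 1e-6) for c in counts]
    p, q = bin_fractions(reference), bin_fractions(production)
    return sum((pi - qi) * math.log(pi / qi) for pi, qi in zip(p, q))

ref = [i / 100 for i in range(100)]
print(psi(ref, ref))  # → 0.0 (identical distributions)
```

A shifted production sample (e.g. every input offset by 0.5) pushes PSI well past 0.2, which would meet the drift escalation trigger in section 6.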

4.3 Bias and fairness

  • Fairness metrics tracked over time across protected groups.
  • New bias patterns that emerge from production data but were not present in test data.
  • Feedback from users or affected individuals indicating discriminatory outcomes.
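Fairness tracking over time needs a concrete metric per system. As one illustrative choice (the policy does not prescribe a metric), the sketch below computes the demographic parity gap, i.e. the largest spread in positive-outcome rates across protected groups; group names are placeholders.

```python
def demographic_parity_gap(outcomes: dict[str, list[int]]) -> float:
    """Largest difference in positive-outcome rate between any two groups.
    outcomes maps group name -> list of 0/1 decisions. The acceptable
    boundary for this gap belongs in the monitoring plan."""
    rates = {g: sum(v) / len(v) for g, v in outcomes.items() if v}
    return max(rates.values()) - min(rates.values())

gap = demographic_parity_gap({"group_a": [1, 1, 0, 1], "group_b": [1, 0, 0, 0]})
print(round(gap, 2))  # 0.5
```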

4.4 Safety and security

  • Guardrail trigger rates (blocked requests, masked content, injection attempts).
  • Anomalous usage patterns that may indicate adversarial activity.
  • Vulnerability disclosures affecting model dependencies or infrastructure.

4.5 Regulatory and contextual changes

  • New regulations or guidance affecting the system's operating domain.
  • Changes to the system's operating context that may alter its risk profile.
  • Vendor changes (model updates, sub-processor changes, terms of service changes).

5. Monitoring frequency

Risk level | Automated monitoring            | Manual review  | Full revalidation
High       | Continuous (real-time alerting) | Monthly        | Quarterly
Medium     | Daily metric collection         | Quarterly      | Semi-annually
Low        | Weekly metric collection        | Semi-annually  | Annually
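The review cadences in the table above can drive a due-date helper. The cadences are this policy's; the fixed day counts used for the date arithmetic are an approximation assumed for illustration.

```python
import datetime

# Approximate cadence lengths in days (an assumption for scheduling only).
CADENCE_DAYS = {"monthly": 30, "quarterly": 91, "semi-annually": 182, "annually": 365}

# Derived directly from the monitoring frequency table.
SCHEDULE = {
    "high":   {"manual_review": "monthly",       "full_revalidation": "quarterly"},
    "medium": {"manual_review": "quarterly",     "full_revalidation": "semi-annually"},
    "low":    {"manual_review": "semi-annually", "full_revalidation": "annually"},
}

def next_due(risk_level: str, activity: str, last_done: datetime.date) -> datetime.date:
    """Next due date for a monitoring activity, given when it was last completed."""
    cadence = SCHEDULE[risk_level][activity]
    return last_done + datetime.timedelta(days=CADENCE_DAYS[cadence])

print(next_due("high", "manual_review", datetime.date(2025, 1, 1)))  # 2025-01-31
```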

6. Escalation triggers

The following conditions trigger escalation from routine monitoring to active response:

Escalated issues follow the AI Incident Response Policy for triage and resolution.

  • Performance metric drops below the defined threshold.
  • Drift detected beyond the tolerance defined in the monitoring plan.
  • Bias metric crosses the acceptable boundary.
  • Guardrail trigger rate increases by more than 50% from baseline.
  • User complaint or feedback indicating harm or discrimination.
  • Vendor notification of material model change.
  • Regulatory change affecting the system's compliance status.
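The quantitative triggers above can be evaluated automatically. The sketch below is one possible check; the field names and all threshold values are illustrative and would come from the system's monitoring plan (the 50% guardrail rule is the one fixed by this policy).

```python
def escalation_reasons(plan: dict, observed: dict) -> list[str]:
    """Return human-readable reasons for escalation, or [] if none apply."""
    reasons = []
    for metric, threshold in plan["metric_thresholds"].items():
        if observed.get("metrics", {}).get(metric, float("inf")) < threshold:
            reasons.append(f"{metric} below threshold {threshold}")
    if observed.get("drift_score", 0.0) > plan["drift_tolerance"]:
        reasons.append("drift beyond tolerance")
    if observed.get("bias_gap", 0.0) > plan["bias_tolerance"]:
        reasons.append("bias metric outside acceptable boundary")
    # Policy rule: guardrail trigger rate up by more than 50% from baseline.
    base = plan["guardrail_baseline_rate"]
    if base > 0 and observed.get("guardrail_rate", 0.0) > 1.5 * base:
        reasons.append("guardrail trigger rate up >50% from baseline")
    return reasons

# Illustrative plan and observation.
plan = {"metric_thresholds": {"accuracy": 0.9}, "drift_tolerance": 0.2,
        "bias_tolerance": 0.1, "guardrail_baseline_rate": 0.02}
observed = {"metrics": {"accuracy": 0.88}, "drift_score": 0.05,
            "bias_gap": 0.04, "guardrail_rate": 0.05}
print(escalation_reasons(plan, observed))
# ['accuracy below threshold 0.9', 'guardrail trigger rate up >50% from baseline']
```

Qualitative triggers (user complaints, vendor notices, regulatory changes) still require human review and cannot be reduced to a threshold check.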

7. Monitoring cycles

For high-risk systems, monitoring is structured in recurring cycles:

  • Each cycle involves a structured review of all monitoring dimensions.
  • The assigned stakeholder answers a set of monitoring questions and records findings.
  • Findings are documented in a monitoring report.
  • Flagged concerns trigger immediate escalation to the AI Governance Lead.
  • Completed cycles produce a PDF report for the audit trail.

8. Deployer feedback

EU AI Act Article 72 requires that the monitoring system collect data "provided by deployers or collected through other sources." The organization must:

  • Establish a channel for deployers to report issues, feedback, and observed problems.
  • Review deployer feedback as part of the monitoring cycle.
  • Act on deployer-reported issues within the response times defined in the monitoring plan.

9. Roles and responsibilities

Role                    | Monitoring responsibilities
Model Owner             | Maintains the monitoring plan, reviews metrics, responds to alerts, initiates revalidation.
AI Governance Lead      | Tracks monitoring compliance across the portfolio, reviews escalations, coordinates reporting.
Security                | Monitors for adversarial activity, reviews guardrail trigger patterns.
AI Governance Committee | Reviews the quarterly monitoring summary for high-risk systems, approves changes to monitoring plans.

10. Regulatory alignment

  • EU AI Act: Article 72 (post-market monitoring system and plan), Article 73 (serious incident reporting), Annex IV (monitoring plan in technical documentation).
  • ISO/IEC 42001: Clause 9.1 (monitoring, measurement, analysis, and evaluation).
  • NIST AI RMF: MANAGE function (MG-1: risk responses deployed, MG-2: deployment decisions revisited).

11. Review

This policy is reviewed annually. Individual monitoring plans are reviewed when systems change, when monitoring findings indicate plan inadequacy, or when the EU Commission publishes the official monitoring plan template (expected by February 2, 2026).

Document control

Field            | Value
Policy owner     | [AI Governance Lead]
Approved by      | [AI Governance Committee]
Effective date   | [Date]
Next review date | [Date + 12 months]
Version          | 1.0
Classification   | Internal


Post-Market Monitoring Policy | VerifyWise AI Governance Templates