Due diligence in AI procurement refers to the process of systematically evaluating the technical, ethical, legal, and operational aspects of an AI system before purchasing or integrating it into a business or government environment. This includes assessing the vendor, the model, the training data, security controls, and compliance with relevant frameworks or laws.
This matters because AI tools are increasingly used in critical decisions affecting people’s lives, finances, health, or freedoms. Without proper scrutiny, organizations risk adopting opaque, biased, or non-compliant systems. For AI governance, procurement, and legal teams, due diligence ensures accountability, protects against downstream risks, and supports adherence to frameworks like ISO/IEC 42001 and regulations such as the EU AI Act.
> “Nearly 60% of organizations have adopted AI tools without fully assessing the source of training data or the vendor’s compliance history.” (Source: Global AI Risk and Readiness Index, 2023)
What due diligence in AI procurement includes
AI procurement requires more than checking functionality or price. It demands structured review of aspects that may affect safety, fairness, transparency, and performance.
Key areas to evaluate include:
- Vendor credibility: Track record, financial health, and past compliance issues.
- Model transparency: Availability of documentation, model cards, and decision-logic explanations.
- Training data sources: Clarity on where data came from, how it was labeled, and whether it includes bias or personal information.
- Security practices: Encryption, access control, and response plans for data breaches or model misuse.
- Compliance status: Alignment with local and international AI laws, data protection rules, and ethical guidelines.
- Auditability and support: Whether the system allows post-deployment audits, logging, and human-in-the-loop controls.
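The evaluation areas above can be tracked as a simple, machine-readable checklist so that open findings are visible before sign-off. A minimal sketch in Python; the class and field names are illustrative assumptions, not a standard schema:

```python
from dataclasses import dataclass, field

@dataclass
class ChecklistItem:
    area: str          # e.g. "Vendor credibility" or "Model transparency"
    passed: bool       # whether the area cleared review
    notes: str = ""    # reviewer notes, e.g. what documentation was missing

@dataclass
class DueDiligenceReview:
    vendor: str
    items: list = field(default_factory=list)

    def add(self, area, passed, notes=""):
        self.items.append(ChecklistItem(area, passed, notes))

    def open_findings(self):
        # Areas that failed review and should block procurement sign-off.
        return [i.area for i in self.items if not i.passed]

review = DueDiligenceReview(vendor="ExampleVendor")
review.add("Vendor credibility", True)
review.add("Model transparency", False, "No model card provided")
review.add("Training data sources", True)
print(review.open_findings())  # → ['Model transparency']
```

Keeping findings in a structured form like this also produces the documentation trail that auditors and regulators typically ask for.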
Skipping these steps can lead to reputational harm or financial losses if issues emerge later.
Real-world example of failed procurement oversight
A European public agency procured an AI hiring tool without evaluating its dataset or internal decision rules. After rollout, the tool disproportionately excluded older applicants from shortlists. A third-party audit revealed biased training data and incomplete documentation. The contract was terminated, and the agency faced public scrutiny and regulatory inquiries.
This incident shows how a lack of upfront diligence can turn a cost-saving initiative into a legal and reputational liability.
Best practices for conducting due diligence on AI tools
Due diligence should be proactive, documented, and guided by checklists tailored to AI-specific risks.
Begin with a framework and adapt based on industry and use case:
- Create a multidisciplinary review team: Include legal, technical, ethical, and procurement experts.
- Use standard assessment checklists: Adopt templates from the OECD AI Principles, AI Procurement Guidelines, or your national data protection authority.
- Request model documentation: Require vendors to share model cards, data sheets, or risk assessments.
- Evaluate explainability: Test whether the system can provide clear, meaningful explanations to users and regulators.
- Require access logs and usage controls: Ensure visibility into who uses the system and how its outputs are applied.
- Include contractual safeguards: Add clauses covering audit rights, retraining options, and legal liability for harmful outputs.
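The documentation-request step above can be enforced programmatically: compare what a vendor has actually supplied against a required set of artifacts before the review proceeds. A small sketch; the artifact names are illustrative assumptions drawn from the list above, not a formal standard:

```python
# Required artifacts a vendor should supply before sign-off.
# Names are illustrative only; tailor the set to your framework.
REQUIRED_ARTIFACTS = {
    "model_card",
    "data_sheet",
    "risk_assessment",
    "third_party_audit",
}

def missing_artifacts(supplied):
    """Return the required artifacts the vendor has not yet provided, sorted."""
    return sorted(REQUIRED_ARTIFACTS - set(supplied))

print(missing_artifacts(["model_card", "data_sheet"]))
# → ['risk_assessment', 'third_party_audit']
```

A gap list like this gives procurement teams a concrete follow-up request for the vendor rather than a vague "send us your documentation."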
Platforms and assessment frameworks such as Credo AI, Z-Inspection, and GovernML offer toolkits for structured due diligence.
FAQ
Is due diligence mandatory for public sector AI procurement?
Yes, in many jurisdictions. The EU AI Act and national laws increasingly require impact assessments and technical documentation for high-risk AI systems.
What documents should vendors provide?
Vendors should supply model cards, training data summaries, compliance reports, third-party audit results, and system architecture diagrams.
Can smaller vendors comply with these requirements?
Yes. While full-scale assessments may be demanding, vendors can adopt lightweight versions of model documentation, privacy reviews, and impact assessments to meet transparency expectations.
How often should due diligence be repeated?
Initial procurement requires a full review, but follow-up reviews are recommended when the system is updated, retrained, or integrated into new use cases.
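The re-review cadence described above can be reduced to a simple trigger check: certain lifecycle events should automatically prompt a repeat assessment. A minimal sketch; the event names are illustrative assumptions:

```python
# Lifecycle events that should trigger a repeat due diligence review,
# per the cadence described above. Event names are illustrative only.
REVIEW_TRIGGERS = {"model_updated", "model_retrained", "new_use_case"}

def needs_review(events):
    """Return True if any observed event warrants a repeat review."""
    return bool(REVIEW_TRIGGERS & set(events))

print(needs_review(["minor_ui_change"]))   # → False
print(needs_review(["model_retrained"]))   # → True
```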
Summary
Due diligence in AI procurement is not a box-ticking exercise; it is a safeguard against hidden risks. Whether you’re buying an AI tool for hiring, insurance, customer service, or government operations, a structured evaluation protects your organization from legal exposure and reputational harm. Aligning with standards like ISO/IEC 42001 helps ensure that the systems you bring in are not only effective, but responsible, transparent, and safe.