Canada's Algorithmic Impact Assessment (AIA) is the world's first mandatory government-wide tool for evaluating AI systems before deployment. This questionnaire-based assessment assigns your automated decision-making system an impact level, from Level I (little to no impact) to Level IV (very high impact), and determines which mitigation requirements you must meet. Unlike voluntary frameworks, this tool carries real regulatory weight: federal departments must complete it before implementing any automated decision-making system, making it essential reading for anyone working with AI in Canadian government contexts.
The AIA uses a point-based system across four key dimensions: decision type, algorithm type, data inputs, and organizational capacity. Each question assigns points based on risk factors—for example, using machine learning adds more points than rule-based systems, while processing sensitive personal data significantly increases your score.
Your final score determines your system's impact level.
The tool automatically calculates your score and generates specific compliance requirements based on your result, removing guesswork about what's needed for Treasury Board Directive compliance.
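The point-based mechanics described above can be sketched in a few lines. This is an illustrative model only: the dimension names, point values, and level thresholds below are hypothetical assumptions, not the actual AIA questionnaire's values.

```python
# Hypothetical sketch of a point-based impact scoring scheme like the AIA's.
# All point values and thresholds are illustrative, not the real questionnaire's.

def aia_score(answers: dict) -> int:
    """Sum risk points across the four assessment dimensions."""
    points = 0
    # Decision type: decisions affecting rights or benefits score higher.
    points += {"administrative": 1, "benefits": 3, "rights": 5}[answers["decision_type"]]
    # Algorithm type: machine learning adds more points than rule-based logic.
    points += {"rule_based": 1, "machine_learning": 4}[answers["algorithm_type"]]
    # Data inputs: sensitive personal data significantly raises the score.
    points += 4 if answers["sensitive_personal_data"] else 1
    # Organizational capacity: missing training or governance adds risk.
    points += 0 if answers["staff_trained"] else 2
    return points

def impact_level(score: int) -> str:
    """Map a raw score to an impact level (hypothetical thresholds)."""
    if score <= 5:
        return "Level I"
    if score <= 9:
        return "Level II"
    if score <= 12:
        return "Level III"
    return "Level IV"

# A rule-based system making routine administrative decisions on
# non-sensitive data, run by a trained team, lands at the lowest level:
low_risk = aia_score({
    "decision_type": "administrative",
    "algorithm_type": "rule_based",
    "sensitive_personal_data": False,
    "staff_trained": True,
})
print(low_risk, impact_level(low_risk))  # → 3 Level I
```

The design point this illustrates: because every answer only adds points, any single high-risk factor (machine learning, sensitive data) pushes the total toward a higher level, which is why the real tool can translate a score directly into compliance requirements.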
Most AI risk assessments are voluntary guidelines—Canada's AIA has teeth. It's legally mandated under Treasury Board policy, meaning federal departments face real consequences for non-compliance. The tool also goes beyond technical risk to evaluate organizational readiness, asking whether your department has adequate staff training and governance structures.
The assessment covers the full AI lifecycle, from initial development through ongoing monitoring. It's designed for government decision-makers, not just technical teams, using plain language questions about business impact rather than algorithmic complexity. This makes it accessible to the program managers who actually deploy these systems.
Timing confusion: Many teams complete the AIA too late in the development process. It should be done during planning phases, not after your system is built—high-impact scores may require fundamental design changes.
Vendor coordination: When using external AI services, determining who completes which sections can be unclear. Generally, the government department completes questions about decision context and organizational capacity, while vendors provide technical algorithm details.
Ongoing obligations: The AIA isn't a one-time checkpoint. Systems must be reassessed when materially changed, but the tool doesn't clearly define what constitutes a "material change"—leading to inconsistent re-assessment practices across departments.
Before starting your assessment, budget your time accordingly: the questionnaire itself typically takes 30 to 60 minutes to complete, but implementing the required mitigation measures for medium- and high-impact systems can take months.
Published: 2024
Jurisdiction: Canada
Category: Assessment and evaluation
Access: Public access