
Governments across the world are increasingly using AI and automated decision-making (ADM) systems to determine government entitlements, prioritize public services, support predictive policing, and inform decisions regarding bail and sentencing. This technology promises many benefits, but it also raises significant risks to human rights, due process, procedural fairness, access to justice, and the trustworthiness of justice-system and government decision-making.

The LCO’s Issue Paper, Regulating AI: Critical Issues and Choices, is a ground-breaking analysis of how to regulate AI and ADM systems used by governments and other public institutions.

The report discusses key choices and options, identifies regulatory gaps, and proposes a comprehensive framework to ensure governments using AI and ADM systems protect human rights, ensure due process and promote public participation.

The LCO report answers a series of important questions:

  • What issues should AI and ADM regulation address?
  • What are the benefits and limits of “ethical AI”?
  • Which model (or models) best ensures AI and ADM transparency, accountability, protection of human rights, due process and “trustworthiness” in governments and related institutions?
  • Are there gaps in the Canadian regulatory landscape?
  • Is regulation in Canada robust and comprehensive enough to meet the proven challenges of these systems?

The LCO report proposes a comprehensive framework to address these issues, including:

  • Baseline requirements for all government AI and ADM systems, irrespective of risk.
  • Strong protections for AI and ADM transparency, including disclosure of the existence of a system and a broad range of data, tools and processes used by the system.
  • Mandatory “AI Registers”.
  • Mandatory, detailed and transparent AI or algorithmic impact assessments.
  • Explicit compliance with the Charter and appropriate human rights legislation.
  • Data standards.
  • Access to meaningful remedies.
  • Mandatory auditing and evaluation requirements.
  • Independent oversight of individual systems and government use of AI/ADM generally.

The report concludes that proactive law reform is necessary to ensure AI and ADM regulation maximizes these systems’ potential benefits while minimizing their potential harms. The report emphasizes the extraordinary regulatory gap in Canada: some of the most consequential potential uses of AI and ADM – including systems that determine government benefits and prioritize services, facial recognition systems, and systems used in criminal justice – remain under- or unregulated.

An Executive Summary of the report is available.
