Project Overview


Artificial intelligence (AI) and algorithms are often referred to as “weapons of math destruction.”  Many systems have also been credibly described as “a sophisticated form of racial profiling.”  These views are widespread in current discussions of AI and algorithms.

The LCO Issue Paper, The Rise and Fall of Algorithms in American Criminal Justice:  Lessons for Canada, is the first of three LCO Issue Papers considering AI and algorithms in the Canadian justice system.  The paper provides an important first look at the potential use and regulation of AI and algorithms in Canadian criminal proceedings.  The paper identifies important legal, policy and practical issues and choices that Canadian policymakers and justice stakeholders should consider before these technologies are widely adopted in this country.

The LCO’s second Issue Paper, Regulating AI:  An International Survey (to be released in 2021), considers current efforts to regulate AI and algorithms in government decision-making.  The LCO’s third Issue Paper, AI, Algorithms and Government Decision-Making (to be released in 2021), considers the use of AI and algorithms in civil and administrative law decision-making, such as determining welfare entitlements, administrative proceedings and government investigations.

The Rise and Fall of Algorithms in American Criminal Justice:  Lessons for Canada considers the extraordinary growth of algorithmic pretrial risk assessments in the United States.  These are AI or algorithmic tools that aid criminal courts in pretrial custody or bail decision-making.  Similar tools are used in other criminal proceedings, including sentencing.

Algorithmic pretrial risk assessments are an important case study in the use of AI and algorithms in criminal justice.  Bail proceedings adjudicate and balance fundamental liberty and public safety interests while requiring high standards of due process, accountability and transparency.

The use of these tools has expanded rapidly across the United States, to the point where these systems are probably the most widely implemented AI or algorithmic decision-making tool in criminal proceedings in the world.  This expansion has been the catalyst for an unprecedented and rapid evaluation of how algorithmic tools in criminal justice are designed, developed and deployed.

The LCO’s Issue Paper summarizes ten important lessons and observations regarding AI and algorithms in American criminal proceedings.  The LCO highlights the issues and questions that will likely arise in Canada if, or more likely when, Canadian policymakers consider the use of AI or algorithmic tools in Canadian criminal proceedings, including:

  • Historic Bias and Data Discrimination
  • The “Metrics of Fairness”
  • Data Transparency
  • Data Accuracy, Reliability and Validity
  • “Risk Scoring” and Automation Bias
  • The Distinction Between Predictions, Law and Policy
  • Best Practices in Risk Assessments
  • The Need for Public Participation
  • Algorithmic Accountability
  • The Limits of Litigation

The LCO paper pays particular attention to issues regarding racism and data discrimination.  Anti-Black and anti-Indigenous racism has been a long-standing concern in Ontario’s justice system.  Perhaps not surprisingly, the LCO has identified many unexplored, unregulated and poorly understood issues respecting data, discrimination, algorithms and the law.  The paper also considers this technology from the related and overlapping perspective of access to justice for low-income and vulnerable communities.

The paper concludes with an analysis of law reform and regulatory issues and options to assist Canadian policymakers and stakeholders in identifying, discussing and developing appropriate legal rules for the Canadian criminal justice system.

An Executive Summary of the report is available.

Project Documents