Project Overview

The Law Commission of Ontario (LCO) Criminal AI Lifecycle Project is the first legal and policy reform project in Canada to review how artificial intelligence (AI) affects each stage of a criminal justice matter. The five stages are:

  • police investigations
  • Crown consideration of charges
  • the role of risk assessment in bail, sentencing, and community diversion
  • trial and appeal, and
  • systemic oversight.

The “lifecycle” approach has two goals. First, to help government and criminal justice sector institutions be proactive about the risks and benefits of AI for access to justice, due process, human rights, and civil liberties. Second, to help these institutions identify and adopt concrete, practical, and complementary approaches to the shared challenges of AI.

Why is this project timely?

Around the world, police, prosecutors, judges, corrections officers, and community safety services increasingly use AI systems to support their decision-making. Yet AI-informed decision-making raises profound questions about its impact on access to justice, due process, human rights, and civil liberties. Early – and in some cases controversial – examples of AI in Canadian criminal justice include facial recognition, predictive policing, decryption of digital devices, and identification of high-risk inmates. Jurisdictions outside Canada also use AI to conduct sentencing and bail assessments, perform probabilistic DNA analysis, and profile public activities on social media.

Canada has yet to widely adopt these AI technologies, but experience from other jurisdictions suggests this is certain to change. That experience also shows that AI is often implemented unevenly, which in criminal justice leads to conflicts between institutions. These conflicts create barriers, delay proceedings, and may fail to protect rights. The LCO believes a thoughtful and intentional approach is necessary to introduce AI into Canada’s criminal justice system, and that its introduction must support established legal rights and principles. This requires a legal and policy framework that anticipates the risks and benefits of AI systems, and one that is systemically aware, helping institutions adopt complementary responses to the shared challenges of AI. To achieve these objectives, the LCO’s five-part review expects to engage with a wide array of potential issues. For example, the papers may consider:

  • the unique characteristics of AI in supplementing or supplanting human decision-making in relation to Charter rights and due process
  • how to be proactive and manage the impacts of AI through instruments like directives, impact assessment tools, procurement policies, and policy frameworks. These may be specific to criminal law and distinct from similar instruments that guide government decision-making, administrative and civil law, and the private sector
  • necessary litigation resources to ensure a full and fair defense
  • the use of AI for good, such as monitoring justice systems for bias and discrimination, or more quickly scheduling proceedings
  • requirements and standards of AI transparency, notice, disclosure, data bias, explainability, and performance validation
  • gaps in existing legislation such as the Criminal Code of Canada, Evidence Act, Provincial Offences Act, and others
  • the assertion of proprietary trade secrets
  • policies that guide the Crown
  • the role of public participation and other external technological oversight mechanisms
  • wrongful convictions, systemic human rights, and civil liberties impacts
  • the role of expert witnesses
  • the KYT (“know your tech”) training needs of judges, Crowns, defense counsel, and other court officers to ensure a robust role for human-in-the-loop oversight and safeguards against undue deference to AI recommendations


How will the project develop and who is participating in it?

The LCO has invited leading practitioners and experts to author reports on each stage considered in the lifecycle analysis. The reports will identify issues and challenges raised by AI and pose key questions and options for legal and policy reform. The authors will work on the reports in parallel to better identify and discuss themes and issues that cut across the different stages. Publication of the five reports will support broad and inclusive public consultations on the identified issues and options.

The authors of the five papers are:

AI in Police Investigations

  • Lynda Morgan, Defense Counsel, Addario Law Group, and Co-Chair, Osgoode Hall Annual TechCrime Program
  • Andrew Guaglio, Defense Counsel, Fenton Law Group

Crown Review of Charges

  • Alpha Chan, Chief Information Security Officer, Information & Technology Command, Toronto Police Service
  • Elena Middelkamp, Crown Law Office – Criminal, Ministry of the Attorney General

Risk Assessments in Bail, Sentencing, and Diversion

  • Gideon Christian, Professor of Law, Faculty of Law, University of Calgary
  • Armando D’Andrea, Duty Counsel and Criminal Law Policy, Legal Aid Ontario

AI at Trial and Appeals

  • Paula Thompson, Strategic Initiatives, Ministry of the Attorney General
  • Eric Neubauer, Defense Counsel, Neubauer Law, and Co-Chair, Criminal Lawyers Association Technology Committee

Systemic Oversight Mechanisms

  • Brenda McPhail, Director, Privacy, Technology & Surveillance Program, Canadian Civil Liberties Association
  • Jagtaran Singh, Legal Counsel, Ontario Human Rights Commission
  • Marcus Pratt, Director of Policy, Legal Aid Ontario

Advisory Committee

An external Advisory Committee also supports the project. Advisory Committee members meet from time to time to review the work of the project and offer advice and guidance. The members of the Advisory Committee are:

  • Rosemarie Juginovic, Chief Justice, Ontario Superior Court of Justice
  • Morris Pistyner, Strategic Prosecution Management Issues, Public Prosecution Service of Canada, Department of Justice
  • Gerald Chan, Stockwoods LLP Barristers, and Criminal Lawyers Association
  • Jane Mallen, Ministry of the Attorney General and LCO Board of Governors
  • Diana Grech, Strategic Analytics Unit, Ministry of the Attorney General
  • Rosanna Giancristiano, Director, Court Operations, Ministry of the Attorney General
  • Michelina Longo, Director, External Relations, Ministry of the Solicitor General
  • Jessica Mahon, Ministry of the Solicitor General
  • Michael Swinburne, Senior Policy Advisor, Canadian Human Rights Commission
  • David Murakami Wood, Incoming Professor, Department of Criminology, University of Ottawa
  • Daniel Konikoff, Acting Director, Canadian Civil Liberties Association, and PhD Candidate, Centre for Criminology and Socio-legal Studies, University of Toronto


What work is already complete?

The LCO has developed a series of papers examining the impact of AI in the criminal, civil, and administrative contexts. These papers also identify critical issues and choices for regulation and compare Canadian and EU approaches. The complete series is available at https://www.lco-cdo.org/ai. Two papers are most relevant to the Criminal AI Lifecycle Project:


What are the next steps and timeline for the project?

Through the summer and fall of 2023, the LCO will collaborate with the authors to develop the five project papers. The LCO will then release the papers and commence public consultations in 2024.

How do I get involved?

Any questions, comments, or suggestions should be sent to Ryan Fritsch, Legal Counsel and Project Lead, Law Commission of Ontario: rfritsch@lco-cdo.org.

The LCO can also be contacted at:

Email: LawCommission@lco-cdo.org

Web: www.lco-cdo.org
X (formerly Twitter): @LCO_CDO
LinkedIn: Law Commission of Ontario | Commission du droit de l’Ontario

Tel: (416) 650-8406
Toll-free: 1 (866) 950-8406

Law Commission of Ontario
2032 Ignat Kaneff Building
Osgoode Hall Law School, York University
4700 Keele Street, Toronto, Ontario, Canada M3J 1P3

Project Documents