Project Overview

The Law Commission of Ontario’s (LCO) multi-year AI in the Civil/Administrative Justice System project brings together policymakers, legal professionals, technologists, NGOs and community members to discuss the development, deployment and regulation of artificial intelligence (AI), automated decision-making (ADM) and algorithms, and their impact on access to justice, human rights and due process.

The catalyst for this project is the extraordinary growth in the use of these technologies by governments and public agencies across the world. AI and ADM systems are increasingly being used to make decisions affecting personal liberty, government benefits, regulatory compliance and access to important government services. The growth of this technology has been controversial: questions about racial bias, “data discrimination,” “black box” decision-making and public participation have surfaced quickly and repeatedly when governments use AI and ADM systems. These and other issues raise new and complex law reform questions that need to be addressed in Canada.

The LCO benefited from the input of an expert Advisory Committee during earlier phases of the project:

  • Abdi Aidid – Assistant Professor, University of Toronto, Faculty of Law
  • Raj Anand – Chair of the LCO Board, Partner at WeirFoulds LLP
  • Amy Bihari – Senior Data Advisor, Government of Ontario
  • Insiya Essajee – Counsel, Ontario Human Rights Commission
  • Michelle Mann – General Counsel, Department of Justice
  • Jonathon Penney – Professor of Law, Osgoode Hall Law School
  • Carole Piovesan – Partner and Co-Founder, INQ Data Law
  • Marcus Pratt – Policy and Strategic Research, Legal Aid Ontario
  • Jennifer Raso – Professor of Law, University of Alberta
  • Teresa Scassa – Professor of Law, University of Ottawa
  • Julia Stoyanovich – Professor of Computer Science, New York University
  • Christiaan van Veen – Director, Digital Welfare State and Human Rights Project, NYU School of Law

Major Initiatives

AI impact assessments are a leading strategy to promote “trustworthy AI” in government and private sector AI systems.

Human Rights AI Impact Assessment

In November 2024, the Law Commission of Ontario and Ontario Human Rights Commission released the first AI human rights impact assessment (HRIA) based on Canadian human rights law. The LCO/OHRC HRIA is a practical step-by-step guide that will help Canadian public and private sector organizations embed “human rights by design” in their AI systems.

More information about the LCO-OHRC human rights impact assessment is available here.

In March 2025, the LCO released an AI impact assessment Backgrounder explaining the benefits, limitations and design choices of AI impact assessments, along with stakeholder perspectives on them. The Backgrounder will help policymakers and stakeholders understand the key issues surrounding these important AI governance tools.

More information about the LCO-OHRC human rights impact assessment backgrounder is available here.

The LCO’s Bill 194 Submission makes recommendations on Bill 194, the Strengthening Cyber Security and Building Trust in the Public Sector Act, 2024. Bill 194 is the provincial government’s first effort to regulate AI use by public sector entities in Ontario. The submission recommends several amendments to ensure Trustworthy AI in Ontario, including provisions respecting human rights, procedural fairness, criminal justice, disclosure, AI accountability, risk management, prohibitions, and AI governance.

This project is the civil and administrative law equivalent of the LCO’s criminal justice AI project.

In June 2022, the LCO published Accountable AI, a comprehensive analysis of how to address the risks of AI and ADM in government benefit determinations, child protection risk assessments, immigration determinations, regulatory compliance, and other decision-making in the civil and administrative justice systems.  The paper addresses AI regulation, “Trustworthy AI,” AI litigation, and how human rights and administrative law need to adapt to the challenges of government AI decision-making.

More on the LCO’s AI and Automated Decision-Making in the Civil/Administrative Justice System project is available here.

Governments across the world are increasingly using AI and automated decision-making (ADM) systems to determine government entitlements, prioritize public services, support predictive policing and inform decisions regarding bail and sentencing.

The LCO’s Regulating AI: Critical Issues and Choices report is a ground-breaking analysis of how to regulate AI and automated decision-making (ADM) systems used by governments and other public institutions. The report discusses key choices and options, identifies regulatory gaps, and proposes a comprehensive framework to ensure governments using AI and ADM systems protect human rights, ensure due process and promote public participation. More on the LCO’s Regulating AI report is available here.

Project Lead and Contacts

The LCO’s Project Lead is Susie Lindsay. She can be contacted at SLindsay@lco-cdo.org.

The LCO can also be contacted at:

Email: LawCommission@lco-cdo.org

Web: www.lco-cdo.org
X (formerly Twitter): @LCO_CDO
LinkedIn: Law Commission of Ontario | Commission du droit de l’Ontario

Tel: (416) 650-8406

Law Commission of Ontario
2032 Ignat Kaneff Building
Osgoode Hall Law School, York University
4700 Keele Street, Toronto, Ontario, Canada M3J 1P3
