Governments around the world are developing AI and automated decision-making (ADM) systems to support all kinds of decisions that affect people’s lives, including benefits determinations; education; compliance with government regulations and licensing; child protection; immigration; facial recognition and surveillance technology; and policing, bail, and sentencing.
AI and ADM systems have the potential to transform government decision-making by improving accuracy and consistency and by reducing backlogs. Notwithstanding this potential, government AI and ADM systems are controversial. There are many examples of government AI and ADM systems that have been biased, secretive, or ineffective, and that have caused significant harms to individuals and communities.
The LCO’s Accountable AI report addresses key areas of AI regulation, AI litigation, human rights, administrative law, privacy, and civil procedure to determine if there are gaps or unanswered questions that must be addressed to ensure meaningful legal accountability for government AI systems.
Accountable AI includes 19 recommendations to promote “accountable AI.” These include recommendations addressing biased government AI systems, “black-box” decision-making, and the need for public engagement. The LCO concludes that “accountable AI” depends on a mix of law reform tools and strategies, including front-end regulation, substantive law reform, enhanced due process protections, and innovative initiatives to improve access to justice. The LCO has also concluded that many of these tools and strategies are available to policymakers today. Others will depend on policymakers and stakeholders coming together to address a complex series of legal accountability challenges that often combine legal and technical analysis.
This is the third major LCO report looking at AI and ADM in the Canadian justice system. The first report, The Rise and Fall of Algorithms in American Criminal Justice: Lessons for Canada (October 2020), provides an important first look at the potential use and regulation of AI and algorithms in Canadian criminal proceedings. The second report, Regulating AI: Critical Issues and Choices (April 2021), is a ground-breaking analysis of how to regulate AI and ADM systems used by governments and other public institutions.
An Executive Summary of the report is available.
Related LCO Events and Projects
Expert Forum – December 2019
As part of this project, in December 2019 the LCO hosted an invitational forum on automated decision-making in the civil and administrative justice system. The forum was a cross-disciplinary event designed to promote innovative, collaborative and multi-disciplinary analysis and recommendations. The LCO invited more than 30 policy makers, lawyers, jurists, technologists, academics, and community organizers to share experiences, discuss issues, and consider law reform options. We also invited leading US advocates and researchers to help us learn from the American and international experience. Speakers included:
- Kevin DeLiban (Legal Aid Arkansas), Martha Owen (Deats, Durst, Owen & Levy P.L.L.C.), and Christiaan van Veen (Director, Digital Welfare State and Human Rights Project at NYU Law), who shared case studies of AI in administrative decision-making;
- Professor Jennifer Raso (University of Alberta, Faculty of Law) and Raj Anand (Partner at WeirFoulds, former Chief Commissioner of the OHRC), who discussed legal issues that arise from the use of AI and automated decision-making in government decision-making; and,
- Nele Achten (Berkman Klein Center for Internet and Society, Harvard University), Benoit Heshaies (Treasury Board of Canada Secretariat, Government of Canada), Amy Bihari (Ontario Digital Service, Government of Ontario), and Professor Julia Stoyanovich (NYU), who discussed regulatory efforts underway in the area of AI and civil justice.
Workshops with the Ontario Digital Service
In late 2020, the LCO completed a successful series of workshops in partnership with the Ontario Digital Service (ODS). Amongst its many responsibilities, the ODS is responsible for facilitating the development of AI within the provincial government.
The workshops brought together a wide range of legal, policy, operational and technology experts for a constructive discussion about AI and ADM development within the provincial government. Participants included representatives from several provincial ministries, including transportation, finance, consumer services, treasury, agriculture and the Anti-Racism Directorate of the Solicitor General. Participants also included representatives from the federal government, the City of London and several LCO advisory panel members.
The four-part workshop was designed around four stages of creating an AI system: development and design; implementation; evaluation; and legal challenges. During the workshop, we discussed the key themes of disclosure and transparency, bias, public participation, and due process. The workshop was grounded in a hypothetical case study concerning eligibility for social benefits.
The discussions were a productive collaboration of various viewpoints and expertise, highlighting similarities and gaps in understanding, best practices and potential reforms. The LCO’s report, Legal Issues and Government AI Development, identifies major themes and practical insights to assist governments considering this technology.
Accountable AI (June 2022)
Legal Issues and Government AI Development (March 2021)