Project Overview and Status
The Law Commission of Ontario’s (LCO) multiyear AI in the Civil/Administrative Justice System project brings together policymakers, legal professionals, technologists, NGOs and community members to discuss the development, deployment, regulation and impact of artificial intelligence (AI), automated decision-making (ADM) and algorithms on access to justice, human rights, and due process.
The catalyst for this project is the extraordinary growth in the use of these technologies by governments and public agencies across the world. AI and ADM systems are increasingly being used to make decisions affecting personal liberty, government benefits, regulatory compliance and access to important government services. The growth of these technologies has been controversial: questions about racial bias, “data discrimination,” “black box” decision-making and public participation have surfaced quickly and repeatedly when AI and ADM systems are used by governments. These issues, and others, raise new and complex law reform questions that have not yet been addressed in Canada.
The LCO has assembled an expert Advisory Committee to provide input over the course of the project.
What issues is the project examining?
AI impact assessments (AIAs) have emerged as a leading strategy to promote “Trustworthy AI” in many jurisdictions and sectors. Unfortunately, most AIAs are based on international “ethical AI” norms or foreign laws. Many also overlook human rights.
In response, the LCO and Ontario Human Rights Commission (OHRC) have developed the first AI human rights impact assessment (HRIA) based on Canadian human rights law. The HRIA will help governments, public agencies, and the private sector assess and mitigate the human rights impact of AI systems in a broad range of applications. The Canadian Human Rights Commission (CHRC) was a collaborator on this project.
More information about the LCO-OHRC human rights impact assessment is available here.
This project is the civil and administrative law equivalent of the LCO’s criminal justice AI project.
In June 2022, the LCO published Accountable AI, a comprehensive analysis of how to address the risks of AI and ADM in government benefit determinations, child protection risk assessments, immigration determinations, regulatory compliance, and other decision-making in the civil and administrative justice systems. The paper addresses AI regulation, “Trustworthy AI,” AI litigation, and how human rights and administrative law need to adapt to the challenges of government AI decision-making.
More on the LCO’s AI and Automated Decision-Making in the Civil/Administrative Justice System project is available here.
Governments across the world are increasingly using AI and ADM systems to determine government entitlements, prioritize public services, conduct predictive policing and support decisions regarding bail and sentencing.
The LCO’s Bill 194 Submission makes recommendations on Bill 194, the Strengthening Cyber Security and Building Trust in the Public Sector Act, 2024. Bill 194 is the provincial government’s first effort to regulate AI use by public sector entities in Ontario. The submission recommends several amendments to ensure Trustworthy AI in Ontario, including provisions respecting human rights, procedural fairness, criminal justice, disclosure, AI accountability, risk management, prohibitions, and AI governance.
The LCO’s Regulating AI: Critical Issues and Choices report is a ground-breaking analysis of how to regulate AI and ADM systems used by governments and other public institutions. The report discusses key choices and options, identifies regulatory gaps, and proposes a comprehensive framework to ensure that governments using AI and ADM systems protect human rights, ensure due process and promote public participation. More on the LCO’s Regulating AI report is available here.
The LCO’s Comparing European and Canadian AI Regulation report compares and contrasts AI regulation in Canada and the European Union. This paper considers the strengths and weaknesses of each approach and identifies lessons for Canadian policymakers. This paper was written in partnership with the Research Chair on Accountable Artificial Intelligence in a Global Context at the University of Ottawa, Faculty of Law – Civil Law Section. More information is available here.