On Wednesday May 15, 2019 the Law Commission of Ontario (LCO) hosted a half-day symposium on AI for Lawyers: A Primer on Artificial Intelligence in Ontario’s Legal System. The event was held in-person at Osgoode Hall Law School and broadcast via webinar, archived and freely available below.

The event was conceived as AI 101 – designed for lawyers. New technologies, including algorithms, automated decision-making and artificial intelligence (AI), are set to challenge our long-standing assumptions and practices regarding human rights, due process and access to justice.

This symposium asked: how well do justice system professionals understand these technologies? What technologies are already being deployed in the legal system? What are the broad legal implications of adopting AI in the justice system? How can or should the justice system regulate these challenges?

To help answer these questions, the LCO partnered with Element AI, a leading Canadian AI developer, and Osgoode Hall Law School, to host the event.

This initiative is part of the Law Commission’s multi-year Digital Rights Project, funded in part by the Law Foundation of Ontario. Readers interested in learning more about AI and criminal justice issues can link to the Law Commission’s recent Roundtable on Algorithms in the Criminal Justice System.

 

Speakers

  • Richard Zuroff from Element AI explained the basics of artificial intelligence, machine learning and deep learning, along with the technical challenges of regulating AI
  • Philip Dawson – Element AI’s lead on public policy – gave an overview of regulatory approaches to AI from Canada and around the world
  • Carole Piovesan (INQ Data Law) provided insights as an AI lawyer and data governance specialist
  • Ryan Fritsch, lead for the LCO’s Digital Rights Project, hosted a panel discussion with:
    • Jill Presser (Presser Barristers) discussing concerns for litigating AI in criminal justice matters
    • Patrick McEvenue, Director of Digital Policy, discussing the use of AI systems at Immigration, Refugees and Citizenship Canada, and
    • Amy ter Haar, lawyer and SJD candidate, discussing self-regulatory approaches of “reg tech” through blockchain and smart contracts

 

Materials

Click for Participant Package

 

Presentations

Presentation 1: A Primer on AI
Richard Zuroff, Element AI (@element_ai)

Click here to read presentation report

The event opened with a primer on artificial intelligence (AI) by Element AI’s Richard Zuroff. Richard addressed some common myths about AI, machine learning and deep learning to make the technology more accessible to the legal layperson. Starting with a broad definition of artificial intelligence as agents or systems that perceive their environment and take actions to optimize success in achieving a goal, Richard explained how AI systems can augment human activity, using data and machine learning to fill informational and feedback gaps within organizations. By demystifying deep neural networks and deep learning processes, he offered insight into the algorithm- and data-driven design of AI systems, showing that they are more than “black boxes” of inscrutable code. Through direct involvement and participation in the algorithm-building process, automation and AI decision-making can be aligned with human values.

Keywords: AI, machine learning, deep learning, deep neural networks, data-driven, “black box”, automation

 

Presentation 2: Ethical Issues in AI – Regulatory Approaches

Philip Dawson (@P__Dawson)

Click here for slide presentation.

Philip Dawson, Element AI’s lead for public policy, highlighted current guidelines, national strategies, and internal/industry standards that shape ethical AI frameworks and AI policy. He argued that the current state of self-regulation for AI technology is lacking, and examined new directions for AI regulation and policy with human rights at the foundation of the conversation. He noted consumer protection initiatives as a growing concern in the era of digital rights, and surveyed developments in digital consent and information privacy, including the emerging concept of “data trusts”.

Keywords: AI, self-regulation, regulation and policy, consumer protection, informed consent, privacy, data trusts

 

Presentation 3: Ethical Issues in AI – Accountability and Governance

Carole Piovesan (@CJPiovesan)

Click here for slide presentation.

“Be informed, get involved, have no fear” – this is the advice offered by Carole Piovesan of INQ Data Law in her discussion of regulation and accountability for AI. Carole highlighted the many areas where AI will play a role in the future, and the rising need for accountability. She advocates for a “new generation of law” and a re-evaluation of core principles of law as they apply to AI technology. Carole analyzed the immediate, medium-term, and long-term legal changes brought about by AI, and what accountability really means for AI systems of differing capabilities.

Keywords: regulation, accountability, AI, principles of law, legal change

 

Presentation 4: Panel Discussion – AI in Practice

Host: Ryan Fritsch, LCO Digital Rights Project

Jill Presser (@JillPresser) highlighted challenges rapidly emerging in the criminal justice system, particularly in litigating algorithms. She discussed four major challenges to litigating algorithms and AI: discovery, expert evidence, technological literacy, and funding. Referencing AI Now’s report on litigating automated decision-makers, Presser explained the potential difficulties in obtaining full disclosure of an AI system subject to proprietary or trade-secret protection, the admissibility of evidence from an “expert” that is an algorithm, the potential for automation bias in the judicial system, and the resources required to challenge the decisions of these algorithmic decision-makers.

Keywords: litigating algorithms, automated decision-makers, disclosure, automation-bias

 

Patrick McEvenue, Director of Digital Policy for Immigration, Refugees and Citizenship Canada (IRCC), discussed the deployment of an automated decision-making system at IRCC to streamline immigration applications. McEvenue explained that AI technology is used to determine positive eligibility criteria for straightforward temporary residence applications. He emphasized that AI augments a decision-making process in which human officers make the ultimate determination. This was a key issue raised in recent reports, including Citizen Lab’s Bots at the Gate. McEvenue outlined the guiding principles that innovators should follow in developing automated decision systems to ensure client confidence and transparency. He also previewed IRCC’s forthcoming Policy Playbook/Innovator’s Handbook, a policy guidance and design tool intended to spur good, ethical work, and discussed the importance of minimizing bias and risk in the design of automated decision support systems.

Click here for slide presentation.

Keywords: IRCC, Immigration, automated decision-maker, AI, augmentation, Policy Playbook, Innovator’s Handbook, bias, automated decision support

 

Amy ter Haar (@amyterhaar), lawyer and SJD candidate, discussed potential applications of AI in contract law and how emerging regulatory technology (“reg tech”) can help improve consumer protection. Drawing on her knowledge of blockchains and smart contracts, Amy explained the emerging benefits of using technology to regulate technology; through reg tech, these benefits can be extra-jurisdictional, not restricted to any single jurisdiction. Amy also discussed emerging federated learning and split learning models of machine learning, and the potential of reg tech to increase consumer protection and data privacy without losing the benefits of advancing technology.

Keywords: regulatory technology, reg tech, blockchain, smart contract, federated learning, split learning, AI machine-learning, consumer protection, data privacy