Zoe Chan, Anika Legal
Anika Legal is an Australian not-for-profit organization that fights alongside renters for housing justice. We fight for a world where renters can thrive in their homes.
Since launching five years ago, Anika has experienced substantial, people-focused growth powered by technology. As a result, despite employing only two lawyers (one of whom joined Anika in 2023), we have provided over 800 renters with ongoing casework support, followed a data-driven approach to continually advocate for systemic change amid the housing crisis, and worked with over 200 law students to fight for a fairer world for renters.
We will present the impact of Anika Legal’s technology-driven model, highlight our innovative approaches, and share key lessons learned in enhancing access to justice. We also intend to showcase our digital infrastructure during the conference.
Impact of Technology
Anika leverages technology to address the housing crisis and bridge the justice gap. Our bespoke digital case management platform enables us to center remote and flexible work – creating a more robust support system for renters by drawing on untapped legal resources. This flexibility has allowed us to assist over 800 renters, with services ranging from bond recovery to eviction support. This tech-enabled approach not only enhances our service delivery but also strengthens our advocacy for systemic reforms in the housing sector.
Our advocacy efforts are fueled by the evidence gathered through our casework. The data collected through our platform helps us identify trends and systemic issues, enabling us to advocate effectively for policy changes. To this end, our platform optimizes data collection without imposing additional administrative burden on either our clients or our workers, and that data feeds into our advocacy initiatives.
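As a minimal illustration of the idea of turning casework records into advocacy evidence, the sketch below tallies issue types across cases. The records, field names, and counts are invented for illustration and are not Anika Legal’s actual data schema.

```python
from collections import Counter

# Hypothetical case records, loosely in the shape a case management
# platform might export; these fields and values are illustrative only.
cases = [
    {"issue": "repairs", "suburb": "Footscray", "resolved": True},
    {"issue": "repairs", "suburb": "Brunswick", "resolved": False},
    {"issue": "bond recovery", "suburb": "Footscray", "resolved": True},
    {"issue": "repairs", "suburb": "Footscray", "resolved": True},
]

# Count how often each issue type appears, so the most frequent systemic
# problems can be surfaced and cited in policy submissions.
issue_counts = Counter(case["issue"] for case in cases)
print(issue_counts.most_common(1))  # -> [('repairs', 3)]
```

Because the counts come directly from records created in the normal course of casework, this kind of aggregation adds no extra administrative step for clients or staff.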
Digital Infrastructure Showcase
Our presentation will showcase our digital infrastructure, demonstrating how our bespoke platform operates and facilitates the efficient delivery of legal services. We will share our journey in developing our digital infrastructure with very limited funding. Attendees will have the opportunity to see firsthand how tech-enabled solutions can be harnessed – even in a low-resource environment – and hear about the lessons we’ve learned along the way, such as:
- Lo-fi tech solutions can often be used to quickly iterate practice processes without the need to seek additional tech upgrade funding.
- Codesigning practice improvements with core user groups enables quick iteration and optimizes change management.
- Practice design centered on the right users can unlock untapped human resources.
Daniel Escott, Osgoode Hall Law School
Michael Litchfield, University of Victoria
The integration of Artificial Intelligence (AI) into the legal and justice system in Canada offers significant potential for innovation and efficiency. However, it also introduces risks – legal, ethical, and professional. This presentation bridges the gap between theories on AI use and practical, user-centric policy design by focusing on the development and implementation of policies that maximize AI’s benefits while mitigating its inherent risks.
Objectives:
- Understanding Ethical Theories in AI: Provide a brief overview of the key ethical principles relevant to AI, including fairness, transparency, accountability, and privacy.
- Identifying Risks in AI Deployment: Analyze the specific legal, ethical, and professional risks associated with AI in the legal sector and justice system.
- User-Centric Policy Design: Discuss the principles of user-centric policy design, emphasizing the importance of stakeholder engagement, contextual understanding, and iterative development.
- Case Studies and Best Practices: Present case studies of successful AI policy implementations in legal contexts, highlighting best practices and lessons learned.
Presentation Structure:
1. Introduction and Context (5 minutes):
- Overview of AI in the legal sector and justice system.
- Importance of balancing innovation with ethical considerations.
2. Ethical Theories and AI (10 minutes):
- Exploration of ethical considerations relevant to AI implementation.
- Discussion on how these principles apply to AI implementation in the legal and justice system.
3. Risk Analysis (10 minutes):
- Identification of specific risks associated with AI in legal contexts.
- Legal, ethical, and professional implications of AI deployment.
4. Principles of User-Centric Policy Design (10 minutes):
- Introduction to user-centric policy design methodologies.
- Importance of stakeholder engagement and iterative development.
- Strategies for balancing innovation with risk management.
5. Case Studies and Best Practices (10 minutes):
- Presentation of real-world examples and case studies.
- Analysis of successful AI policy implementations in legal contexts.
- Key takeaways and lessons learned.
6. Discussion and Q&A (20 minutes)
Target Audience:
- Legal professionals and practitioners.
- Policymakers and regulators in the legal sector.
- AI researchers and developers.
- Academics and students in law and technology fields.
Nye Thomas, Law Commission of Ontario
Summary
AI systems offer significant potential benefits to governments, the private sector, and the public. Many believe that these tools can “crack the code of mass adjudication”, improve access to justice, improve public and private services, and reduce backlogs.
At the same time, public and private sector use of AI is controversial. There are many examples of AI systems that have proven to be biased, illegal, secretive, or ineffective.
In response to these risks, governments around the world are adopting “Trustworthy AI” frameworks to assure the public that AI development and use will be transparent, legal, and beneficial.
“Trustworthy AI” legislation and policies are advancing quickly, but inconsistently and incompletely.
This presentation will consider whether current approaches to AI regulation are materially advancing access to justice and human rights for low-income and vulnerable communities. The presentation will highlight themes or issues that access to justice advocates should consider when evaluating AI regulatory proposals in their respective jurisdictions.
Background
Achieving access to justice and “Trustworthy AI” depends on a complex series of policy, legal and operational questions that go far beyond public statements of principle.
Governments have adopted very different approaches to AI regulation. Canada was a pioneer in AI regulation and adopted one of the world’s first government AI algorithmic impact assessments.
The EU’s recently passed Artificial Intelligence Act and Canada’s proposed federal Artificial Intelligence and Data Act (AIDA) are examples of national or “horizontal” AI regulation. In contrast, other jurisdictions have enacted targeted or sectoral legislation or policies to govern AI in specific locations or contexts. For example, many US federal, state, and local laws and policies target specific AI applications or technologies, such as New York City’s legislation governing employment AI systems and the more than 20 US jurisdictions that have banned or restricted the use of police facial recognition systems.
The different approaches to AI regulation can have significant implications for access to justice and human rights for low-income and vulnerable communities. For example, there is a wide divergence in whether, or how, enforcement and remedies are addressed in AI governance frameworks.
The presentation will consider AI regulation from the perspective of important access to justice issues and principles, including:
- What is AI’s potential to advance or hinder access to justice?
- What are examples of AI systems affecting access to justice?
- What are the emerging themes and gaps of AI regulatory models?
- Are AI regulations advancing access to justice and human rights for low-income and vulnerable communities?
- What are the key regulatory issues and choices?
The panel will be moderated by Nye Thomas, LCO Executive Director. Panelists will include LCO Policy Counsel Susie Lindsay (author of the LCO’s 2022 Accountable AI report) and two external experts.
David Wiseman, University of Ottawa
Julie Mathews, Community Legal Education Ontario
This presentation will explain the recommendations arising from a research report examining traditional and alternative regulatory approaches to the development of smart legal forms for priority justice-seekers.
We use the term “smart legal forms” to refer to dynamic digital tools that enable people to complete legal documents online, with or without assistance. These customizable software tools enable the general public to generate legal (or law-related) forms and documents for legal actions and transactions, including for legal dispute resolution processes. We focus on the goal of increasing the availability of smart legal forms in areas of civil justice for people who frequently experience law-related problems that affect their basic human needs. We refer to these people as “priority justice-seekers” and regard improvements in the extent to which they can access justice as advancements in “community justice.”
After surveying research on the current landscape of technological tools offered for addressing law-related needs, we identify three key elements of smart legal forms that are effective for priority justice-seekers. Such forms: are accessible and actionable; embody ethical principles and appropriate protections; and are responsive and relevant to the needs and context of priority justice-seekers.
Providers of smart legal forms risk being regarded by legal regulators as providers of legal services and, in turn, face regulatory intervention on the basis of engaging in the unauthorized practice of law. This potential for regulatory intervention is often justified by the need to protect the public from harm. Yet legal services regulators generally also recognize a need to enable the development of technological tools to improve access to justice. This has led to the introduction of a spectrum of regulatory and non-regulatory approaches aimed at fostering this type of activity. An increasingly popular approach has been the creation of regulatory sandboxes. Examining the outcomes of the ongoing operation of regulatory sandboxes in Canadian and comparative jurisdictions, we conclude that this regulatory approach is failing to foster the development of smart legal forms for priority justice-seekers.
On the basis of our research into traditional and alternative regulatory approaches in this area, we ultimately propose an approach that reflects action on regulatory as well as on non-regulatory fronts: on the former, we suggest that the regulatory treatment of smart legal forms be adapted and targeted to the particular situation (including the need to support access and relative level of risk) and, on the latter, we propose proactive support to increase the availability of effective smart legal forms for priority justice-seekers.
Zach Zarnow, National Center for State Courts
Andy Wirkus, National Center for State Courts
Consumer debt cases make up a disproportionate percentage of filings in civil courts across the United States. Besides being a high-volume issue, consumer debt also has a large disparity in represented parties, a high number of default judgments, and extreme consequences for consumers. In this session, we will apply a practical lens to the ways courts can use technology tools to improve outcomes, increase access to justice, and ensure compliance with procedural requirements. We will explain our guiding principles of access, fairness, and accuracy in the development of a suite of tools designed to reduce negative outcomes in consumer debt cases.
The suite of tools we will highlight includes a Consumer Debt Collection Information bot (CODI), used in the Philadelphia Municipal Court, that provides custom information to pro se litigants; a debt reform checklist designed to help guide courts’ decisions in implementing regulatory reforms; and a first look at a filing screening tool that will be used to review initial filings for compliance with procedural protections already in place in most jurisdictions. We will discuss how we built these tools while sharing the lessons learned and key takeaways that apply to any legal tech tool build.
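To make the screening concept concrete, here is a minimal, hypothetical sketch of rule-based filing review. The field names and rules are invented for illustration and do not reflect the actual logic of the NCSC tool or any jurisdiction’s requirements.

```python
# Fields a hypothetical jurisdiction might require in a debt collection
# complaint; purely illustrative.
REQUIRED_FIELDS = ["account_number", "original_creditor", "amount_claimed"]


def screen_filing(filing: dict) -> list[str]:
    """Return a list of compliance problems found in an initial filing."""
    # Flag any required field that is absent or empty.
    problems = [
        f"missing: {field}"
        for field in REQUIRED_FIELDS
        if not filing.get(field)
    ]
    # Example procedural rule: a stated claim amount must be positive.
    amount = filing.get("amount_claimed")
    if isinstance(amount, (int, float)) and amount <= 0:
        problems.append("amount_claimed must be positive")
    return problems


filing = {"account_number": "1234", "amount_claimed": 500}
print(screen_filing(filing))  # -> ['missing: original_creditor']
```

A deterministic rule set like this is auditable line by line, which is one reason non-generative tools can be attractive where procedural fairness is at stake.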
This session will be practical and tangible, but it will also use those concrete examples to explore this crucial juncture in creating frameworks that automate processes without prejudicing the administration of justice. The session will examine relevant considerations for strategizing potential uses of AI and other technology, taking the principles of ethical use of generative AI and expanding them to include considerations for non-generative AI tools.