How can AI technologies be leveraged to improve legal aid services for underserved communities?
Kalia Walker: AI can help streamline legal services when used appropriately. That can limit costs so that underserved communities have greater access to legal representation.
Dan Linna: In our Northwestern Innovation Lab, interdisciplinary teams of law and computer science students work with organizations to develop or improve technology tools for legal services. For example, we have worked with the Law Center for Better Housing (LCBH) and Conor Malloy to improve the helpdesk tools that LCBH professionals use to respond to inquiries from individuals. We trained an AI system on past questions and responses to suggest responses to new inquiries received by the helpdesk.
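As an illustration of the pattern Linna describes, here is a minimal sketch of a retrieval-based suggestion tool: match a new inquiry against past questions and surface the answer that was given before. The questions, answers and library choice are invented for this example; this is not LCBH's actual system.

```python
# Illustrative sketch only: suggest a past answer for a new helpdesk
# inquiry by retrieving the most similar past question.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

past_questions = [
    "My landlord will not return my security deposit.",
    "I received an eviction notice. What should I do?",
]
past_answers = [
    "Deposit return deadlines vary; start with a written demand...",
    "Do not ignore the notice; the deadline to respond is short...",
]

vectorizer = TfidfVectorizer()
question_matrix = vectorizer.fit_transform(past_questions)

def suggest_response(inquiry: str) -> str:
    """Return the past answer whose question best matches the new inquiry."""
    query = vectorizer.transform([inquiry])
    similarity = cosine_similarity(query, question_matrix)[0]
    return past_answers[similarity.argmax()]

print(suggest_response("How do I get my security deposit back?"))
```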
Our Innovation Lab has also worked with LCBH to improve Rentervention, a chatbot that helps tenants with common problems. Rentervention uses a combination of AI tools, including a large language model to match a tenant's description of their problem to a specific legal problem-type category; rules-driven AI to walk the tenant through gathering information and providing a solution; and generative AI to improve communication between the tenant and the chatbot. We found that using empathetic language can improve a conversational AI system's effectiveness and perceived helpfulness and reduce the effort required to use it.
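The triage step in that design, where a language model only picks the problem category and deterministic rules handle everything after, might look something like this sketch. The model name, categories and follow-up questions are assumptions for illustration, not Rentervention's code.

```python
# Hypothetical sketch of LLM triage plus rules-driven follow-up.
from openai import OpenAI  # assumes the OpenAI SDK and a configured API key

client = OpenAI()
CATEGORIES = ["eviction", "repairs", "security deposit", "lockout", "other"]

def classify_problem(description: str) -> str:
    """Have the model map a free-text description to exactly one category."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name; substitute your own
        messages=[{
            "role": "user",
            "content": f"Classify this tenant's problem as exactly one of: "
                       f"{', '.join(CATEGORIES)}. Reply with the category only.\n\n"
                       f"{description}",
        }],
    )
    label = response.choices[0].message.content.strip().lower()
    return label if label in CATEGORIES else "other"

# Deterministic, rules-driven next steps keyed by category.
NEXT_QUESTION = {
    "eviction": "What date appears on the notice you received?",
    "repairs": "Have you told your landlord about the problem in writing?",
    "security deposit": "When did you move out?",
    "lockout": "Did your landlord change the locks without a court order?",
    "other": "Can you tell me a little more about what happened?",
}

category = classify_problem("My landlord taped a five-day notice to my door.")
print(NEXT_QUESTION[category])
```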
Jim Calloway: AI-powered kiosks at courthouses (and court websites) should be able to answer questions from those unfamiliar with the system. Initially, many of these questions will be as simple as “What room am I supposed to go to?” or “I was told to go to the court clerk. Where is that?” But soon they should respond to queries with basic information about the various matters handled in court.
Gyi Tsakalakis: AI can provide basic information and guidance on common issues. Chatbots and virtual assistants can answer frequently asked questions, offering initial guidance and resources. AI can also help in automating document creation and analysis of large volumes of documents to identify relevant information. AI-enabled platforms can provide educational resources and legal awareness to underserved communities, helping them understand their legal rights and the legal processes. AI can assist in translation services and accessibility features for individuals with disabilities, ensuring that legal aid services are inclusive and accessible to all.
Quinten Steenhuis: Legal aid programs are understaffed and overworked. They have strict client income limits, and typically they don't have to focus on the billable hour. That means legal aid programs have long been innovators in using technology to deliver legal help, from early legal help websites to programs experimenting with AI today.
AI can help deliver legal help in several ways: tagging questions by the legal topic they represent, which makes it easier to connect people who need help with the programs that can help them, and helping people sort through large amounts of information to get answers to their questions.
For example, JustFix.nyc partnered with Cornell Law School students, Wilson Sonsini's SixFifty and Josef Q to build an AI-powered, interactive knowledge base of housing questions and answers. Unlike a standard website FAQ, their app uses generative AI to let website visitors phrase questions in their own words and get answers to their specific questions, without requiring staff to write all the answers in advance. The Josef Q approach is safer than pointing people to ChatGPT because the answers are limited to a specific knowledge base.
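That "limited to a specific knowledge base" safeguard generally means grounding the model in curated text and instructing it to refuse anything outside it. A toy version of the pattern (not Josef Q's actual implementation; the model name and knowledge-base content are invented) might look like this:

```python
# Toy "closed knowledge base" Q&A: the model may only answer from the
# curated reference text supplied in the prompt.
from openai import OpenAI

client = OpenAI()

KNOWLEDGE_BASE = [
    "Tenants should request repairs in writing and keep a copy.",
    "Security deposit rules depend on your city and your lease.",
]

def grounded_answer(question: str) -> str:
    # Real systems retrieve only the most relevant entries; for brevity
    # this sketch passes the whole (tiny) knowledge base every time.
    reference = "\n".join(KNOWLEDGE_BASE)
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model name
        messages=[{
            "role": "user",
            "content": "Answer using ONLY the reference text below. If the "
                       "answer is not in it, say you do not know.\n\n"
                       f"REFERENCE:\n{reference}\n\nQUESTION: {question}",
        }],
    )
    return response.choices[0].message.content

print(grounded_answer("How should I ask my landlord for repairs?"))
```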
What are the most promising AI innovations currently being developed or used to enhance access to justice?
Quinten Steenhuis: We're seeing a lot of creativity with new, more powerful chatbots. Rentervention in Illinois is using AI to power its tool that helps tenants solve their problems. AI makes the chatbot more flexible, so that tenants don't need to choose from a canned list of choices to get the services they need. It can even generate letters for tenants to their landlords in a range of tones, from accommodating to firm to aggressive.
Generative AI can help with online dispute resolution. An interesting experiment called LLMediator came out of the University of Montreal. It is a GPT-3.5-powered mediator for online negotiations. It can flag when a human mediator should step in to calm the discussion down, or even directly offer solutions to the two parties. In low-stakes situations, AI-powered mediators could certainly help parties reach agreements that they wouldn't be able to reach on their own.
AI can provide endlessly varied simulations that help train people for emotionally challenging, real-world situations. Imagine a self-represented litigant spending an hour rehearsing their argument in a simulated courtroom, or practicing for the questions that a judge might ask them. Generative AI can provide neutral feedback on how the litigant does and suggestions to improve or focus their arguments.
Can AI effectively reduce the cost of legal services, and if so, how?
Katie Dilks: AI, when effectively leveraged, has the potential to dramatically reduce the cost of repetitive, time-intensive tasks such as drafting and client communications. Obviously, this work needs to be adequately managed to ensure accuracy and quality, but the ability to draft a motion in moments rather than hours could fundamentally expand capacity for legal aid and pro bono attorneys.
Jim Calloway: I agree with Katie: automation should reduce the costs of traditional legal services by reducing the attorney and staff time it takes to accomplish certain tasks. The introduction of AI tools should make creating templates and other automation tools easy and affordable.
Dan Linna: AI systems can be designed to empower humans to solve specific legal problems. Just as AI can empower lawyers to be more efficient and effective when solving legal problems, AI can empower people who do not have legal training to effectively solve a wide range of specific legal problems, without the need for a lawyer. These systems will be carefully designed, developed, deployed and validated for specific legal problems, and will include “off-ramps” that guide people to a lawyer if they have a nonstandard problem or other special circumstances. Many such systems are already available, such as Rentervention, discussed above. There are abundant opportunities for legal aid organizations, courts, lawyers and legal technology companies to develop technology tools that scale and provide legal services to individuals for free or at reduced costs.
Quinten Steenhuis: Generative AI can replace some human-provided services, and it can lower the cost of almost all that remains. When humans are totally out of the loop, this can open new markets that could never be served by the old one-lawyer, one-client model.
It can help lawyers do their work faster. While ChatGPT isn't a search engine or a legal database, it can summarize new legal decisions in seconds; generate a dozen ideas for compelling themes and legal arguments for a person to write a winning closing argument or brief, tailored to the facts of their actual case; act as an editor for legal writing, offering suggestions to improve tone, strengthen arguments or add missing context; turn lists or tables of information into prose that can be tweaked and edited for a motion; and turn an affidavit or a deposition into a draft of a cross examination. The key to these uses is that the author of the piece of writing needs to put at least as much information, if not more, into ChatGPT as they get out to prevent it from making things up or generating a boring piece of writing. But when used this way, ChatGPT is a safe and effective writing assistant that is available 24 hours a day.
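One concrete version of that "put more in than you get out" principle is an editing prompt that hands the model the full draft plus the case facts and forbids it from adding anything new. The file names below are placeholders; the assembled prompt can be pasted into ChatGPT or sent through an API.

```python
# Build an editing prompt that gives the model more material than it
# returns, so it has little room to invent. File names are placeholders.
draft = open("closing_argument_draft.txt").read()
facts = open("case_facts.txt").read()

prompt = (
    "Act as an editor for this piece of legal writing. Using ONLY the "
    "facts provided, suggest edits that improve the tone, strengthen the "
    "arguments or add missing context. Do not introduce new facts.\n\n"
    f"FACTS:\n{facts}\n\nDRAFT:\n{draft}"
)
print(prompt)
```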
How does AI handle the complexity and nuances of legal cases compared to human legal professionals?
Dan Linna: To answer these questions, we need to become evidence-based and more rigorous in our evaluation of the delivery of legal services. Ask 10 lawyers to draft a contract or a motion for summary judgment, and often you’ll see a wide range of very different approaches. Why is that? Shouldn’t we expect to see more consistency in legal work? What is it that lawyers do that produces the best outcomes? How might we measure the quality of legal work and the quality of outcomes? Even the most complex legal work consists of many tasks, including some that are not the practice of law, some that are very basic tasks, and some that require skilled expert judgment. Once we deconstruct legal work into discrete tasks, we will find that many tasks can be automated or augmented with AI. A key consideration will be placing humans meaningfully in the loop of AI systems and empowering them to exercise judgment.
Jim Calloway: The brain of a trained and experienced lawyer still beats AI. That may not always be the case. But lawyers need to be correct 100 percent of the time. Without suggesting that those without resources deserve less-than-adequate representation, we must appreciate that a complete AI solution that delivers documents not reviewed by a human will possibly have some errors. How does society deal with that?
Quinten Steenhuis: We need to remember that current AI tools, like ChatGPT, are not actually intelligent. They are sophisticated prediction machines. But they are pretty good at generating human-seeming content, and they can do so in a way that mirrors an average person's abilities.
One interesting strategy when using ChatGPT is to use Socratic questioning. We lawyers are great at that! A good prompt isn't just asking one general question. You can get the best results with detailed and specific follow-up questions.
ChatGPT doesn't know the law. And if you ask it to make a legal conclusion from a fact pattern, it's going to give you a pretty general answer, just like a person off the street. But what's interesting is that if you set up the prompt the right way, you can get very good results. For example, you could give it a copy of the law, ask it to extract the elements of the legal claim, give it your full fact pattern and ask it to explain its decision step by step, linking the two. It can do that in seconds, and the result will be pretty good. Often the quality of your results depends on the quality of your inputs. Computer scientists have an old expression: garbage in, garbage out (GIGO). That's newly relevant with generative AI.
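Chained together, that setup might look like the following sketch. The file names and model name are placeholders, and the output is an illustration, not legal advice.

```python
# Sketch of the element-by-element prompt chain described above:
# supply the statute, extract the elements, then apply the facts.
from openai import OpenAI

client = OpenAI()
history = []  # keep the conversation so follow-ups build on earlier answers

def ask(prompt: str) -> str:
    history.append({"role": "user", "content": prompt})
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model name
        messages=history,
    )
    answer = response.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    return answer

statute = open("statute.txt").read()  # full text of the law (placeholder)
facts = open("facts.txt").read()      # the fact pattern (placeholder)

ask(f"Here is a statute:\n\n{statute}\n\nList the elements of a claim under it.")
print(ask(f"Here are the facts:\n\n{facts}\n\nFor each element, explain step "
          "by step whether these facts satisfy it."))
```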
What ethical considerations arise when using AI in legal decision making, and how are they being addressed?
Kalia Walker: AI is a tool that attorneys can use, but it cannot replace their obligations as client advocates and advisors. I think many ethical issues arise when attorneys want AI to do their job for them, rather than assist them in doing their job. A very advanced software cannot take the place of an attorney’s training, judgment and ethical obligations.
Katie Dilks: If AI is being used in decision making, we must have processes in place to counterbalance systemic bias from real-world training data. If we move from a people-driven system where youth of color are incarcerated at astonishingly higher rates than their white counterparts to an algorithm-driven system with the same results, we've merely shifted the imprimatur of legitimacy for a biased system from a court to a computer.
Dan Linna: AI ethical considerations cannot be an afterthought. At the design, development, deployment and validation stages of AI systems, we must attend to bias, fairness, accuracy and other principles. Before deploying AI systems, developers and users must evaluate the accuracy of systems and conduct impact assessments, identifying the ways in which AI systems could fail and the harms that could be caused. Meanwhile, the baseline is not perfection, as our current systems are neither perfect nor free from bias. AI can help us improve outcomes and reduce bias in existing systems. But we must address the risk that AI will amplify bias, discrimination and bad outcomes if we are not accountable and responsible in our design, development, deployment and validation of AI systems.
Quinten Steenhuis: AI tools were trained on the internet at large. While they've had plenty of feedback and moderation, we all know the problems of bias that exist across the internet. As Katie pointed out, there's a risk when using AI tools that we replicate and scale up those biases. The most likely, most statistically common result on the internet isn't always going to be the fairest or best result. And in 2030, 2040 or 2050, maybe we won't want to be stuck with the consensus of the internet circa 2022. We'll need to apply deliberate human judgment to get results that reflect our values as a society.
Some of the worst examples of AI bias were seen in the prior wave of AI-assisted decision support tools. Consider sentencing algorithms like COMPAS, which claim to predict someone's dangerousness or likelihood to reoffend but arguably just measure how poor, Black or literate they are. Using AI for these purposes is always going to be risky. Maybe we should just be saying no to these use cases. AI can measure, summarize and classify things quickly. Maybe we should let the human apply the judgment after the AI does the classification.
In what ways can AI contribute to increasing the efficiency and accuracy of legal research?
Kalia Walker: Many attorneys bill by the hour, and AI often summarizes a wealth of information very quickly. An attorney can (and must!) then fact-check any research findings, but the upfront savings in time can help reduce the cost of legal services.
Quinten Steenhuis: My earlier responses touch on this. Generative AI is very good at summarizing and classifying. If you ask, “Does this case raise an issue of standing?” it will be very accurate. If you ask it to turn a 100-page legal decision into two paragraphs, it can help with that. These capabilities can help you build a better legal search engine. ChatGPT itself is not a search engine. It doesn't “know” what information is true; it predicts and generates plausible-sounding text in response to your question.
AI could also be used to help you spot gaps in your legal research strategy. If you've tried a dozen queries and aren't getting the results you want, AI could suggest a dozen more. Maybe that feature will be built in, much as Google already subtly rewrites our search queries to match them to similar ones.
Jim Calloway: We have watched digital legal research tools replace books because searching databases for terms is more efficient than doing research with books. As legal information providers incorporate AI in their tools, we will see a significant impact. Despite the well-publicized horror stories about lawyers who filed briefs citing hallucinated cases without bothering to read the case law they were citing, using AI for legal research will be a net positive for lawyers. The challenge ahead is whether these improved research tools will be accessible to small firm lawyers and the public, or whether pricing will limit the best AI tools to large law firms and corporate counsel.
How can AI be used to democratize access to legal information and resources?
Kalia Walker: Because AI summarizes information in a more readable format than a typical internet search, many more people may better understand their rights and responsibilities when a legal dispute arises, even before speaking with an attorney.
Katie Dilks: AI has immense potential to be used as a translator, both for plain language and for languages other than English, increasing access to legal knowledge and concepts. However, this relies on the accuracy of the underlying information the AI has been trained on, a risk in the current realm of ChatGPT and other broadly trained large language models. It will be critical that, as AI systems are built on accurate, limited information sources (for example, one state's up-to-date statutes and court rules), those systems are not cost-prohibitive for nonprofits and public-serving organizations. I can easily see a future where large law firms have access to highly accurate, tailored legal AI systems but most others cannot afford the cost or are not offered access at all.
Jim Calloway: Every business shares information to promote its products or services. Courts and other law-related public institutions should likewise provide simple, plain-English explanations of their operations and services. AI can improve this because it can be trained to understand what people are asking, no matter how awkwardly they phrase it.
Quinten Steenhuis: Generative AI can be a helpful front end to large legal help websites, giving you an answer that responds directly to your question. It can power question-and-answer websites that handle endless variations in how litigants phrase their questions.
As Jim and Katie pointed out, generative AI has a huge role in translation. While Google Translate and other machine translation tools still have a high rate of errors, GPT-4 has shown a dramatic quality improvement. Suddenly, we can get cost-effective translation into dozens of languages. Humans should still review this content, but at a much lower cost. There will soon be no excuse for having legal help content in just one language.
Plain language experts know how to make legal information easy to read, but it still takes a tremendous amount of human effort to make simple, easy-to-read versions of legal documents. GPT-4 can reduce the workload for the experts and make improving the plain language of more documents possible for those who care but aren't trained.
What role does AI have in predicting legal outcomes and how can this assist in access to justice?
Dan Linna: Courts can use Online Dispute Resolution (ODR) systems with AI to help guide litigants to a resolution of their dispute without adjudication. Existing ODR systems mostly rely on rules-based AI, which can be quite effective. Conversational AI systems augmented with data analytics and outcome prediction models can better connect with users to ensure that they understand their rights and obligations under the law and how a dispute is likely to be resolved if adjudicated. Even before going to court, AI systems can help people better understand the law, help them preserve their rights, and help them understand whether they should proceed to court to enforce their rights.
Quinten Steenhuis: There is a wide range of AI approaches to predicting legal outcomes. They can be a useful tool within limits. If potential clients or pro se litigants can turn to a digital tool to understand the risk of going to trial, that can be helpful. If the prediction is 90 percent or 10 percent, that's a very useful signal. It can save litigants from expensive mistakes. When the AI tool is less confident, maybe that client should call a lawyer, but the tool might still help in easy cases.
Generative AI can translate statutes into plain language or computer code. It can walk through the elements of a claim or defense and identify aspects of a fact pattern that match those elements. This kind of use needs either purpose-built tools constructed on top of generative AI or very sophisticated litigants.
Older machine learning approaches that train a computer on examples of past decisions work in a different way. Given this fact pattern, how has a judge or tribunal ruled in the past? These tools work the same way each time, so we can measure their effectiveness and reliability. Some of them work very well. They need a lot of examples to perform well. It can be expensive and time consuming to find the examples and train the system.
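In sketch form, that older approach is ordinary supervised learning over hand-coded case features. Everything below, the features, labels and numbers, is invented for illustration; real systems need far more examples and carefully validated features.

```python
# Toy supervised model trained on past decisions.
from sklearn.linear_model import LogisticRegression

# Each row encodes one past case:
# [months behind on rent, got written notice (0/1), had counsel (0/1)]
X = [
    [0, 1, 1],
    [5, 1, 0],
    [1, 0, 1],
    [7, 0, 0],
    [2, 1, 1],
    [6, 1, 0],
]
y = [1, 0, 1, 0, 1, 0]  # 1 = tenant prevailed in that past case

model = LogisticRegression().fit(X, y)

new_case = [[3, 1, 1]]
probability = model.predict_proba(new_case)[0][1]
print(f"Estimated chance the tenant prevails: {probability:.0%}")
```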
Formal logic or computational law approaches try to break the legal rule into logical syllogisms that can be reasoned over, like mathematical proofs. But judges are human decision makers. These systems can be expensive and hard to build, and they can miss nuances in the true-to-life factors that determine outcomes.
Are there successful case studies or pilot projects where AI has closed the access to justice gap?
Quinten Steenhuis: Let's not forget traditional guided interviews. Law Help Interactive helps create a million pleadings for self-represented litigants each year. It does that with HotDocs and A2J Author, older systems that require you to build the forms by hand. The LIT Lab has partnered with eight different jurisdictions to build guided interviews in a more modern system, and we're integrating AI in those approaches. In Massachusetts alone, the Lab's tools have helped tens of thousands of people stay in their homes, get restraining orders, get a fee waiver or beat debt collectors' claims. These systems don't use statistical models; a person came up with the rules, and they work the same way each time, as in the toy sketch below. This approach still uses computer automation to save humans effort.
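A guided interview of that kind is just deterministic branching logic. In this toy version, the eligibility formula is made up for illustration and is not any jurisdiction's real standard.

```python
# Toy rules-driven guided interview: the same inputs always produce the
# same result, with no statistical model involved.
def fee_waiver_interview() -> None:
    income = float(input("Monthly household income? "))
    household_size = int(input("How many people in your household? "))
    # Invented threshold for illustration; real rules vary by jurisdiction.
    limit = 1500 + 550 * (household_size - 1)
    if income <= limit:
        print("You may qualify for a fee waiver. Generating the form...")
    else:
        print("You may not qualify. Ask the court clerk about other options.")

fee_waiver_interview()
```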
What future developments do you foresee in the intersection of AI and access to justice, and what are the potential challenges?
Kalia Walker: Attorneys will continue to become more adept at using AI, so I think more people from underserved communities will benefit as the technology drives greater efficiency. That said, a potential challenge is ensuring that those underserved communities receive high-quality legal services that reflect a thoughtful and ethical use of AI.
Quinten Steenhuis: AI has a lot of potential today in the hands of experienced users, and a lot of pitfalls and chances for users to get bad, boring or inaccurate responses. The next two years should be about building solutions in the AI ecosystem that make these tools foolproof for legal users. I expect we'll see more uses that tie generative AI together with a database, plus new techniques to reduce so-called “hallucinations,” so answers come from trusted knowledge bases.
We're going to see AI integration in every tool. It's already in Gmail and Google Docs and Word Online and a dozen other tools lawyers use every day. Expect to see more integrations; some good and some gimmicky. Soon enough, we won't notice the dozens of small ways that AI tools are saving us time.
General purpose AI is expensive. I hope we'll see some “smaller” models that are fine-tuned for specific purposes that are cheaper to use and more environmentally friendly. Some of the most popular tools today are like using a rocket engine to light a cigarette. I hope to see more “right-sized” tools as we get better at building with these solutions.