August 31, 2020

Measuring the Value and Risks of AI for Professional Services: A Webinar Recap

David Skoler

What do professionals, including lawyers, need to know about artificial intelligence (“AI”) when they use it to deliver services? How can these professionals evaluate both the value and risks of using AI? These were just some of the topics discussed during the June 4 Thomson Reuters webinar, “Artificial Intelligence in Professional Services: Weighing and Measuring the Value and Risks”. A video of the webinar is available on YouTube.

Program participants included Christina Ayiotis, Cyber Strategist; Nadia V. Gil, Global Solutions Advisor, Operations and Strategy at Google; Daniel W. Linna Jr., Director of Law and Technology Initiatives and Senior Lecturer at Northwestern Pritzker School of Law & McCormick School of Engineering; and Noah Waisberg, Co-Founder & CEO at Kira Systems.

The program illuminated the underutilization of technology in legal services and highlighted the legal areas that are most susceptible to technological disruption right now. The panelists also discussed the ABA’s Model Rules of Professional Conduct and how these rules can be read to prescribe a duty for lawyers to use relevant technology in providing legal services. At the same time, legal industry professionals must ensure that they continue to operate ethically and effectively by addressing any risks that arise when adopting technological tools.

Legal Services Areas Most Ripe for Technology Disruption

Gil (at 17:40) identified five areas ripe for disruption by technology, especially by AI tools, in the legal services industry:

1.     E-Discovery: Machine Learning to Find Relevant Documents

2.     Litigation Prediction: Algorithms to Forecast Lawsuit Outcomes

3.     Automated Document Drafting: Automated Creation of a Contract, Lease or Similar Legal Document, Often for a First Draft

4.     Due Diligence: Automated Document Analysis to Determine Relevant Contracts from a Large Quantity

5.     Legal Research: Advanced, Tailored and Predictive Search Results

These tools generally work in tandem with people, keeping humans in the loop. Nevertheless, adoption remains extremely low, especially when compared to other industries, Gil (at 27:05) opined.

What do Lawyers Need to Know to Effectively Use AI Tools?

According to the panelists, one of the reasons for low technology adoption in the legal industry, in addition to the billable hours model and the risk-averse nature of lawyers, is that many lawyers lack the technological competence needed to derive value from technology tools. The panelists and many legal experts disagree on how technology should be adopted and the level of expertise lawyers should have when using legal technology. During the webinar, Ayiotis (at 32:55) referenced the HBR article, A Radical Solution to Scale AI Technology, which emphasizes that “[t]o scale value in the AI era, the key is to think big and start small,” skip proofs of concept (“POCs”), and “pivot to piloting.” (Emphasis added). This pushes companies to go straight to launching applications in the “real world,” which provides much more feedback than a POC, which merely determines whether something works technically, without regard to customer experience, bias, privacy, etc.

Professor Linna emphasized that some caution is required, stating (at 24:55) that “lawyers need to understand a bit more how these tools work.” This is especially crucial when it comes to explaining to clients the benefits and potential risks that can come from these systems. When another panelist mentioned Google Translate as illustrative of the ease of using certain tools, Professor Linna (at 25:05) used this as an example to demonstrate the appropriate level of technological knowledge and competence needed to use even this tool. He explained that lawyers must understand Google Translate well enough to know that it may be appropriate for translating and generally understanding a news story in a foreign language, for example, but it would not be acceptable to write a contract in English then translate it into a new language and expect it to be an effective contract in that new language.

Waisberg emphasized the importance of context and said lawyers often do not need technical expertise to use technology tools competently. He (at 20:05) explained that “there is a popular misconception that because we talk so much about how machine learning systems are trained, it necessarily follows that you as an individual lawyer need to train an artificial intelligence system to get value out of this” (emphasis added). He illustrated his point by reference to “due diligence review” tools developed by his company, Kira, which “out of the box” (OOB) (meaning without any user training or AI knowledge) have been pre-trained to find more than 1,000 provisions from contracts. Waisberg said that OOB users comprise two-thirds of Kira’s customer base. The other one-third has trained Kira to extract over 20,000 provisions specific to a client, jurisdiction, or industry.

How Do the ABA’s Model Rules of Professional Conduct Govern Technology Adoption?

Waisberg (at 24:15) made the point that the “ethical rules are fairly clear” that lawyers have a duty to “actually … look at the relevant technology [and if there] is a technology to make an impact … you need to be using that.”

The ABA’s Model Rules of Professional Conduct contain several rules that relate, directly or indirectly, to the use of technology:

1.     Rule 1.1 (Competence): “A lawyer shall provide competent representation to a client. Competent representation requires the legal knowledge, skill, thoroughness and preparation reasonably necessary for the representation.”

2.     Rule 1.1, Comment 8 (Competence): “To maintain the requisite knowledge and skill, a lawyer should keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology….” (Emphasis added.) According to Robert Ambrogi, as of July 22, 2020, 38 states have adopted this duty of technology competence.

3.     Rule 1.5 (Fees): “A lawyer shall not make an agreement for, charge, or collect an unreasonable fee or an unreasonable amount for expenses.” (Emphasis added.)

Professor Linna (at 24:55) noted that if you couple the duty of competence with the duty to charge only a reasonable fee, clients can argue that it would be unreasonable to charge fees for a human to perform a task that can be done effectively and more efficiently with technology.

Nevertheless, the 2019 ABA Legal Technology Survey found that only 8% of firms currently use AI-based tools. Larger firms with 100 or more lawyers were at 26% usage, while firms under 100 had adoption rates below 5%. Arguably, smaller firms have as much to gain, if not more, when it comes to increasing their volume of work through AI technology. However, their adoption is likely low because they lack an IT department, Gil said (at 28:08).

A Big Component of Competence is Awareness of Potential Risks

The panelists also discussed the need for professionals adopting AI tools to understand potential risks, including, in particular:

1.     Algorithmic Bias: Professor Linna (at 9:35) and Ayiotis (at 29:02) highlighted the algorithmic bias research done by computer scientist Joy Buolamwini. Professor Linna cited Buolamwini’s “Gender Shades” study as an example. This study showed how specific facial recognition technologies exhibited bias across gender and race when evaluated. Professor Linna also talked about the data science process and how failures in defining the problem, collecting data, and cleaning data, as well as misalignment between the training data and the population where an AI tool is deployed, among other things, can lead to ineffectiveness and bias.

2.     Data Privacy: Cloud-based solutions are not necessarily riskier than on-premises storage, but it is, nonetheless, important for lawyers to understand how and where their client’s data is stored, and the risks involved with any storage medium. Gil (at 41:05) highlighted that with ransomware threats continuing to increase in complexity and the increasing prevalence of data privacy laws, it will be important for law firms to pay close attention to how client data is retrieved, used, shared, and maintained.

3.     Importance of Testing Systems: Professor Linna (at 2:40) emphasized the importance of testing AI systems for effectiveness, for example, by using standard information retrieval metrics, such as precision, recall, and F1 score. He also explained that professionals should understand the benefits and risks of favoring precision over recall, and vice versa.
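To make these metrics concrete, here is a minimal sketch using hypothetical numbers (not drawn from the webinar): precision measures how many of the documents a tool flags are actually relevant, recall measures how many of the relevant documents the tool finds, and F1 combines the two.

```python
def precision_recall_f1(true_positives, false_positives, false_negatives):
    """Compute standard information retrieval metrics."""
    precision = true_positives / (true_positives + false_positives)
    recall = true_positives / (true_positives + false_negatives)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Hypothetical document-review run: the tool flags 80 documents,
# 60 of which are truly relevant, and misses 40 relevant documents.
p, r, f1 = precision_recall_f1(true_positives=60,
                               false_positives=20,
                               false_negatives=40)
print(f"precision={p:.2f}, recall={r:.2f}, f1={f1:.2f}")
# precision=0.75, recall=0.60, f1=0.67
```

Favoring precision reduces irrelevant results at the risk of missing relevant documents; favoring recall does the opposite, which is the trade-off Professor Linna described.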

In sum, lawyers must understand that they have a duty to be technologically competent and utilize technology whenever there are opportunities to deliver legal services more effectively and efficiently, so as to not charge clients unreasonable fees. Fortunately, lawyers do not need to be experts in AI. Lawyers should, however, be working with technology experts to help them fulfill their duty of competence. The minimum requirements for competence will continue to increase, and lawyers will need to continue to adapt. But, as the webinar highlighted, rather than fear technology, lawyers should continuously look for opportunities to use technology to automate and augment legal-services delivery tasks, freeing lawyers to reallocate their efforts to other tasks that produce greater value for their clients.

David Skoler

Student, Northwestern Pritzker School of Law and Kellogg School of Management

David Skoler is a second-year student in the JD-MBA program at Northwestern Pritzker School of Law and Kellogg School of Management and a former Deloitte consultant. He has been a research assistant for Daniel W. Linna Jr. since May 2020. David is optimistic about the untapped opportunities to use technology, like artificial intelligence, for legal services.