October 02, 2020 Feature

The Future of Law Firms (and Lawyers) in the Age of Artificial Intelligence

By Anthony E. Davis

This article explores the future for lawyers and law firms in the light of the changes that Artificial Intelligence (“AI”) is already bringing to the universe of legal services.1 Part I briefly describes some of the ways AI is already in use in ordinary life – from facial recognition, through medical diagnosis, to translation services. Part II describes how AI is transforming what it means to provide legal services in six primary areas – litigation review; expertise automation; legal research; contract analytics; contract and litigation document generation; and predictive analytics. Part III explores the providers of these AI-driven legal services – often non-lawyer legal service providers – and how these providers are replacing at least some of what clients have traditionally sought from lawyers. Part III also discusses the implications of all these changes for the future role of lawyers individually, focusing on the services clients will still need lawyers to perform: judgment, empathy, creativity, and adaptability. In turn, this Part examines what these changes will mean for the size, shape, composition, and economic model of law firms, as well as their implications for legal education and lawyer training. Part IV identifies the principal legal, ethical, regulatory, and risk management issues raised by the use of AI in the provision of legal services. Finally, in Part V the article considers who the likely providers of AI-based services other than law firms will be – legal publishers; the major accounting firms; and venture capital-funded businesses.

Introduction

Will law firms as we have known them still exist when our grandchildren are adults? This essay is intended to initiate a discussion about the future for lawyers and law firms in the light of the extraordinary changes that artificial intelligence (“AI”) is already bringing to the universe of legal services. The essay is intended as a precursor of a fuller treatment of the topics raised; its focus is identifying the principal questions and issues that confront the profession as a result of the rise of AI.

The legal spend of corporations in the United States on traditional law firms remains flat, year after year, while the spend on in-house legal departments and on other legal service providers is exploding.2 More and more, both in-house counsel and these new legal service providers (and, to a limited extent, law firms) are using AI in ways that are transforming both what it means to provide legal advice and the ability of clients to assemble data, establish performance and payment metrics, and manage—and control—the outside law firms to which they have traditionally turned for advice and representation. In Part I, this article will briefly describe what AI is and the different ways it can be (and is being) applied to solve problems and provide solutions that benefit clients. Part II will review the different kinds of AI platforms that are already in use or in late-stage development to provide substantive legal assistance to clients in ways that previously were the domain of a large number of (principally younger) lawyers. Part III will consider how AI is likely to affect both the future role of lawyers and the likely structure and composition of law firms. This section also examines how these changes will in turn affect law firms’ hiring needs, the hiring models they will use, and the ways in which legal education, both pre- and post-admission, will have to change if law is to survive as a profession. Part IV will discuss the ethical, legal, regulatory, and risk management issues that face law firms today when they introduce or provide AI platforms or solutions to their clients. Finally, Part V will consider whether other service providers, both professional and otherwise (including but not limited to developers of AI solutions), have economic and marketplace advantages that will enable them to replace lawyers and law firms.

Part I – What is Artificial Intelligence Anyway?

What Does It Do?

It does (some of) the things we ask – e.g., Alexa.

It does facial recognition – e.g., Apple’s face ID software.

It translates – e.g., Google Translate.

It does medical diagnoses (very accurately) – see, e.g., its use to identify skin cancers.3

It wins games – e.g., Go.

How Does It Work?

AI is all about inferences of various kinds: logical, statistical, and a combination of both. And in case you were wondering, statistical inference is based on very high-level math (“… automatically computing (and adjusting) the step size for gradient-based neural net training algorithms [by] estimating and tracking the largest eigenvalue of the Hessian matrix of a neural net’s error surface.” Yann LeCun, 1993.)4 But it isn’t necessary to understand the underlying math to be able to code or to teach software to learn skills. “If a typical person can do a mental task with less than one second of thought, we can probably automate it using AI either now or in the near future.”5
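
To make the idea of a machine “learning” concrete, here is a deliberately tiny sketch, written for illustration only and not drawn from LeCun’s paper or any commercial legal product: a single parameter is nudged repeatedly in the direction that reduces prediction error, which is the same principle, vastly scaled up, behind the neural networks the quotation describes.

```python
# A minimal, illustrative sketch of gradient-based learning:
# fit y = w * x to noisy data by repeatedly nudging w in the
# direction that reduces the squared error. Real neural-network
# training (the subject of the LeCun quote) works on the same
# principle, only with millions of parameters.

data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 7.8)]  # (x, y) pairs, roughly y = 2x

w = 0.0            # the single "weight" the model learns
step_size = 0.01   # the step size the LeCun quote is about tuning automatically

for epoch in range(500):
    gradient = 0.0
    for x, y in data:
        error = w * x - y          # how far off the prediction is
        gradient += 2 * error * x  # derivative of squared error with respect to w
    w -= step_size * gradient / len(data)  # move w to reduce the error

print(f"learned w = {w:.2f}")  # converges to roughly 2.0
```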

Why Now?

Between 2000 and 2017 three critical things happened simultaneously in the technology universe: (1) computer processing power increased from 10³ to 10⁷; (2) the cost of data storage fell from $12.40 per GB to $0.004 per GB; and (3) there was astronomically large data growth. In other words, we are now in an age when it’s easy to harness computer power to engage in learning; it’s cheap, and there are massive amounts of data from which to learn.
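
As a back-of-the-envelope illustration of what that price drop means in practice (my own arithmetic, assuming the two per-gigabyte figures mark the beginning and end of the 2000–2017 window), storing a single terabyte went from a five-figure expense to pocket change:

```python
# Worked arithmetic using the storage-cost figures quoted above.
gb_per_tb = 1_000

cost_2000 = 12.40 * gb_per_tb   # $12.40 per GB  -> $12,400 per TB
cost_2017 = 0.004 * gb_per_tb   # $0.004 per GB  -> $4 per TB

print(f"1 TB in 2000: ${cost_2000:,.0f}")
print(f"1 TB in 2017: ${cost_2017:,.0f}")
print(f"roughly a {cost_2000 / cost_2017:,.0f}x reduction")
```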

Part II – What Can AI Do in Law, and What Is It Doing Now?

In general, there are six ways that AI is currently used in the legal arena: (1) e-discovery; (2) expertise automation; (3) legal research; (4) document management; (5) contract and litigation document analytics and generation; and (6) predictive analytics.

What Do They Do – And How Do They Affect the Delivery of Legal Services?

E-Discovery. This was the first use of AI in law and is quite well established. In essence, e-discovery is software that enables a vast number of documents to be surveyed and those relevant to the search criteria to be identified at a fraction of the cost, in a fraction of the time, and generally much more accurately than when the same survey is performed by teams of lawyers or paralegals looking at computer screens.
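
As a purely illustrative sketch, assuming nothing about any particular vendor’s product, the following shows the basic shape of that task: score each document against the search criteria and review the highest-scoring ones first. Commercial e-discovery platforms use far more sophisticated techniques, such as classifiers trained on lawyers’ sample review decisions, but the workflow is recognizably similar.

```python
# A deliberately simple illustration of the idea behind e-discovery
# review: score each document against the search criteria and surface
# the most relevant ones first.

documents = {
    "doc1.txt": "Email discussing the merger agreement and escrow terms.",
    "doc2.txt": "Cafeteria menu for the week of March 3.",
    "doc3.txt": "Draft indemnification clause for the merger escrow account.",
}

search_terms = {"merger", "escrow", "indemnification"}

def relevance(text: str) -> int:
    """Count how many search terms appear in the document."""
    words = {w.strip(".,").lower() for w in text.split()}
    return len(search_terms & words)

ranked = sorted(documents, key=lambda name: relevance(documents[name]), reverse=True)
for name in ranked:
    print(name, "score:", relevance(documents[name]))
```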

Expertise Automation. This is the commoditizing of legal knowledge that enables clients (as well as lawyers) to find answers to questions using software developed for particular areas of legal information that once would have required interaction with a lawyer. Examples include software developed to enable individuals to draft a will or to enable companies to give their employees access to answers to common questions in a specific area, such as employment law. For example, a factory manager in a particular jurisdiction can ask the software what rights to family leave a pregnant employee has, without the need to speak to a lawyer either in the company’s legal department or at its outside counsel. In addition, this is the realm of software increasingly developed to increase access to justice for individuals who do not have the resources to hire a lawyer. These tools include will drafting and even assistance to individuals in litigation contexts such as housing court or fighting traffic tickets.
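
A minimal sketch of the idea, with invented placeholder “rules” that are not legal advice and do not reflect any actual statute or product, might reduce a slice of legal knowledge to a lookup that a manager can query directly:

```python
# An illustration of "expertise automation" at its simplest: legal
# knowledge reduced to a lookup that a non-lawyer can query directly.
# The rules below are invented placeholders, not legal advice; a real
# product would encode far more nuance (eligibility thresholds,
# exceptions, jurisdiction-specific conditions).

family_leave_rules = {
    # jurisdiction -> plain-language answer a manager might see
    "State X": "Eligible employees may take up to 12 weeks of unpaid, job-protected leave.",
    "State Y": "Eligible employees may take up to 8 weeks of partially paid leave.",
}

def answer(jurisdiction: str) -> str:
    return family_leave_rules.get(
        jurisdiction,
        "No rule on file for this jurisdiction -- consult counsel.",
    )

print(answer("State X"))
print(answer("State Z"))  # falls back to a referral when the system has no answer
```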

Legal Research. Publishing companies, such as LexisNexis and Westlaw, have huge databases of information, including laws and regulations in multiple jurisdictions. They have developed software packages that enable clients (or lawyers) to do fast, accurate (and therefore cheap) research that would have taken individual lawyers much longer (at greater expense and, probably, with less accuracy) in earlier times. Some of them even offer services that answer questions using the software and provide the solutions directly to clients’ legal departments without an outside lawyer.

Document Management. Corporations often have thousands or tens of thousands of similar documents, such as contracts, that need to be managed for consistency and enforcement. An example was publicized by JP Morgan in 2017. The Bloomberg.com headline read: “JP Morgan Software Does in Seconds What Took Lawyers 360,000 Hours.”6 Readers who follow the Artificial Lawyer website7 will see almost daily announcements of new software packages designed to assist corporations in accomplishing similar outcomes.

Contract and Litigation Document Analytics and Generation. There are now numerous AI tools that help lawyers draft consistent, appropriate, and up-to-date documents both in the transactional and litigation spheres, by reference to huge databases of precedents. In addition, there is a growing group of AI providers that provide what are essentially do-it-yourself tool kits to law firms and corporations to create their own analytics programs customized to their specific needs.
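
At its simplest, the document-generation side of these tools resembles filling a vetted precedent with deal-specific values. The sketch below is illustrative only; the clause text and field names are invented, and real products draw on large precedent databases and layer analytics on top.

```python
# A bare-bones illustration of template-driven document generation:
# a clause assembled from a precedent "template" plus deal-specific
# values. The template text and field names are invented for
# illustration purposes.
from string import Template

indemnity_clause = Template(
    "The $indemnitor shall indemnify and hold harmless the $indemnitee "
    "against losses arising from $covered_matters, up to a cap of $cap."
)

print(indemnity_clause.substitute(
    indemnitor="Seller",
    indemnitee="Buyer",
    covered_matters="breaches of the representations in Article IV",
    cap="$5,000,000",
))
```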

Predictive Analytics. There are two main groups of AI tools that fit within this category. The first are the tools that will analyze all the decisions in a particular sphere, take as input the specific issues in a case, including factors like the individual judge assigned to hear it, and provide a prediction of likely outcomes. This is the group that the French have recently criminalized8 (a decision that seems about as likely to succeed in the long run as King Canute’s command, delivered from the seashore, that the tide turn back). The other kind of analytics, of which there are now at least four available in the marketplace, will review a given piece of legal research or legal submission to a court and identify the key relevant precedents and authorities that are missing from the research or submission. In the United States, one of these tools is available for free to judges, which raises the question of whether it is now legal malpractice for lawyers not to use such a tool before filing legal papers with the court.
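
To illustrate the first kind of tool in the crudest possible way (the judges, case types, and outcomes below are invented, and commercial products weigh far more factors with far richer models), a predictor can be as simple as the historical success rate for comparable past cases:

```python
# A toy illustration of outcome prediction from past decisions.
# We simply estimate the historical success rate for cases sharing
# the same (hypothetical) judge and case type, which is the crudest
# form of the idea.

past_cases = [
    # (judge, case_type, plaintiff_won) -- invented data for illustration
    ("Judge A", "contract", True),
    ("Judge A", "contract", False),
    ("Judge A", "contract", True),
    ("Judge A", "employment", False),
    ("Judge B", "contract", False),
]

def predicted_success(judge: str, case_type: str) -> float:
    matches = [won for (j, t, won) in past_cases if j == judge and t == case_type]
    if not matches:
        return float("nan")  # no comparable history to learn from
    return sum(matches) / len(matches)

print(f"Predicted chance of success: {predicted_success('Judge A', 'contract'):.0%}")
# -> 67% on this invented data; a real tool would weigh many more factors
```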

Part III – What Does the Advent of These AI Solutions Mean for the Economics and Structure of the Legal Profession and the Training of the Next Generations of Lawyers?

The two fundamental questions presented by the arrival of these new tools are: (1) who will provide the solutions to clients’ problems, and, as a follow-up to that question, what will the lawyer’s role be in providing those solutions; and (2) how will the answers to those questions affect the composition, structure, and economics of law firms?

Who Will Provide Clients with Legal Solutions?

Thomson Reuters and the consultant Adam Smith, Esq., predicted in 2018 that in the United States, the expenditures of corporations on legal solutions from both outside law firms and their internal legal departments will decline between 2017 and 2027, while expenditure on alternative legal service providers (i.e., principally the providers of AI-based solutions) will increase at least sevenfold (from an estimated $12 billion in 2017 to $85 billion in 2027).9 Another market research report, issued in 2019 by Zion Market Research, suggests that the global legal AI market will grow at a compound annual growth rate (CAGR) of 35.9% in revenue terms between 2019 and 2026.10
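
For readers who want to see how those two projections compare, a quick back-of-the-envelope calculation using only the figures quoted above (and assuming ten and seven compounding years, respectively) is set out below.

```python
# Back-of-the-envelope check of the growth figures quoted above.

# Thomson Reuters / Adam Smith, Esq. projection: $12B (2017) -> $85B (2027)
growth_multiple = 85 / 12                 # roughly sevenfold overall
implied_cagr = (85 / 12) ** (1 / 10) - 1  # compound annual rate over 10 years

# Zion Market Research projection: 35.9% CAGR between 2019 and 2026
# (7 compounding years assumed)
zion_multiple = 1.359 ** 7

print(f"TR projection: {growth_multiple:.1f}x overall, about {implied_cagr:.1%} per year")
print(f"Zion projection: about {zion_multiple:.1f}x overall at 35.9% per year")
```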

What does this mean for the role of individual lawyers in this new environment? Perhaps the best example of the most fundamental change is the reduced time it will take AI to provide clients with an answer – as in the report, noted earlier under the Document Management heading, that in 2017 JP Morgan developed and used software to do “in Seconds What Took Lawyers 360,000 Hours.”11 The drudge work traditionally done by new lawyers is already vanishing and will ultimately disappear almost entirely. That holds true in all of the realms in which AI provides solutions, as described above, not just document management. However, contrary to the purveyors of gloom and doom about the future existence of the legal profession, lawyers will still have vital roles to perform—but those roles will be different and more refined than in the past.

Lawyers of the future will provide four basic functions that AI cannot provide (and will not be able to provide unless and until “General Artificial Intelligence” – where computers are creative in ways analogous to the human mind, rather than producing the data-based solutions to specific problems that current AI technologies provide – becomes available at some point in the future). Those functions are: judgment, empathy, creativity, and adaptability. In other words, lawyers will provide the last mile of solution delivery—the application of those human functions to the output of the AI tools. To take a simple example: suppose a predictive analytics tool tells the user that in a certain case before an identified judge in a particular jurisdiction, the likelihood of a successful outcome is 60%. That prediction does not actually tell the lawyer or client what the client should do—that is, whether the client should proceed or not. That takes a lawyer exercising judgment to advise the client, informed by the lawyer’s understanding of the client’s needs (empathy), on which path to choose.

How Will This Change in Roles Affect the Composition, Structure, and Economics of Law Firms?

This question especially deserves extensive treatment. Here we can only give a brief summary of the likely implications of the changes being brought on (forced?) by the advent of AI solutions in legal services. The changes will be most focused in three areas: (1) the training and qualification of future generations of lawyers (and where that training will happen); (2) the composition and structure of law firms; and (3) the economics of law firms.

Lawyers of the future will not need to be able to “code,” but they will need an intimate and continuing understanding of how to identify and use AI solutions to meet their clients’ needs. In particular, given that there is currently no rating system for the adequacy or effectiveness of individual AI solutions (see the section below on the ethical implications of AI-derived legal service solutions), future lawyers will need to know how to assess the relative strengths and weaknesses of particular solutions. Notably, the London-based global firm Linklaters recently announced the creation of a special Legal Ops “track” for lawyers skilled in precisely this way.12 Further, O’Melveny & Myers recently announced that in order to be considered for a summer associate position (the usual initial track for prospective associates at the firm), applicants would have to participate in an AI-based online computer game designed, in essence, to test their teambuilding (i.e., empathy) skills.13

How and Where Will the Next Generations of Lawyers Be Trained?

Where and when will lawyers receive the training they will need to survive and prosper in this new world? A small number of law schools are developing and offering a variety of technology training programs. But for now, these programs reach only a minority of law students. There is a second component to the problem faced both by individual lawyers and their firms: the ability to provide the critical “last mile” component of legal services—judgment. Future lawyers will need to develop that skill over time, imposing a rigorous training requirement on law firms that will continue over an extended period of time.

It inevitably follows that law firms will need to change their composition and structure in two critical ways. First, they will not need to recruit armies of young lawyers to perform services that are no longer needed, with the expectation that not all will survive the first few drudge-filled years. In other words, the “old” model of hiring and then letting go large numbers of young lawyers cannot survive. Instead, because of the burdens of continuous training, firms will need to make much more discriminating hiring decisions, designed to identify the next generation of providers of the “last mile” services, with the particular goal that instead of being a fungible and replaceable commodity, these lawyers will actually join the firm and stay with it for the long haul. Second, firms will need to develop a serious, long-term program to train the next generation to become, over time, the judgment providers. In short, the hierarchical structure of the future law firm—whatever its size and geographic reach—will need to be flat, not a pyramid. While we make no predictions about which of the current generation of firms will be able to adapt in this way (and make the economic changes discussed below that are a necessary corollary of these structural changes), it seems likely that firms of the future will be much more focused on specific practice expertise, where they become the obvious providers of the “last mile” services in their chosen fields. The “general service law firm” model of the past was one in which firms sought to provide services in multiple practice areas requiring different levels of skill. But more and more of those services are already, or will very soon be, replaceable by the kinds of AI solutions described here, and that substitution will inevitably undermine the leverage model that has been the basis of many such firms’ profitability. It is hard to see how the general service law firm model can survive the changes we envision.

Perhaps most significant of all are the economic and financial changes that will be required for law firms to prosper as the “last mile” providers. First, the billable hour will necessarily be largely replaced as the primary billing model. It was built on the back of the leverage system that made law firms so profitable in the past, in which a large part of that profitability depended on employing associates to devote large numbers of hours that were then billed to clients at multiples of what those associates cost the firm. There is no way to make a profit out of charging for time spent when what the client wants and needs are the “last mile” services of judgment, empathy, creativity, and adaptability. Instead, firms will have no choice but to develop a billing and profitability model based on the value of the judgment, empathy, creativity, and adaptability that they bring to their clients in order to accomplish the clients’ objectives. This inevitable change in billing models is amplified by the structural changes described above—there will not be an army of associates whose time can be billed; rather, there will be a cadre of next-generation lawyers who are at least in part an expense component (because of their ongoing training requirements) rather than primary revenue and profit generators as in the past. In other words, the billing and overall profitability model for the successful law firm of the future will have to be defined in terms of the value clients place on those “last mile” services, having little or nothing to do with the time it takes the lawyers involved to provide them. To some extent these changes are already under way. As every firm that serves large corporate clients is only too well aware, clients are already pushing back with ever greater force against paying for the time charges of young lawyers, and alternative fee arrangements are increasingly the norm.

There is a second component to the economic changes that will have to enter the law firm universe. As long as law firms persist, for whatever set of reasons, in being modeled on a partnership rather than a corporate structure, most, if not all, of the profits necessarily get sucked out at the end of every financial year to pay the partners. In turn, this leaves very little reserve capital to invest in technology creation or in the other necessary structural changes described above. This problem is particularly acute in the United States, where the current, byzantine regulatory structure is explicitly designed to prevent law firms from entering into partnership with, or accepting investment from, non-lawyers. On the other hand, the model of alternative business structures in place in England and Wales, and the similar changes previously adopted in Australia and now being developed in Canada, show the way for the future economic prosperity of law firms as part of a wider array of “legal service providers.” Notably, Utah has begun the process of considering how to accomplish these objectives with its innovative encouragement of such ventures in regulated “sandboxes,” and California recently announced that it is close to adopting this approach.14

Part IV – The Legal, Ethical, Regulatory, and Risk Management Issues in the Provision of Legal Services Using AI

The foundational principles of the law governing lawyers and of professional responsibility, including competence, confidentiality, supervision, communication, and liability for errors, are implicated in the changes described in this article.

The Duty of Competence

The principal ethical obligation of lawyers when they are developing or assisting clients in identifying and using any AI solution is the duty of competence. In 2012 the American Bar Association (the “ABA”) explicitly included the obligation of “technological competence” within the general duty of competence under Rule 1.1 of its Model Rules of Professional Conduct (“Model Rules”). Many states have already followed suit with their own rules.15 Other jurisdictions, such as Australia, have also incorporated this principle into their rules.16 The meaning and implications of “technological competence” go beyond AI solutions,17 but they do have several specific implications for AI tools.

One issue about AI that is just beginning to be addressed by academic writers18 and by regulators of commerce generally is the problem of built-in, or implicit, bias and lack of transparency in the algorithms that underlie AI. For example, Michael Kearns and Aaron Roth report in Ethical Algorithm Design Should Guide Technology Regulation: “Nearly every week, a new report of algorithmic misbehavior emerges. Recent examples include an algorithm for targeting medical interventions that systematically led to inferior outcomes for black patients, a resume-screening tool that explicitly discounted resumes containing the word ‘women’ (as in ‘women’s chess club captain’), and a set of supposedly anonymized MRI scans that could be reverse-engineered to match to patient faces and names.” Kearns and Roth suggest “that more systematic, ongoing, and legal ways of auditing algorithms are needed. . . . It should be based on what we have come to call ethical algorithm design, which … begins with a precise understanding of what kinds of behaviors we want algorithms to avoid (so that we know what to audit for), and proceeds to design and deploy algorithms that avoid those behaviors (so that auditing does not simply become a game of whack-a-mole).”19
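
As a concrete, if simplified, illustration of what “auditing” an algorithm’s outputs can mean, the sketch below compares a hypothetical screening tool’s acceptance rates across two groups. This is only the detection half of the problem; Kearns and Roth’s point is that the avoidance of such disparities should be designed in from the start.

```python
# A minimal illustration of one kind of algorithmic audit: comparing
# an automated screening tool's acceptance rates across two groups.
# The decisions below are hypothetical; a large gap flags the tool for
# closer review, though it does not by itself prove unlawful bias.

decisions = [
    # (group, accepted)
    ("group_1", True), ("group_1", True), ("group_1", False), ("group_1", True),
    ("group_2", True), ("group_2", False), ("group_2", False), ("group_2", False),
]

def acceptance_rate(group: str) -> float:
    outcomes = [ok for (g, ok) in decisions if g == group]
    return sum(outcomes) / len(outcomes)

rate_1 = acceptance_rate("group_1")
rate_2 = acceptance_rate("group_2")
print(f"group_1: {rate_1:.0%}, group_2: {rate_2:.0%}, gap: {abs(rate_1 - rate_2):.0%}")
```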

Also problematic is the fact that there is no independent analysis of the efficacy of any given AI solution, so that neither lawyers nor clients can easily determine whether a particular product or service actually achieves the results it promises, or which of several is preferable for a given set of problems. Again, in the long run, it will be one of the tasks of the future lawyer to assist clients in making those determinations and in selecting the most appropriate solution for a given problem. At a minimum, lawyers will need to be able to identify and access the expertise to make those judgments if they do not have it themselves.

Legal Liability When an AI Solution Fails

In parallel to the ethical duty of competence are issues of legal liability in connection with the use of AI tools. Two particular liability issues are foremost. First, to what extent are lawyers, or will they be, liable for when and how they use, or fail to use, AI solutions to address client needs? One example explained above is whether a lawyer or law firm will be liable for malpractice if the judge in a matter accesses software that identifies governing or guiding principles or precedents that the lawyer failed to find or use. It does not seem a stretch to believe that liability should attach if the consequence of the lawyer’s failure to use that kind of tool is a bad outcome for the client and the client suffers injury as a result. Much more complex and difficult to resolve will be questions of the apportionment of liability between the creator of a defective software solution and the law firm that uses it for the client’s supposed benefit. Obviously, this will in part be decided by contract, but in the likely situation where the AI provider has a limitation of liability in its contract, what will happen to the lawyer’s liability, given that in some jurisdictions (including many states within the United States) the lawyer is not permitted to obtain an advance limitation of liability from the lawyer’s client? And when determining relative liability between the provider of the defective solution and the lawyer, should the court consider the steps the lawyer took to determine whether the solution was the appropriate one for use in the particular client’s matter?

The Duty of Confidentiality

A distinct and also vital ethical duty that lawyers will have to manage is to ensure that the use of AI solutions does not pose a risk to the general duty to preserve client confidences and to maintain and preserve the attorney-client privilege.20

The Duty to Supervise

Rules 5.1, 5.2, and 5.3 of the ABA Model Rules and the equivalent rules in place in every American state establish an express and explicit duty to supervise subordinates, including third-party providers, in connection with the delivery of legal services by the lawyer or law firm. (See also the Australian Solicitors Conduct Rules 2012, Rule 37, Supervision of Legal Services.) This supervisory duty assumes that lawyers are competent to select and oversee the proper use of AI solutions. Here again, this is not just a matter of the duty to supervise what goes on and what tools are used within the law firm, but also which third-party tools are used and how. Again, liability issues arise: did the law firm appropriately select the vendor, and did the lawyers manage the use of the solution?

The Duty to Communicate

In addition to the other duties already identified, lawyers have an explicit duty to communicate to their clients material matters in connection with the lawyers’ services. This duty is set out in ABA Model Rule 1.4. Other jurisdictions have adopted similar rules.21 Thus, not only must lawyers be competent in the use of AI, but they will also need to understand it sufficiently to explain to their clients the selection, use, and supervision of AI tools.

Regulatory Issues

This article is not the place to discuss the role of the regulators of the legal profession in overseeing legal services provided by non-lawyer alternative service providers. As we have seen, this is a rapidly expanding area of activity, to be contrasted with the likely contraction of traditional lawyer- and law firm-provided solutions. Some jurisdictions, such as England and Wales, and Australia, have already recognized this, and others, such as certain Canadian provinces,22 are in the process of adopting and implementing entity-based regulation. This approach, at least in part, enables the regulators to oversee all the providers of legal services, not just traditional law firms and/or lawyers. In the long run, this approach – regulating all providers in the market – is going to be critically important in establishing appropriate compliance standards for all providers of AI-based legal services.

As mentioned above, addressing the significant issues of bias and transparency in AI tools, and, in addition, advertising standards, will grow in importance as the use of AI itself grows. Clients in jurisdictions whose regulators are limited to overseeing only the services actually provided by lawyers are likely to suffer from the provision of AI solutions that are outside the scope and authority of the regulators to supervise. The significance and implications of this regulatory deficit or imbalance will become ever more pronounced as alternative legal service providers play an increasing role in providing clients with legal services without any direct involvement of lawyers.

Part V – Who Will Be the Providers of AI-Based Legal Services

As discussed above, the traditional partnership economic model of law firms is essentially antithetical to the use of capital for the development of innovative technological solutions, except for the very largest firms with the deepest pockets. Even there, it must be remembered that lawyers and law firms are not intrinsically risk takers. Of course, there are a number of law firms that have developed, or are in the process of developing, AI-based solutions for particular applications to benefit their clients where they have identified existing needs (so that the risk element is reduced). But the resources law firms have allocated, and will allocate, to technology solutions are minuscule in comparison with the billions of dollars invested by non-law-firm entrepreneurs and venture capitalists in finding AI solutions to problems. This is inevitable, given that law firms traditionally distribute their capital to the lawyers in the firm and do not retain earnings for investment; even if and when they do, those earnings are not used for risk investment on the venture capital model.

Three groups predominate in the development of AI legal solutions. One group, identified earlier, is the legal publishers, such as Thomson Reuters and Wolters Kluwer. A second group, always perceived as a direct threat by lawyers, is the major accounting firms. Both groups have two advantages over even the largest and most prestigious law firms: they are structured on a corporate rather than a partnership model, so that they can accumulate and invest capital, and they have an expressed interest in penetrating the global market for legal services. The third group is venture capital-supported entrepreneurs within the high-tech world. This group has been the source of the largest number and variety of AI solutions within all the categories described in this article. Interestingly, a consolidation among some of the early developers is already under way. Dozens of merger and acquisition deals were announced in 2019 among the early players in this universe, evidently in order to obtain improved penetration into the market for these services based on greater capitalization.23 Notably, law firms have not been completely absent from this marketplace, in that there have been several joint ventures between traditional law firms and AI solution providers in recent months.24 Nevertheless, their relative inability, and normal unwillingness, to raise and apply risk capital leaves law firms in last place as the originators of the solutions that are being developed or will be developed in the future. The future lies with those willing and able to place venture capital at risk. This is why the issues outlined in Part IV regarding the need for effective regulation of non-lawyer-provided legal services are of critical importance going forward.

The changes to the way in which legal services will be delivered in the future have been on view for the past decade. They are inevitable, and cumulative. The dreadful impact of the Covid-19 pandemic is likely to accelerate the pace of these changes exponentially. Clients, hard-pressed economically, are certain to put ever greater pressure on the leverage model of large law firm billing and profitability. Clients, under pressure to reduce internal as well as external costs, will turn to the developers and vendors of AI solutions to achieve outcomes more efficiently, faster, and more cheaply than law firms can deliver. Put another way, clients will increasingly seek to avoid as much as possible even the involvement of an increasingly irrelevant middleman: the law firm. As a result, on the other side of the fence, law firms will be under enormous pressure to reduce overhead costs, especially expensive real estate, which remote working during the pandemic has resoundingly demonstrated to be in great part superfluous. Traditional models of law firms are not well suited to enable them to deal with these developments. To survive this tidal wave of change, firms will need to be nimble, looking at new billing models based on new ways to demonstrate value. They will need to look at new management models and new hiring models. Candidates for future lawyers will need to be able to demonstrate emotional intelligence and a deep understanding of the ways in which technology can aid in achieving client outcomes. These traits and skills will be at least as valuable as, if not more valuable than, law school grades. The inflection point for the delivery of legal services is upon us. The faint of heart, who seek to cling to the old models, will likely not survive long.

Endnotes

1. This article was originally published in 16 Direito GV Law Rev., no. 1 (2020).

2. Projections provided to the author by Adam Smith, Esq., in conjunction with his work with Thomson Reuters.

3. Taylor Kubota, Deep learning algorithm does as well as dermatologists in identifying skin cancer, Stanford News (Jan. 25, 2017), https://news.stanford.edu/2017/01/25/artificial-intelligence-used-identify-skin-cancer/.

4. Yann LeCun, Patrice Simard & Barak Pearlmutter, Automatic learning rate maximization by on-line estimation of the Hessian’s eigenvectors 5 (NIPS 1992), available at https://papers.nips.cc/paper/589-automatic-learning-rate-maximization-by-on-line-estimation-of-the-hessians-eigenvectors.

5. Andrew Ng, What Artificial Intelligence Can and Can’t Do Right Now, Harv. Business Rev., Nov. 09, 2016, available at https://hbr.org/2016/11/what-artificial-intelligence-can-and-cant-do-right-now.

6. Hugh Son, JPMorgan Software Does in Seconds What Took Lawyers 360,000 Hours, Bloomberg, Feb. 27, 2017, available at https://www.bloomberg.com/news/articles/2017-02-28/jpmorgan-marshals-an-army-of-developers-to-automate-high-finance.

7. Artificial Lawyer, https://www.artificiallawyer.com/author/artificiallawyer/ (last visited July 7, 2020).

8. Michael Livermore & Dan Rockmore, France Kicks Data Scientists Out of Its Courts, Slate, June 21, 2019, available at https://slate.com/technology/2019/06/france-has-banned-judicial-analytics-to-analyze-the-courts.html.

9. Data provided to the Author by Adam Smith Esq., based on his work with Thomson Reuters.

10. Global LegalTech Artificial Intelligence Market Is Expected To Reach Around USD 37,858 Million By 2026, Zion Market Research, Mar. 26, 2019, https://www.zionmarketresearch.com/news/legaltech-artificial-intelligence-market.

11. See Artificial Lawyer, supra note 7.

12. Linklaters Forms New Legal Ops Group + Starts Ops Grad Scheme, Artificial Lawyer, Mar. 10, 2020, https://www.artificiallawyer.com/2020/03/10/linklaters-forms-new-legal-ops-group-starts-ops-grad-scheme/.

13. Debra Cassens Weiss, O’Melveny will use online games to evaluate potential summer associates, A.B.A. J., Nov. 21, 2018.

14. See Utah Implementation Task Force on Regulatory Reform, https://sandbox.utcourts.gov/ (last visited July 6, 2020); Lyle Moran, California bar gives approval to broad sandbox proposal, A.B.A. J., May 15, 2020.

15. Thirty-seven states have something in their Rules about technological competence, and two more have ethics opinions on it. See Rule 1.1 Chart, A.B.A. Center for Prof’l Responsibility, https://www.americanbar.org/content/dam/aba/administrative/professional_responsibility/mrpc1-1-comment-8.pdf (last updated Dec. 1, 2019).

16. See Australian Solicitors Conduct Rules R. 4.1.3A (2015) [hereinafter Australian Solicitor Rules], available at https://www.lawcouncil.asn.au/files/web-pdf/Aus_Solicitors_Conduct_Rules.pdf.

17. Anthony E. Davis & Steven M. Puiszis, An Update on Lawyers’ Duty of Technological Competence: Part I, N.Y. L.J. (Mar. 1, 2019) [hereinafter Tech Competence Part I]. See also Anthony E. Davis & Steven M. Puiszis, An Update on Lawyers’ Duty of Technological Competence: Part II, N.Y. L.J. (May 3, 2019).

18. For a discussion of this topic generally, see Report, Michael Kearns & Aaron Roth, Ethical algorithm design should guide technology regulation, Brookings (Jan. 13, 2020), https://www.brookings.edu/research/ethical-algorithm-design-should-guide-technology-regulation/; for an effort by a regulator to address this issue, see NYC’s Task Force to Tackle Algorithmic Bias Issues Final Report, JD Supra (Jan. 31, 2020), https://www.jdsupra.com/legalnews/nyc-s-task-force-to-tackle-algorithmic-93703/.

19. Michael Kearns & Aaron Roth, Ethical algorithm design should guide technology regulation, Brookings (Jan. 13, 2020), https://www.brookings.edu/research/ethical-algorithm-design-should-guide-technology-regulation/.

20. For a discussion of these issues in greater depth, see Tech Competence Part I, supra note 17.

21. See also Australian Solicitor Rules, supra note 16, at R. 7 (Communication of Advice) & R. 8 (Client Instructions).

22. For a survey of entity regulation of lawyers internationally, including Canada, see International Perspectives on the Regulation of Lawyers and Legal Services (Andrew Boon ed. 2017).

23. For articles describing the process of consolidation now under way, see Artificial Lawyer, supra note 7.

24. To read more about the most recent example of multiple similar ventures, see A&O’s Fuse Returns With Legal + FinTech Streams, Artificial Lawyer (June 1, 2020), https://www.artificiallawyer.com/2020/06/01/aos-fuse-returns-with-legal-fintech-streams/.


By Anthony E. Davis

Anthony Davis is Of Counsel at Clyde & Co US LLP. He is a Lecturer in Law at Columbia University Law School and a Past President of the Association of Professional Responsibility Lawyers.