Artificial Intelligence: From Law Office to Administrative Proceedings
By Judge Paul Armstrong
February 03, 2020 Feature
Artificial intelligence is the buzzword of the moment in the legal profession. Whether it is used as an aid in legal research, in quickly reviewing contract provisions or voluminous discovery responses, in predicting effective settlement values or likely trial results, or in actually preparing and handling routine legal matters for parties, “machine learning” processes are revolutionizing the provision of legal services. There is little doubt that “AI,” as it is generally referred to in the profession, will be an integral part of all law firms to a greater or lesser extent in the coming years.
But how does AI affect or fit into the adjudicatory or regulatory framework of administrative law? How can the AI functions that are proving so effective for private law firms and practitioners be used by administrative agencies in fulfilling their own functions? And can AI be used administratively in an effective, lawful, compassionate, and transparent manner? This article explores the best possible uses of AI, both legal and practical, in an administrative law setting.
What Is AI and What Can It Do?
AI often refers to a computer performing any of the following functions: searching, categorizing, synthesizing, and deciding, all in the context of the “big data” being generated and made available in modern society. AI is the use of a computer to perform “intelligent” functions, such as recognizing and translating handwriting or speech, finding words or patterns in documents or other media, and sorting data into defined categories. In the more sophisticated types of AI following in the tradition of IBM’s Watson, the computer is involved in making or recommending decisions based on the recognition of patterns in relation to humanly defined outcomes. The computer-made or -recommended decision is driven by algorithms embedded in software that allow the computer to learn how best to ensure that a predetermined result is obtained by accepting only accurate (or more accurate) answers. In a simple application, voice recognition software can be trained to transcribe an individual’s voice into written words, or to translate written or spoken languages, by accepting only accurate responses, thereby becoming more effective over time.
A more sophisticated AI application allows the computer to train itself to learn from data and adapt from experience to better arrive at the desired result. This is done by configuring a computer like a human brain (a “neural network”) and allowing the computer to decide for itself how best to arrive at the humanly defined result. An example of this would be DeepMind’s AlphaZero teaching itself how best to play chess through successive games and moves (much as we learned ourselves, only the computer does not forget). If the computer is simply following sorting or finding functions in its learning or is being trained by humans, then the rules by which the computer arrives at a result can be followed sequentially; if the computer trains itself to reach a desired result through a neural network, then the actual manner in which the computer obtains the result is not so apparent (a “black box”). The results of the black box can, however, be tested empirically, as by comparing the result to that obtained by a human, a human-trained computer, or a computer using a different algorithm. What has made AI so important and effective recently is the speed and effectiveness with which computers are being trained to do ever more “intelligent” things: from winning video games to recommending settlements before a jury trial.
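For readers who want to see the distinction concretely, the following is a minimal sketch in Python, assuming the scikit-learn library and its bundled sample data (nothing here comes from the article itself). It trains a transparent decision tree, whose if/then splits can be read off sequentially, alongside a small neural network whose internal weights yield no readable rule, and then tests both empirically on held-out data, just as described above.

```python
# A minimal sketch, assuming scikit-learn and its bundled sample data.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Transparent model: its if/then splits can be read off sequentially.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)

# "Black box": the trained weights of a neural network yield no readable rule.
net = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0),
).fit(X_train, y_train)

# The black box can nonetheless be tested empirically against a baseline.
print("decision tree accuracy:", tree.score(X_test, y_test))
print("neural network accuracy:", net.score(X_test, y_test))
```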
How Is AI Being Used by Private Law Firms and Individuals?
AI is currently being used in the legal field for some of the following:
- Legal research (with natural language).
- Preparation and generation of standardized legal forms, pleadings, and discovery.
- Highlighting of anomalies in voluminous data for more thorough scrutiny.
- Assisting in settlement of legal disputes.
The simpler forms of AI have been used in law for some time: We are all familiar with the way a computer can be trained to be progressively more accurate in recognizing a person’s voice when using Dragon voice recognition software, or with the use of optical character recognition (OCR) to find and highlight words or phrases in voluminous documents or cases. However, the ever-increasing computing power available today allows vast amounts of data to be processed in seconds on ordinary desktop or laptop computers, putting remarkably capable AI within reach of almost anyone.
The legal profession was initially hesitant to adopt AI, and nonlawyers led the way by setting up online interfaces that the public could use to create simple documents of everyday life, such as wills, deeds, and bills of sale. But these simple, nonadversarial transactional uses have morphed into much more complex public interfaces that advise parties in minor matters like parking tickets. For example, an AI-powered service called DoNotPay has helped defeat thousands of contested parking tickets and is now expanding into other areas.1 The success of these lay services has led lawyers and law firms to provide blended legal services on many routine matters, using AI with human legal involvement at key stages in the process.
As the legal profession started to become aware of the power of AI, it began to integrate AI in progressively more complex legal areas with great success. We are all familiar with the keywords and phrases used by Westlaw and LexisNexis as the basis of legal research. But the power of word recognition software and its ability to parse out similarities in text has allowed advanced legal research software such as the “ROSS” service to provide for a natural language examination of entire legal databases. Now, AI research can almost instantaneously answer legal queries or obtain on-point case authority with greater accuracy (and less legalese). Computers can review large numbers of documents to ensure uniformity and consistency in transactional matters. They are also being used to improve document review in such areas as mergers and acquisitions, where possible errors or data irregularities may be highlighted for further human review and consideration. AI is being used in reviewing voluminous discovery documents (or other data) to focus on more important disclosures or possible anomalies in the evidence provided.
AI can also assist in health care or food and drug compliance decisions, where the convergence of statutes, case law, regulatory guidance, national and local coverage determinations, and other factors can create a legal minefield. AI can be effective in dispute resolution, whether through simple blind bidding on a two-party resolution platform or through more complex weighted negotiation among two or more parties with multiple issues.2 Most remarkably, natural language recognition, together with machine learning that associates certain repetitive text with outcomes in court cases, has been used successfully to predict judicial decisions with significant accuracy, providing predictive analysis for private litigants in rating settlement offers and assessing probable trial outcomes.3
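The mechanics are less mysterious than they may sound. The toy sketch below (assuming scikit-learn; the opinion snippets and outcome labels are invented for illustration, not taken from the cited study) shows the general n-gram approach: recurring phrases in case text become numeric features, and a classifier learns which phrases tend to accompany which outcomes.

```python
# A toy sketch, assuming scikit-learn; the texts and labels are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

opinions = [
    "claimant failed to exhaust administrative remedies",
    "agency decision unsupported by substantial evidence",
    "petition untimely and procedurally barred",
    "the record compels a finding in favor of the claimant",
]
outcomes = [0, 1, 0, 1]  # hypothetical labels: 1 = claimant prevailed

# Words and two-word phrases become numeric features; the classifier learns
# which recurring text tends to accompany which outcome.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
model.fit(opinions, outcomes)

print(model.predict(["decision unsupported by the evidence of record"]))
```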
How Can AI Be Effectively Used in Administrative Law?
AI may be effectively used in administrative law in evaluating and recommending beneficial policies, targeting investigative and corrective efforts, ensuring that decisions consider relevant policy guidelines, and promoting consistency in decisions. Administrative law is, on its face, an area ripe for the application of AI. Our judicial system applies ever-evolving legal doctrines to ambiguous or novel fact situations through a court of general jurisdiction or a jury of 6 or 12 men and women with differing backgrounds and knowledge bases. In contrast, administrative bodies or administrative law judges (ALJs) apply a static body of law, further clarified by administrative regulations and at times by written guidance, to generally recurring fact situations. Administrative law is also applied by agency administrators and adjudicators or judges with significant knowledge of and expertise in this law and familiarity with the fact situations commonly giving rise to disputes.
AI could provide the benefit of testing the wisdom, and predicting the possible outcomes, of prospective administrative regulations before they are promulgated. This could be done by creating a model, a set of mathematically specified factors affecting the behavior of regulated parties, and allowing AI to predict the outcome of various regulations for the interested parties and the economy in general. More basically, AI could be applied to ensure that any proposed regulation is internally consistent and in conformance with the statutory directive empowering the agency. AI could also be used to check new regulations for consistency with existing law and regulations and to highlight any necessary revisions or deletions in current regulations.
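As a rough illustration of the modeling idea, the sketch below (all parameters invented; this is a cartoon of agent-based simulation, not any agency’s actual methodology) specifies a single factor driving regulated parties’ behavior, compliance cost versus penalty, and compares predicted compliance under three candidate penalty levels in a draft rule.

```python
# A cartoon of regulatory modeling; every parameter here is invented and
# the behavioral rule is deliberately simplistic.
import random

def predicted_compliance(penalty: float, n_firms: int = 10_000, seed: int = 0) -> float:
    """Fraction of simulated firms that comply when each firm's compliance
    cost is random and a firm complies only if its cost is below the penalty."""
    rng = random.Random(seed)
    complied = sum(1 for _ in range(n_firms) if rng.uniform(0, 100) < penalty)
    return complied / n_firms

# Compare three candidate penalty levels in a draft rule before promulgation.
for penalty in (20, 50, 80):
    print(f"penalty {penalty}: predicted compliance {predicted_compliance(penalty):.0%}")
```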
Even more significant may be the power of AI in sampling periodic filings, or in reviewing public information available or reported to the agency, in order to “kick out” any anomalous filings or to focus on an area or company that might warrant additional review. An example might be the use of AI by a municipality to evaluate water usage and other data in order to focus on areas that appear to be leaking or are most likely to fail soon.4 AI could also be used to crunch voluminous data culled from filings or other sources, both to identify areas of possible beneficial regulatory action and to better identify actors that may be operating out of compliance with the regulatory system.
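A minimal sketch of this kind of screening appears below (assuming scikit-learn and NumPy; the filing figures are invented). An isolation forest, one common anomaly detection technique, flags the reports that deviate most from the bulk of the data so that a human examiner can follow up.

```python
# A minimal sketch, assuming scikit-learn and NumPy; the figures are invented.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Hypothetical quarterly filings: [reported revenue, reported losses].
filings = rng.normal(loc=[100.0, 10.0], scale=[5.0, 1.0], size=(200, 2))
filings[0] = [100.0, 45.0]  # one filing with wildly out-of-line losses

# The isolation forest scores how easily each record can be "isolated";
# outliers isolate quickly and are flagged with -1.
detector = IsolationForest(contamination=0.01, random_state=0).fit(filings)
flags = detector.predict(filings)

print("filings flagged for human review:", np.where(flags == -1)[0])
```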
Of course, the power of AI in assessing big data assumes a relatively uniform reporting medium from which such data can be collected on a large scale. As pointed out by Melanie Combs-Dyer, director of the Medical Compliance Group at Medicare, increased use of the Fast Healthcare Interoperability Resources (FHIR) standard might assist large organizations such as Medicare in ensuring uniformity in initial or review decisions.5 The standardized information thus made available could promote consistency among the various components of the Medicare system, many of which are contractors, avoiding unnecessary appeals to Medicare ALJs and thus reducing any decisional backlog.
But probably the most significant area in which AI could be used in administrative law is in ensuring uniformity and consistency in enforcement actions by administrative agencies. An oft-sounded criticism among those dealing with ALJs is that the results of an administrative determination can vary widely among different ALJs. While this is also true of judges in courts of general jurisdiction, and is certainly true of juries, the relatively uniform body of law and regulations governing most administrative decision-making and the established expertise of the judges should generally make such decisions more uniform. When there is a significant divergence in decisional outcomes given similar fact patterns, AI could be used to examine the outcomes and clarify the possible cause of the divergence. The accuracy of the natural language AI method developed by researchers at University College London, the University of Pennsylvania, and the University of Sheffield in predicting the outcomes of cases decided by the European Court of Human Rights gives a great deal of hope that AI could tease out determinative factors in ALJ decisions so as to promote greater accuracy and consistency.6
One way that consistency in administrative decisions can be achieved is by using AI as a backup to ALJ decision-making to prevent clear legal or factual errors that would often subject such decisions to reversal on appeal within the agency or, ultimately, in a court. In this regard, the Social Security Administration (SSA) has taken an active role in promoting the use of AI, first in its appeals process and then as an aid in writing and editing ALJ decisions themselves. The product of an Appeals Council reviewer frustrated with his own performance in reviewing ALJ decisions for inconsistencies, the SSA’s Insight program is gradually working its way into the SSA process, with the aim of fewer legal and factual errors in ALJ decisions, greater consistency in outcomes, and fewer federal court remands.7 The program’s developer, Kurt Glaze, coupled the SSA’s “electronic file” with word recognition software and a parsing of significant decisional areas to create a program that now assists in disability determination decisions. Drawing on databases such as the Dictionary of Occupational Titles, existing Social Security law and regulations, and court decisions that might affect the particular area in which a case was brought, Insight has been a useful tool for ALJs and decision writers in avoiding common errors and in highlighting possible weaknesses in a proposed written decision.
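To make the idea concrete, here is a deliberately simplified sketch of this kind of automated check (this is not SSA’s Insight program, whose internals are not public; the rule, the occupation list, and the draft text are all invented). It parses a draft decision and flags an internal inconsistency of the sort a reviewer would otherwise have to catch by eye.

```python
# Not SSA's Insight program; the rule, occupation list, and draft text
# below are all invented for illustration.
import re

draft = (
    "The claimant is limited to sedentary work. "
    "The claimant can perform her past relevant work as a construction laborer."
)

# Hypothetical rule: heavy occupations are inconsistent with a sedentary
# residual functional capacity and should be flagged for the writer.
HEAVY_OCCUPATIONS = {"construction laborer", "warehouse worker"}

if re.search(r"\bsedentary work\b", draft, re.IGNORECASE):
    for job in sorted(HEAVY_OCCUPATIONS):
        if job in draft.lower():
            print(f"FLAG: sedentary limitation conflicts with cited job: {job}")
```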
If AI can be relied on to increase the accuracy and consistency of regulatory adjudications, could an AI decision-maker ultimately replace the administrative law judge in the decision-making process? The Administrative Procedure Act and similar state statutes create a right in an aggrieved party to a hearing before an impartial judge in the event of an adverse agency decision. Could this judge eventually be a computer?
The prospect of at least an initial decision being made in such a case by a computer aided by AI was recently examined, and the authors concluded that current Supreme Court precedent should not categorically prohibit using machine learning algorithms to adjudicate administrative claims on due process grounds.8 Citing Mathews v. Eldridge,9 the authors compared a machine-learning decision to the paper review conducted by Social Security employees in terminating disability benefits, a process approved by the Supreme Court in Mathews. In Mathews, the Court set out a balancing test for evaluating a system that did not involve a formal hearing, weighing three factors: the private interest affected, the risk of error, and the government’s interest in reducing costs.10 On those factors, the Supreme Court found that the balance came down in favor of the paper review.
However, an examination of the Continuing Disability Review process used in terminating Social Security disability benefits discloses an elaborate hearing procedure by which an aggrieved party could contest the initial paper review decisions, including both an adjudicatory hearing before a hearing officer and a more formal hearing before an SSA ALJ.11 The right to appeal any decision based only on a review of existing evidence, as is common with AI algorithms, to a human reviewer in a formal or informal hearing might therefore be a requirement under existing due process precedent.
While AI can certainly increase the consistency of administrative decision-making, it is by no means a panacea for the increasing burden on agencies with respect to adjudicatory hearings. As pointed out in a recent report, “the procedural consistency of algorithms is not equivalent to objectivity.”12 Often, big data reflects an inherent bias in the manner of its collection: The authors point to the systematic bias discovered in a risk assessment algorithm commonly used in criminal sentencing across the country.13
Because the training data used to create a learning algorithm may reflect bias, or at least prior discriminatory practices, the algorithm may encode existing human bias into a facially objective AI procedure. Even when sensitive data fields (such as race or gender) are hidden, learning algorithms can implicitly reconstruct those fields from probability data and proxy variables. In addition, decisions with social policy implications often involve the weighing of many criteria that do not reduce to the simple benefit calculation on which AI is ultimately based. The possibility of algorithmic error and/or bias has caused some to conclude that any AI decision should be subject to some form of algorithmic audit, allowing a human decision-maker to review the results of any AI decision for objective fairness and the absence of any legally prohibited decisional criteria.14
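What such an audit might look like, in its most elementary form, is sketched below (assuming the pandas library; all data are invented). The audit tabulates outcomes against a sensitive attribute the model itself never saw, to surface disparate results that proxy variables may have reintroduced; a gap is a signal for human review, not proof of illegality.

```python
# A minimal audit sketch, assuming pandas; all data are invented.
import pandas as pd

# Outcomes of an automated decision process, joined after the fact with a
# sensitive attribute the model never saw during training.
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   0,   1,   0],
})

rates = decisions.groupby("group")["approved"].mean()
print(rates)

# A large gap is a signal for human review, not proof of illegal bias,
# but it tells the auditor where to look.
if abs(rates["A"] - rates["B"]) > 0.2:
    print("FLAG: approval rates diverge across groups; refer for human audit")
```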
Conclusion
There is little doubt that AI is changing the playing field in all areas of law, and it can be expected to do the same in the field of administrative law. AI has already made possible accurate and speedy legal research using natural language, allowed for the sorting and classification of massive data sets when recorded in some uniform format, and proven its value in forecasting behavior in closed systems, such as pipelines. It can clearly be used lawfully by administrative agencies to better target investigative and remedial resources and to ensure regulatory consistency in issuing policy and guidance.
Going beyond the regulatory to the adjudicative, at least some scholars believe that existing legal authority would allow initial agency adjudicative decisions to be made through artificial intelligence if a robust system of appeal to human decision-makers were provided. However, there is a possibility of algorithmic bias in any AI program itself, and a possibility that input data might reflect preexisting discrimination or other illegal patterns that a computer was not programmed to recognize. For these reasons, an agency seeking to enforce an initial administrative decision made or aided by AI may need to disclose and support the specific AI procedure used and/or the database upon which the decision was based.
While AI can offer procedural consistency and decisional timeliness, due process and agency accountability requirements would probably require a formal hearing before an ALJ in which a claimant could challenge the AI process itself as part of any appeal from a lower-level determination made or aided by AI. The decision-maker may be assisted by AI in reaching his or her decision and reviewed for accuracy and consistency by agency AI, but ultimately a human being is going to be responsible for making an administrative adjudicatory decision.
Information and views expressed in this article are those of the author alone and do not reflect the policies or opinions of the Social Security Administration or any agency or employee of the federal government.
Endnotes
1. Lisa M. Krieger, Stanford Student’s Quest to Clear Parking Tickets Leads to “Robot Lawyers,” Mercury News (Sept. 15, 2019, 4:06 PM), www.mercurynews.com/2019/03/28/joshua-browder-22-builds-robot-lawyers.
2. Arno R. Lodder & Ernest M. Thiessen, The Role of Artificial Intelligence in Online Dispute Resolution, Proceedings of the UNECE Forum on ODR 2003, www.odr.info/unece2003, available at https://pdfs.semanticscholar.org/7bbt/d664ecf7b931ba1442c92507df5161fcaa96.pdf (last visited Sept. 15, 2019).
3. Matthew Hutson, Artificial Intelligence Prevails at Predicting Supreme Court Decisions, Science (May 2, 2017), https://www.sciencemag.org/news/2017/05/artificial-intelligence-prevails-predicting-supreme-court-decisions.
4. Trevor Hill, How Artificial Intelligence Is Reshaping the Water Sector, Water Fin. & Mgmt. (Mar. 5, 2018), http://waterfm.com/artificial-intelligence-reshaping-water-sector.
5. Phone interview with Melanie Combs-Dyer, Dir., Med. Compliance Grp., Medicare (June 6, 2019).
6. Nikolaos Aletras, Dimitrios Tsarapatsanis, Daniel Preotiuc-Pietro & Vasileios Lampos, Predicting Judicial Decisions of the European Court of Human Rights: A Natural Language Processing Perspective, 2 PeerJ Computer Sci. e93 (Oct. 24, 2016), http://peerj.com/articles/cs-93.
7. Phone interview with Kurt Glaze, developer of Insight program (May 6, 2019).
8. Cary Coglianese & David Lehr, Regulating by Robot: Administrative Decision Making in the Machine Learning Era, 105 Geo. L.J. 1147, 1184–87 (2017).
9. 424 U.S. 319 (1976).
10. Id. at 342–47.
11. 20 C.F.R. §§ 404.907–922, 404.993, 404.999a–999d, 404.1546, 404.1597, 404.1597a; POMS DI 12026.001.
12. Osonde Osoba & William Welser IV, RAND Corp., An Intelligence in Our Image 2 (2017).
13. Julia Angwin, Jeff Larson, Surya Mattu & Lauren Kirchner, Machine Bias: There’s Software Used Across the Country to Predict Future Criminals. And It’s Biased Against Blacks, ProPublica (May 23, 2016), https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing.
14. Osoba & Welser, supra note 12, at 25.