Of the resolutions adopted at the annual meeting of the ABA policymaking body, one stands out: it “urges courts and lawyers to address the emerging ethical and legal issues related to the usage of artificial intelligence (“AI”) in the practice of law.”
This resolution is important for several reasons:
- It reflects the proliferation of AI-related challenges and opportunities that law firms, corporations, and ordinary citizens face. Lawyers must not only advise clients but also contend with legal, ethical, business, and malpractice risks involved in deploying AI in their practices. Questions abound: to what extent may my clients use impenetrable, potentially biased algorithms to make determinations that assess or affect customers or unsuspecting citizens? What should they disclose? What representations can my clients fairly make about the AI solutions they market? Should my firm use AI in recruiting? Is AI more accurate than conventional approaches in complying with data subject access requests under GDPR and CCPA? Can I sufficiently trust AI in e-discovery to make representations to the court – or to determine case strategy?
- It recognizes that AI ethics extend beyond professional ethics: “AI, its production, and deployment should be beneficial (or at least not detrimental) to the lawyer, the court, clients, and society in general.” (Emphasis added.) This recognition should be applauded: the beneficial adoption of AI in the law is predicated on the satisfaction of both professional and societal ethics. It is surprising how often this vital point is overlooked.
- It highlights the need for sound evidence that AI meets its intended purpose: “How does the lawyer or court know if the AI is operating properly?” This question (the AI equivalent of knowing whether a drug is effective or a car safe) is seldom asked or satisfactorily answered. Worse, the effectiveness of AI is often assumed. Yet, the best scholarship we have in this respect – the groundbreaking US NIST studies in e-discovery – suggests caution: while a metastudy found that two automated systems performed “conclusively” better than human attorneys, many other systems fared poorly.
The resolution is timely because of the domestic and international context:
Recently and nearly concurrently, international organizations published principles for the ethical adoption of AI. Some are certain to affect the intersection of AI and the law:
- The OECD principles on AI, ratified by 42 countries, including the United States. These serve as a reference in US NIST’s mandate, under a recent presidential Executive Order, to develop AI standards.
- The Council of Europe’s Ethical Charter on the Use of Artificial Intelligence in Judicial Systems and their Environment, which will serve as the basis for certifications for AI-enabled systems in the law. Like GDPR, it can be expected to have a global impact.
- The Principles of the Law Committee of the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems, which will serve as a basis for standards and accreditations for AI in the law. (The IEEE is a global technology think-tank and leading standards-setting body.)
Representatives of the OECD, the Council of Europe, the IEEE, judicial education institutions, bar associations, in-house and outside counsel, and leading law-and-technology academic centers from the United States and Europe will gather in Athens, Greece, in September to discuss how such principles might inform instruments for the trustworthy adoption of AI in the law. The ABA resolution provides welcome context to this multi-stakeholder dialogue.
The resolution leaves one important question unasked.
The resolution could have accomplished more with additional focus on competence. It correctly recognizes, referencing Rule 1.1 of the ABA Model Rules (Duty of Competence), that attorneys must “be informed, and up to date, on current technology.” It also underscores that attorneys “cannot be expected to know all the technical intricacies of AI systems.” But it falls short of asking a crucial question: What constitutes evidence of competence for use of AI in the legal system? Today, the legal system and society have no basis to trust the operators of AI in the law, from those who use AI to produce risk assessments in criminal justice to those who operate Technology Assisted Review in discovery.
The global momentum towards norms for the trustworthy adoption of AI will have profound effects on legal systems, the practice of law, regulatory compliance, and society. The ABA has just issued a call to action and taken a meaningful step in contributing to the development of those norms.