ChatGPT (OpenAI) has been a hot topic since it launched to the public. This and similar generative model chatbots, such as Bard (Google) and Bing AI, give rise to ethical considerations. This technology is known as generative AI: the program creates new content by predicting which words come next based on a prompt from the user. What are the ethical considerations of using it?
The ABA Model Rules of Professional Conduct (Model Rules) give attorneys some guidance on our ethical obligations. Although generative AI touches on many portions of the Model Rules, I am going to focus specifically on Model Rule 1.1 (Competence), Model Rule 1.6 (Confidentiality), and Model Rules 5.1 and 5.3 (Responsibilities of a Partner or Supervisory Lawyer and Responsibilities Regarding Nonlawyer Assistance).
Model Rule 1.1 Competence
A lawyer shall provide competent representation to a client. Competent representation requires the legal knowledge, skill, thoroughness and preparation reasonably necessary for the representation.
Comment 8 to Model Rule 1.1 states: “To maintain the requisite knowledge and skill, a lawyer should keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology, engage in continuing study and education and comply with all continuing legal education requirements to which the lawyer is subject.” An attorney’s duty to provide competent representation includes making informed decisions about whether AI is appropriate for the legal services provided to a client and whether the program performs as marketed.
In the context of AI, hallucination refers to the phenomenon in which a model produces seemingly plausible output that does not correspond to any real-world input or fact. Generative model chatbots have been found to produce hallucinations, particularly when trained on large amounts of unsupervised data. By now, you have surely heard about the lawyer who used ChatGPT to write his brief. The AI tool fabricated case citations; those fabrications were hallucinations. Of course, that lawyer has faced sanctions. In his opinion, the judge stated that there was nothing “inherently improper” about using AI to assist in legal work, but that lawyers must ensure their filings are correct.
Undoubtedly, there are other aspects to consider. Be wary of fake information being used to manipulate the answers generative model chatbots produce; incorporating methods for monitoring and detecting hallucinations can help address this issue. Another concern is the potential to perpetuate bias and discrimination. These tools are only as unbiased as the data and algorithms used to create them, and there is a risk that they will perpetuate existing biases in the legal system. Overall, attorneys have an ethical obligation to be competent in the use of technology and to ensure that their use of AI-powered tools does not compromise their clients’ interests.