When ChatGPT was made public, we were curious but cautious. It has become synonymous with all things artificial intelligence (AI), having quickly become ingrained in popular culture as the next revolutionary technology. For one of us, it has been a helpful aide, assisting with gathering research on the best sites to visit for that next vacation or identifying crucial educational resources. For the one of us who serves as adjunct faculty, it has been difficult to decide whether to view it as a helpful tool or one that crosses the line into plagiarism. The way all forms of AI have developed in the last few years has been revolutionary. However, many, including lawyers and policymakers, have voiced and continue to voice concerns over its capacity to promote bias and discrimination.
Generative artificial intelligence (an AI system capable of generating information in a response format), like ChatGPT, is not a novel concept. Think of the simple chatbots some private companies were introducing even before ChatGPT. It has simply become much more accessible, more prevalent (e.g., as a mobile application download), and faster, with a ton of data at its disposal. It has also been monetized in its own right. It is important to note that AI has many more capabilities, which can be categorized into four stages: reactive, which has extremely limited capability and no memory; limited memory, which "uses memory to learn and improve its responses;" theory of mind, which can emulate some aspects of human interaction, such as needs; and self-awareness, in which human-like intelligence and self-awareness are achieved. As it stands, current technology has achieved the theory of mind stage. Understanding these principles captures the constantly evolving character of a vastly unregulated technology like AI. Weighing its benefits against its downsides is a slippery slope.
The notion that AI systems can promote bias and prejudice is not novel; the alarm has been sounding for some time. One report predicted that through 2022, 85% of AI projects implemented by organizations "w[ould] deliver erroneous outcomes due to bias in data, algorithms or the teams responsible for managing them." Additionally, research conducted by the University of Southern California found that up to 38.6% of the facts used by AI were biased. As the popularity and use of AI increase, we are witnessing more regulatory stances. One example is the joint statement issued by the Federal Trade Commission, the Department of Justice Civil Rights Division, the Equal Employment Opportunity Commission, and the Consumer Financial Protection Bureau, which outlined their position on taking a closer look at possible discrimination involving AI systems and other automated processes.
In this issue, the authors discuss the implications and inner workings of AI. While we, as attorneys, policymakers, and scientists, want to encourage the advent of new and emerging technology, it is also imperative that we remain stewards of responsible technology by tackling concerning characteristics like bias and prejudice. Emerging technology should be a good for all, not just for some. It cannot count as revolutionary if it continues to exclude or negatively impact certain groups of people. As the Membership and Diversity Committee of the Section of Science & Technology, it is our goal to encourage discussion of these issues through programming, writing articles, and promoting policy that addresses them.