“Therefore, we call on all AI labs to immediately pause for at least six months the training of AI systems more powerful than GPT-4.”
Well, that stopped me. I was about to download the latest artificial intelligence (AI) system that might help me with this column. For years my “technology” has been the thesaurus. However, after reading that more than 1,000 technology leaders and researchers were urging a pause, my download was on hold.
Images of HAL 9000, the Terminator, and RoboCop gave me pause. I could just see the New York Times lede: The takeover of NORAD, our entire electric grid, and the distribution system for Krispy Kreme donuts has been traced to an AI download by a columnist writing for Litigation News.
But seriously, after reading what seemed to be an article a day on AI, there are some real concerns. Concerns that impact the work we do as attorneys. In my 40+ years as an attorney, we have gone from carbon paper to Xeroxing to email. The pandemic then drastically changed the way we do things. Little did I think that we would see yet another sea change so soon.
Each concern begins with a sentence from a news article followed by some scenarios on how AI might affect trial practice. My apologies for the number of question marks. I don’t have many answers, only questions.
“GPT-4 didn’t just pass the bar exam—it scored in the 90th percentile.”
The bar exam I took was pass/fail. Not sure if I was in the 90th percentile. Probably not. Already there are do-it-yourself programs such as LegalZoom. These programs, combined with competent and ethical AI, would be a powerful combination. This could be an asset for those whose legal needs are underserved. However, “human” attorneys should vet these tools before clients start linking things like bank accounts to them.
What algorithm will handle the balancing of ethical issues, such as the outer limits of “zealous representation”? Ethics opinions from different states can vary on the same subject. How will the gray areas be addressed? Who will create the algorithm?
“A brain wave reader that can detect lies.”
Will AI eventually replace the judge…or the jury? Someone once told me that there was an app for everything. One “lie detection” app touts, “Our fine-tuned, artificial intelligence engine now gives you an ability to see the likelihood of anyone’s lie.”
Not so fast, cautions Jake Bittle, who writes, “In reality, the psychological work that undergirds these new AI systems is even flimsier than the research underlying the polygraph.” However, Bittle’s article was written in 2020. It appears that the capability of AI systems has grown at an exponential rate.
I contacted Bittle by email, and although he has not done any further research on this topic, he responded, “My impression is that even with very advanced AI systems like the ones that have debuted in the past year, there would still be a ‘garbage in, garbage out’ problem. Facial expressions and vocal fluctuations are only of limited use as indicators of a subject’s inner psychological state, just as the galvanic skin response measured by a polygraph doesn’t always indicate that a subject is lying.”
Some law enforcement agencies are already using AI to profile people. One algorithm uses nationality to profile dangerous drivers. Will AI technology be used by law enforcement as one component of probable cause for an arrest?