Artificial Intelligence—"What Hath God Wrought"
By Judge Herbert B. Dixon Jr.
August 4, 2023

Here is an enormous, an incalculable force . . . let loose suddenly upon mankind; exercising all sorts of influences, social, moral, political; precipitating upon us novel problems which demand immediate solution; banishing the old, before the new is half matured to replace it. . . . Yet . . . not many . . . who fondly believe they control it, ever stop to think of it as . . . the most tremendous and far-reaching engine of social change which has either blessed or cursed mankind.
Although the above description captures what some would describe as the impact of artificial intelligence on society, the words were written by Charles Francis Adams in 1868 as he made predictions about the first transcontinental railroad. Adams, a grandson of John Quincy Adams (the sixth president of the United States), later became president of the Union Pacific Railroad. I am using the title of this column (Artificial Intelligence—“What Hath God Wrought”) and Adams's railroad musing to remind readers that we have been here before: society has a familiar history of responding with trepidation to transformative inventions.
Nearly a quarter century before Charles Adams's transcontinental railroad musings, on May 24, 1844, Samuel Morse expressed a similar sentiment when he used Morse code to send an inaugural telegraph message from the U.S. Capitol to Alfred Vail at a railroad station in Baltimore, Maryland. The message? “What hath God wrought.” Have I made my point? Every transformative invention is accompanied by perceived ills, challenges, and harm to society . . . but I digress.
My previous technology article, My “Hallucinating” Experience with ChatGPT, caused some readers to think that I might not be a fan of artificial intelligence (AI). Indeed, the factually incorrect and fabricated responses that AI-powered chatbots (Microsoft’s Bing, Google’s Bard, and OpenAI’s ChatGPT) occasionally provide can easily cause one to pause before using that technology. Also, the fact that chatbots produce their responses with such ease and apparent authenticity raises suspicion about whether students submitting essays, lawyers filing briefs, consultants delivering reports, and judges issuing orders used AI chatbots to do their work. Let me say unequivocally that I am a fan of AI and excited about its capabilities. However, we must be aware of every AI product’s limitations and frailties.
The advances in AI technology during our lifetimes have been incredible, and even the recent past is instructive. Some AI capabilities have become so much a part of our lives that significant swaths of society would loudly protest if those capabilities were taken away. In the category of everyday technology, consider giving up the word-processing programs that predict the next word or phrase you should type and the software applications that suggest revisions to your written work to achieve clarity, avoid redundancy, and correct typographical errors and syntax. Would the legal profession give up legal research platforms (e.g., LexisNexis, Westlaw, and Fastcase) or software applications that search for relevant documents in a database of electronically stored materials at an astronomically faster rate and much lower cost than human review? Would the medical profession and hospitals give up AI tools that assist with the diagnosis of uncommon diseases and other medical issues? Would the general public give up Bing, Yahoo, Google, and other internet search engines they use daily to find valuable information on the World Wide Web? Most likely, no!
Concerns of Prominent AI Developers
Prominent AI developers have voiced concerns about the perceived dangers of AI. In May 2023, Geoffrey Hinton, a recipient of the Turing Award from the Association for Computing Machinery and often labeled one of the founding fathers of modern AI, resigned from his position as vice president and engineering fellow at Google. Hinton was very complimentary of Google’s work but highly critical of further AI development, saying he regretted his involvement in developing that technology. His concerns include the possibility that AI chatbots could become more intelligent than humans and be exploited by bad actors. Hinton is also concerned about the potential for AI tools to spread misinformation. Other experts have called for a pause in developing AI chatbots until robust safety measures and regulations can be implemented. Some experts say that Hinton’s concerns are hypothetical. Of note, the CEO of Google described the rapid development of its chatbot, Bard, with the analogy of a speeding train that one day might start building its own tracks.
Around the time of Hinton’s resignation, the president, president-elect, and 17 former presidents of the Association for the Advancement of Artificial Intelligence (AAAI) issued a letter warning about the risks of AI. They expressed their belief that AI will be increasingly game-changing in health care, climate, education, engineering, and many other fields. They noted that AI powers navigation systems, assists in thousands of daily cancer screenings, sorts billions of letters in the postal system, has revealed the structure of thousands of proteins, improves weather predictions, and helps develop new materials while providing engineers with creativity-boosting ideas. One of the signers of this letter was the chief scientific officer at Microsoft, which uses OpenAI’s ChatGPT technology in its Bing search engine.
AI Privacy Concerns
Another concern regarding the use of AI chatbots is privacy. ChatGPT, Bing, Bard, and other major chatbots all have policies to protect against the improper use of personal information. So, what can go wrong? Plenty! I’ll start the list with the possibility of rogue employees, cyberhackers, and unexpected technology glitches—where something that was never supposed to happen nonetheless happens, and the AI developer then promises to fix the problem so that it will never occur again. The privacy concern is that individuals using the enhanced chatbots are submitting medical inquiries, financial strategies, and other information they do not intend for public consumption. Regardless of whether the chatbot’s response is accurate, the mere fact that you made an inquiry about a mental health issue, medicine for a sensitive medical condition, or your ownership of certain assets or a planned investment potentially reveals personal information that you did not intend to put in the public domain.
Privacy concerns are not limited to individuals. Apple, JP Morgan, Walmart, Verizon, and other major businesses have severely limited employee use of external AI tools over concerns about the release of confidential data. Most companies expressing this concern have their own internal AI tools and are developing or acquiring more. The bottom line is that individuals and businesses should be very careful about inputting personal or confidential information into an external entity’s AI tools.
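To make that bottom line concrete, one simple safeguard an organization might adopt is to screen outgoing prompts for obvious identifiers before anything is sent to an outside service. The sketch below is purely illustrative and assumes nothing about any particular vendor's product: the patterns, the redact function, and the send_to_external_chatbot placeholder are hypothetical examples, not an actual chatbot's interface.

```python
import re

# Hypothetical pre-submission screen: mask a few obvious identifiers
# before a prompt ever leaves the organization. Illustrative only.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "phone": re.compile(r"\b\(?\d{3}\)?[ .-]?\d{3}[ .-]?\d{4}\b"),
}

def redact(prompt: str) -> str:
    """Replace obvious personal identifiers with placeholders."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label.upper()} REDACTED]", prompt)
    return prompt

def send_to_external_chatbot(prompt: str) -> None:
    # Placeholder for whatever external service a user might call;
    # no real vendor API is assumed here.
    print("Outgoing prompt:", prompt)

if __name__ == "__main__":
    draft = ("Client Jane Roe, SSN 123-45-6789, jroe@example.com, "
             "asks about the assets held in her trust.")
    send_to_external_chatbot(redact(draft))
```

A screen this simple will, of course, miss plenty (names, case numbers, account details), which is one reason the businesses named above rely on policies restricting employee use of external AI tools rather than on filters alone.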
The Applications of AI Are Vast
In addition to the AI applications noted earlier in this article, AI is now creating music that sounds like your favorite artist singing the songs of another artist, performing music written by AI, writing poems and academic papers, creating artwork on demand, altering photos so that known persons appear to be doing things that never occurred, and creating realistic videos with audio showing people doing and saying things that never happened. Early versions of manipulated photographs and motion pictures were called “trick photography.” The more recent term “deep fakes” refers to both AI-powered audio and video manipulation. Moreover, consultants are using AI to write reports, doctors and hospitals are using AI to provide preliminary medical diagnoses, and lawyers are using AI to draft contracts and prepare the first draft of legal memoranda. Often, the customer service representative you chat or speak with is an AI-powered chatbot writing responses or orally responding to your inquiry. The possibilities are limitless.
Final Thoughts
One of the best examples I can give of the potential power of AI is the short history of AI being used to play the game of chess.
In 1950, Claude Shannon, an American electrical engineer and mathematician who later became a professor at MIT, wrote a paper proposing the idea of training a computer to play chess.
In 1953, Alan Turing, who is considered a founding father of artificial intelligence (a term that was coined after Turing’s death), wrote a program for playing chess.
In 1988, the Deep Thought chess program (whose development team later joined IBM) became the first computer program to beat a grandmaster, Bent Larsen. After that notable occurrence, world chess champion Garry Kasparov declared that a computer program could never beat him in chess. He backed up that claim in 1989 by defeating Deep Thought in a two-game match.
In 1996, Kasparov played a six-game match against IBM’s Deep Blue, the successor to Deep Thought. Kasparov won the match four games to two. However, this match marked the first time a world chess champion lost a game to a computer chess program under regular tournament time controls.
In 1997, Deep Blue and Kasparov had a six-game rematch. This time Deep Blue won 3½ to 2½, winning two games, losing one, and drawing three. From that point on, computer chess programs began regularly beating and then overwhelming humans on the chess board. Experts now believe that, given the advances in AI algorithms and the increases in computing power, further improvements in AI-powered chess programs can be measured only by having the programs play each other.
The short history of AI conquering the game of chess is similar to what is happening now. Society is suffering growing pains trying to understand AI’s limitations and frailties while developers constantly improve their products. Newer versions of Microsoft’s Bing, Google’s Bard, and OpenAI’s ChatGPT are now searching the internet (instead of being limited to a static database). However, these expanded search functions are still limited in their ability to differentiate between reliable websites and those that purposely or inadvertently spread misinformation.
Finally, within your area of work, when you are presented with a draft document prepared by an AI-powered product, treat it as you would a draft prepared by a law clerk, paralegal, or new lawyer whose abilities you have not fully assessed: it is up to you to exercise due diligence before you sign the document.