

GPSolo eReport June 2024

AI and You: An Artificial Intelligence Primer

Jeffrey M. Allen

Summary

  • Welcome to GPSolo eReport’s new column, which discusses the latest developments in artificial intelligence. This month, we present an overview of AI, complete with a glossary of common AI terms.
  • Many of the new programs being offered for attorneys to assist in litigation, document control, drafting, legal research, etc., have some version of generative artificial intelligence (GenAI) at their core.
  • We need to ensure the ethical and responsible development of AI. This includes creating transparent algorithms, avoiding bias, and ensuring human oversight.

GPSolo eReport’s Editorial Board has recognized the growing importance of artificial intelligence (AI) to our professional and personal lives. They have asked us to create a new AI column that will appear in each issue of the eReport to help members of the Division better understand AI, what it does, and what problems it creates. For the first article in the series, we thought that a primer on AI would serve readers best.

Artificial Intelligence Basics

AI has gained significant prominence in recent years. It has started to reshape industries, improve our daily lives, and raise important ethical questions. This primer will summarize AI, its history, key concepts, current applications, challenges, and prospects. We will touch upon several topics in this primer that future columns will explore in greater depth.

AI refers to the capability of a machine or computer program to perform tasks that typically require human intelligence. Such tasks include problem-solving, learning, understanding natural language, recognizing patterns, and data-based decision making.

AI has been part of our lives for many years. Over that time, its capabilities have improved and expanded, producing substantial but incremental growth in our use of and reliance on AI in both our professional and personal lives. You have probably seen this in the evolution of the software offered for searching case law and statutes: it has grown more and more sophisticated as AI has evolved, and we have increasingly relied on it to find information relevant to our cases. Examples of AI in our personal lives include virtual assistants such as Siri, Alexa, and Google Assistant, which use AI to understand and respond to voice commands.

ChatGPT

In 2022, the tech company OpenAI introduced a program called ChatGPT. ChatGPT marks a significant turning point in the evolution of AI, much as the movement of life from the sea to land marked a turning point in the evolution of life as we know it. ChatGPT represents a different form of AI. Its development rapidly accelerated the growth of AI, ramping up its significance to us personally and professionally and opening the door to many new applications. We call this new form of AI “generative AI” (GenAI) because it can generate things such as text, music, diagrams, pictures, and much more.

ChatGPT is an artificial intelligence chatbot that uses natural language processing to create humanlike conversational dialogue. It produces humanlike text, images, or videos in response to prompts or instructions we provide. The acronym “GPT” stands for “Generative Pre-trained Transformer,” referring to how the program processes requests and formulates responses. ChatGPT is trained with reinforcement learning from human feedback, in which reward models rank its responses; that feedback loop, combined with machine learning, helps it improve future responses. The language model answers questions and can compose a variety of written content for personal and professional uses, including articles, social media posts, essays, code, and emails.
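
To make the prompt-and-response exchange concrete, here is a minimal sketch of how a developer might send a prompt to a GPT-style chatbot using OpenAI’s Python client library. The model name and the sample prompt are illustrative assumptions, not recommendations, and the library’s interface may change over time.

    # Minimal sketch: sending a prompt to a GPT-style chatbot through the
    # OpenAI Python client (pip install openai). An API key must be set in
    # the OPENAI_API_KEY environment variable. The model name is assumed.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative; substitute any available model
        messages=[
            {"role": "system", "content": "You are a concise legal research assistant."},
            {"role": "user", "content": "Explain the duty of candor to the tribunal in two sentences."},
        ],
    )

    # The model's reply comes back as ordinary text.
    print(response.choices[0].message.content)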

Two recent advances have played a critical part in GenAI going mainstream: transformers and the breakthrough language models they enabled. “Transformers” (not the movie version) are a type of machine learning model that allows researchers to train increasingly large models without labeling all the data in advance. New models train on billions of pages of text, resulting in answers with more depth and comprehension. Transformers unlocked a notion called “attention,” which lets a model track connections between words across pages, chapters, and books rather than only within individual sentences. Transformers can also use this ability to track connections for scientific analysis of such things as DNA.
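
For the technically curious, the heart of “attention” is a simple calculation: score every word (token) in a passage against every other word, convert the scores into weights, and blend the words’ numeric representations using those weights. The sketch below is a simplified illustration in Python with NumPy; real transformer models repeat this step many times over learned, high-dimensional vectors rather than the toy numbers used here.

    # Simplified illustration of scaled dot-product attention, the core
    # operation inside transformer models. Shapes here are toy-sized;
    # production models use learned, high-dimensional representations.
    import numpy as np

    def softmax(x, axis=-1):
        e = np.exp(x - x.max(axis=axis, keepdims=True))
        return e / e.sum(axis=axis, keepdims=True)

    def attention(queries, keys, values):
        d = queries.shape[-1]
        scores = queries @ keys.T / np.sqrt(d)   # score every token against every other
        weights = softmax(scores)                # turn scores into weights summing to 1
        return weights @ values                  # blend the value vectors by weight

    # Three "tokens," each represented by a 4-number vector.
    tokens = np.random.rand(3, 4)
    print(attention(tokens, tokens, tokens))     # self-attention over the tokens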

Rapid advances in large language models (LLMs) with billions or even trillions of parameters opened the door for GenAI models to write engaging text, paint photorealistic images, and even create somewhat entertaining sitcoms on the fly. The rapid evolution of ChatGPT’s capabilities has expanded the use of AI, changing the nature of the beast and its role in our society. The evolution of AI in the wake of ChatGPT has created a tsunami of new applications and put AI in our homes, our cars, our health and medical care, and our offices. For example, in health care, AI aids in diagnosing diseases, predicting patient outcomes, and discovering drugs. In finance, AI can assess risk, detect fraud, and monitor high-frequency trading. In transportation, self-driving cars and AI-based traffic management systems may improve safety and efficiency.

Many industries have started considering the trade-off between human workers and ChatGPT-powered AI. Some have actually replaced human workers in some positions with AI-driven bots.

To give you some perspective, the move from strictly relying on natural intelligence (that’s what we have and use without the help of computers) to using AI could be compared to the move from walking to riding a bicycle. Similarly, the evolution of AI before GenAI could be compared to the evolution from the most basic bicycles (effectively a wooden scooter with no pedals) in the early 1800s to the first commercial car (a one-cylinder Winton, in 1898). Continuing the analogy, from that point, the evolution from pre-generative AI to GenAI would equate to the evolution from the Winton to the modern Ferrari.

AI’s Future

AI continues to improve and to learn and apply new skills as it improves. Nothing suggests that its rate of growth will diminish in the near future. An early version of ChatGPT (ChatGPT-3) took and passed law school exams without ever attending a day of law school. It did not, however, pass the bar exam administered to it. But many people who finish law school also fail the bar the first time they take it. A later iteration, ChatGPT-4, however, successfully passed the Uniform Bar Exam with a very respectable score of 297, which bested 90 percent of the human examinees who went to law school. ChatGPT is not focused exclusively on the legal profession. It also passed the CPA exam on its second try and has passed or nearly passed all three components of the U.S. Medical Licensing Exam. Does this mean we will soon have bot lawyers, bot doctors, bot accountants? Does it mean that we will create separate licensing categories for artificial intelligence in these areas? Does it mean we will not bother to license the bots and simply let them work as assistants to the various professions? Time will tell whether licensing boards will license a bot or restrict licenses to those of us limited by natural intelligence (which may be better in some respects but certainly much slower than GenAI).

Collaboration

Expect to see increased collaboration between humans and AI, with AI bots assisting in many professions, including law, accounting, and medicine. In truth, we have already seen bots doing significant work in all those fields and many others. Many of the new programs being offered for attorneys to assist in litigation, document control, drafting, legal research, etc., have some version of GenAI at their core. In fact, many would not exist but for the capabilities created by GenAI and ChatGPT.

We also will see bots supported by GenAI playing a bigger role in our personal lives. Expect to see many doctors using AI as part of our health care, accountants using AI to help prepare our taxes, and our cars using AI to drive us around, likely more safely than we drive ourselves, as natural intelligence has a greater susceptibility to distraction than AI. (The last of these raises an interesting question: If the car is driving itself and you are in the back seat legally intoxicated, are you operating a vehicle under the influence, or does the AI driving the car count as a designated driver?)

We have started to see the evolution from limited but useful virtual assistants such as Siri and Alexa toward far more sophisticated and capable virtual assistants, such as Microsoft’s Copilot, which is frankly amazing, even to a jaded technophile like me. If you have not yet experienced Copilot, take a good, hard look at it and what it can do to help you. You can test it out with the free version, but it has some serious limitations, the most important of which is the lack of interface with Microsoft Office. I chose the Pro version ($20 per month) to get its extra features, including the interface with the Microsoft Office Suite. The creation of assistants such as Copilot does not render Siri or Alexa obsolete—yet—as they currently focus on different things. But I predict that those earlier assistants are nearing end-of-life as they now exist and will either evolve into something more complex and capable or go the way of the dodo as more modern bots replace them using increasingly sophisticated GenAI.

Governance

AI, like most of technology, comes as a double-edged sword, bringing risks and benefits to us. Because of its power, both risks and benefits have increased significance. Governments and organizations have started developing guidelines and regulations for AI to ensure safety, fairness, and accountability in its deployment. Those processes remain in their infancy and do not yet offer any significant protection. More regulations will come in time, but for now, we are in the wild, wild west of the 1800s.

Ethics

We need to ensure the ethical and responsible development of AI. This includes creating transparent algorithms, avoiding bias, and ensuring human oversight. Ethical issues related to AI should concern us at all levels of personal and professional use. AI has no moral compass and no sense of right or wrong. It is not immoral; it is truly amoral. This should particularly concern us in connection with its use as a legal assistant or when entrusting it with confidential, personal, medical, or financial information. AI has already helped several attorneys run afoul of their legal and ethical obligations and the laws under which they operate as attorneys. GenAI actually writes as well as or better than most attorneys I have encountered in my now 50-plus years practicing law. Unfortunately, nobody told it not to make up cases or misapply real ones, citing them for propositions they do not support.

Some attorneys have had AI bots do their research and draft briefs or memoranda of points and authorities and then submitted them to the court without verifying the legitimacy of the references or the accuracy of the citations. As if that were not a poor enough example of natural intelligence, some of these attorneys also had the bright idea of misrepresenting to the court what they did so they would not get into trouble. We should learn several lessons from these examples: (1) as an attorney, you need to take responsibility for your actions; (2) never lie to the court; and (3) never file or lodge with the court anything generated by AI without first having natural intelligence verify the accuracy and legitimacy of every citation. In the words of President Reagan, “trust, but verify.”

There is also the matter of protecting the confidential data entrusted to us by our clients. AI systems collect and analyze vast amounts of data. We need to ensure the safeguarding of such data, which may include personal and case-related information you input about your clients. FYI, the U.S. House of Representatives has banned Microsoft Copilot from all House staff computers because of the Office of Cybersecurity’s concern that Copilot might send confidential House data to unexpected, non-House-approved cloud services.

Other Ethical Considerations and Dangers

In addition to issues of legal ethics noted above, the use of AI also raises more general concerns:

  • Fairness. AI can inherit biases intrinsic to data on which it trains. Those biases can influence the outcomes selected by the AI model.
  • Automation and job loss. Automation of tasks through AI will likely lead to job displacement in some industries. We need to prepare the workforce and address what effects that might have on our economy and our social structure. As a society, we need to consider what we will do about the displaced workers, their potential futures, and economic security.
  • Existential risks. AI’s increasing power and flexibility create concerns about the potential for misuse or unintended consequences that could pose existential risks to our society and our population. Industry and governments have raised these concerns and the need for regulation to mitigate these risks. Doing so will require responsible AI development and the implementation of effective monitoring and governance.

Glossary of AI-Related Terms

I do not intend this glossary to be an all-inclusive list of every term you might encounter in connection with AI. It includes the terms you will most likely encounter in articles and discussions about AI.

Algorithm. A step-by-step set of instructions or rules for solving a specific problem or performing a specific task. AI uses algorithms to train models and make predictions.

Artificial intelligence (AI). Artificial intelligence refers to the development of computer systems capable of performing tasks that typically require human intelligence.

Autonomous. We describe a machine as autonomous if it can perform its task or tasks without human intervention.

Big data. Large and complex datasets not easily managed or analyzed with traditional data processing tools. AI often leverages big data for training and decision-making.

Chatbot. A program designed to communicate with people through text or voice commands in a way that mimics human-to-human conversation.

Data mining. Seeking and discovering patterns, trends, and insights in large datasets.

Deep learning. A subfield of machine learning using artificial neural networks, specifically deep neural networks, to model and solve complex problems, often achieving state-of-the-art performance in tasks such as image and speech recognition.

General (strong) AI. AI with human-level intelligence capable of performing any intellectual task a human can perform. True general AI remains a long-term goal.

Generative AI (GenAI). Artificial intelligence systems that create new content such as text, images, audio, and video. GenAI learns from training data to build its own internal representations, allowing it to create brand-new, realistic, and often highly customized outputs.

Hallucination. Hallucination refers to outputs generated by AI systems that are fabricated rather than grounded in reality. For example, a text generator could hallucinate fictional events and present them as fact.

Large language model (LLM). Large language models (LLMs) are a subset of generative AI specifically designed to process, understand, and generate humanlike text. They’re based on deep learning techniques and trained on massive datasets, usually containing billions of words from diverse sources such as websites, books, and articles. This extensive training enables LLMs to grasp the nuances of language, grammar, context, and even some aspects of general knowledge.

Machine learning (ML). A subfield of AI that focuses on developing algorithms and models enabling computers to improve performance on a task through experience, without explicit programming.

Mathematical model. An abstract description of a concrete system using mathematical concepts and language.

Metrics. Tools used to evaluate the performance of supervised learning models in tasks such as classification and regression.

Narrow AI. Also known as weak AI, narrow AI is designed for specific tasks, such as virtual personal assistants (Siri, Alexa), recommendation systems, and chatbots.

Natural language processing. A field of AI focusing on enabling computers to understand, interpret, and generate human language.

Neural networks. Computer models inspired by the structure and function of the human brain, consisting of interconnected nodes (neurons).

Prompt. A prompt provides context to generative AI systems to produce a desired output. A prompt may be a text description, an image, or even samples of desired audio or video. Carefully engineered prompts guide the creative process while leveraging the model’s knowledge.

Reinforcement learning. A form of machine learning in which an agent (such as a bot or virtual assistant) learns to make decisions by interacting with an environment and receiving rewards or penalties.

Sentiment analysis. Analyzing text to determine the sentiment or emotional tone; often used in social media monitoring and customer feedback analysis.

Supervised learning. A form of machine learning in which the model trains on a labeled dataset, learning to make predictions or decisions based on input data.

Unsupervised learning. A form of machine learning in which the model learns patterns and structures in data without labeled examples, often used for clustering and dimensionality reduction.
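
To illustrate the difference between the last two entries, the hypothetical sketch below uses the scikit-learn library: the supervised model learns from examples that carry human-assigned labels, while the unsupervised model groups the same examples with no labels at all. The data and labels are invented purely for illustration.

    # Contrast between supervised and unsupervised learning using
    # scikit-learn (pip install scikit-learn). Data is made up.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.cluster import KMeans

    # Ten items, each reduced to two numeric features.
    X = np.array([[1, 2], [2, 1], [2, 2], [3, 3], [1, 1],
                  [8, 9], [9, 8], [9, 9], [10, 10], [8, 8]], dtype=float)

    # Supervised: every example carries a label supplied by a human.
    y = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])
    classifier = LogisticRegression().fit(X, y)
    print(classifier.predict([[2.5, 2.5], [9.5, 9.5]]))  # predicted labels

    # Unsupervised: no labels; the model discovers groups on its own.
    clusterer = KMeans(n_clusters=2, n_init=10).fit(X)
    print(clusterer.labels_)  # cluster assignments inferred from structure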

Parts of this article are based on “A Primer on Artificial Intelligence,” written by Jeffrey Allen and Ashley Hallene for the ABA Senior Lawyers Division periodical Voice of Experience, January 2024.
