Governance
AI, like most technology, comes as a double-edged sword, bringing us both risks and benefits. Because of AI's power, those risks and benefits carry heightened significance. Governments and organizations have started developing guidelines and regulations for AI to ensure safety, fairness, and accountability in its deployment. Those efforts remain in their infancy and do not yet offer any significant protection. More regulation will come in time, but for now, we are in the wild, wild west of the 1800s.
Ethics
We need to ensure the ethical and responsible development of AI. This includes transparent algorithms, avoiding bias, and ensuring human oversight. Ethical issues related to AI should concern us at all levels of personal and professional use. AI has no moral compass and no sense of right or wrong. It is not immoral; it is truly amoral. This should particularly concern us in connection with its use as a legal assistant or when entrusting it with confidential, personal, medical, or financial information. AI has already led several attorneys to run afoul of their legal and ethical obligations and the rules under which they practice. GenAI actually writes as well as or better than most attorneys I have encountered in my now 50-plus years practicing law. Unfortunately, nobody told it not to make up cases or misapply real ones, citing them for propositions they do not support.
Some attorneys have had AI bots do their research and draft briefs or memoranda of points and authorities and then submitted them to the court without verifying the legitimacy of the references or the accuracy of the citations. As if that were not a poor enough example of natural intelligence, some of these attorneys also had the bright idea of misrepresenting to the court what they did, so they would not get into trouble. We should learn several lessons from these examples: (1) as an attorney, you must take responsibility for your actions; (2) never lie to the court; (3) never file or lodge with the court anything generated by AI without first having natural intelligence verify the accuracy and legitimacy of every citation. In the words of President Reagan, “trust, but verify.”
There is also the matter of protecting the confidential data our clients entrust to us. AI systems collect and analyze vast amounts of data. We need to safeguard that data, which may include personal and case-related information you input about your clients. FYI, the U.S. House of Representatives has banned Microsoft Copilot from all House staff computers because its Office of Cybersecurity is concerned that Copilot might expose confidential House data to cloud services the House has not approved.
Other Ethical Considerations and Dangers
In addition to issues of legal ethics noted above, the use of AI also raises more general concerns:
- Fairness. AI can inherit biases intrinsic to the data on which it trains. Those biases can influence the outcomes the model produces.
- Automation and job loss. Automation of tasks through AI will likely lead to job displacement in some industries. We need to prepare the workforce and address what effects that might have on our economy and our social structure. As a society, we need to consider what we will do about the displaced workers, their potential futures, and economic security.
- Existential risks. AI’s increasing power and flexibility create concerns about the potential for misuse or unintended consequences that could pose existential risks to our society and our population. Industry and governments have raised these concerns and the need for regulation to mitigate these risks. Doing so will require responsible AI development and the implementation of effective monitoring and governance.
Glossary of AI-Related Terms
I do not intend this glossary to be an all-inclusive list of every term you might encounter in connection with AI. It includes the terms you will most likely encounter in articles and discussions about AI.
Algorithm. A step-by-step set of instructions or rules for solving a specific problem or performing a specific task. AI uses algorithms to train models and make predictions.
Artificial intelligence (AI). Artificial intelligence refers to the development of computer systems capable of performing tasks that typically require human intelligence.
Autonomous. We describe a machine as autonomous if it can perform its task or tasks without human intervention.
Big data. Large and complex datasets not easily managed or analyzed with traditional data processing tools. AI often leverages big data for training and decision-making.
Chatbot. A program designed to communicate with people through text or voice commands in a way that mimics human-to-human conversation.
Data mining. Seeking and discovering patterns, trends, and insights in large datasets.
Deep learning. A subfield of machine learning using artificial neural networks, specifically deep neural networks, to model and solve complex problems, often achieving state-of-the-art performance in tasks such as image and speech recognition.
General (strong) AI. AI with human-level intelligence capable of performing any intellectual task a human can perform. True general AI remains a long-term goal.
Generative AI (GenAI). Artificial intelligence systems that create new content such as text, images, audio, and video. A GenAI model learns patterns from its training data and builds its own internal representations, allowing it to create brand-new, realistic, and often highly customized outputs.
Hallucination. Hallucination refers to outputs generated by AI systems that are fabricated rather than grounded in reality. For example, a text generator could hallucinate fictional events and present them as fact.
Large language model (LLM). Large language models (LLMs) are a subset of generative AI specifically designed to process, understand, and generate humanlike text. They’re based on deep learning techniques and trained on massive datasets, usually containing billions of words from diverse sources such as websites, books, and articles. This extensive training enables LLMs to grasp the nuances of language, grammar, context, and even some aspects of general knowledge.
Machine learning (ML). A subfield of AI that focuses on developing algorithms and models enabling computers to improve performance on a task through experience, without explicit programming.
Mathematical model. An abstract description of a concrete system using mathematical concepts and language.
Metrics. Quantitative measures used to evaluate the performance of AI models in tasks such as classification and regression.
Narrow AI. Also known as weak AI, narrow AI is designed for specific tasks, such as virtual personal assistants (Siri, Alexa), recommendation systems, and chatbots.
Natural language processing (NLP). A field of AI focusing on enabling computers to understand, interpret, and generate human language.
Neural networks. Computer models inspired by the structure and function of the human brain, consisting of interconnected nodes (neurons).
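For readers curious what a "neuron" in such a network actually computes, here is a minimal Python sketch. The inputs, weights, and bias below are illustrative values chosen by hand; in a real network, training adjusts the weights automatically.

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: a weighted sum of the inputs plus a bias,
    passed through a sigmoid 'activation' that squashes the result into (0, 1)."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))

# Two inputs, with hand-picked weights for illustration only.
output = neuron([1.0, 0.5], [0.4, -0.2], bias=0.1)
print(round(output, 3))  # → 0.599
```

A network simply wires thousands or millions of these units together in layers, with each layer's outputs feeding the next.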
Prompt. A prompt provides context to generative AI systems to produce a desired output. A prompt may be a text description, an image, or even samples of desired audio or video. Carefully engineered prompts guide the creative process while leveraging the model’s knowledge.
Reinforcement learning. A form of machine learning in which bots, virtual assistants, or other GenAI creatures learn to make decisions by interacting with an environment and receiving rewards or penalties.
Sentiment analysis. Analyzing text to determine the sentiment or emotional tone; often used in social media monitoring and customer feedback analysis.
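As an illustration only, sentiment analysis in its crudest form can be sketched in a few lines of Python. The word lists here are tiny, invented examples; production systems use far larger lexicons or trained language models.

```python
# Tiny illustrative lexicons (invented for this example).
POSITIVE = {"good", "great", "excellent", "helpful", "clear"}
NEGATIVE = {"bad", "terrible", "slow", "confusing", "unhelpful"}

def sentiment(text):
    """Score text by counting positive versus negative words."""
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("The support staff were great and very helpful"))   # → positive
print(sentiment("The response was slow and the answer confusing"))  # → negative
```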
Supervised learning. A form of machine learning in which the model trains on a labeled dataset, learning to make predictions or decisions based on input data.
Unsupervised learning. A form of machine learning in which the model learns patterns and structures in data without labeled examples, often used for clustering and dimensionality reduction.
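To make the supervised/unsupervised distinction concrete, here is a minimal Python sketch (the points, labels, and distance threshold are invented for illustration). The first function predicts a label from labeled examples; the second finds groups in unlabeled data on its own.

```python
import math

# --- Supervised: a labeled training set and a 1-nearest-neighbor classifier ---
training = [((1.0, 1.0), "cat"), ((1.2, 0.8), "cat"),
            ((5.0, 5.2), "dog"), ((4.8, 5.0), "dog")]

def classify(point):
    """Predict the label of the closest labeled training example."""
    return min(training, key=lambda ex: math.dist(point, ex[0]))[1]

# --- Unsupervised: group unlabeled points by proximity; no labels supplied ---
def cluster(points, threshold=2.0):
    """Greedy clustering: a point joins the first cluster whose first member
    lies within `threshold`; otherwise it starts a new cluster."""
    clusters = []
    for p in points:
        for c in clusters:
            if math.dist(p, c[0]) < threshold:
                c.append(p)
                break
        else:
            clusters.append([p])
    return clusters

print(classify((1.1, 0.9)))  # → cat (its nearest labeled neighbors are cats)
groups = cluster([(1.0, 1.0), (1.2, 0.8), (5.0, 5.2), (4.8, 5.0)])
print(len(groups))           # → 2 (two groups emerge without any labels)
```

The point of the contrast: the classifier could never work without the labels, while the clustering function never sees a label at all.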