AI Is One Thing and Many Things
The terms “artificial intelligence” and “AI” are often bandied about as if there were a single, universally understood definition. In reality, AI is not one technology. A simple Google search of “what is AI?” turns up 4.3 billion results.
AI is an umbrella term that covers multiple concepts and technologies, used individually and in combination, to add intelligence to computers and machines. There is little or no standardization in how the components or building blocks of AI are organized into a universally accepted taxonomy.
With this in mind, a working definition of AI is needed as well as a nontechnical framework for understanding what can be considered “AI building blocks.”
Defining Artificial Intelligence—General
Let’s start with a simple but functional definition of AI:
Artificial Intelligence (AI) is an area of computer science that emphasizes the creation of machines that work and react like humans. This means systems that have the ability to depict or mimic human brain functions including learning, speech (recognition and generation), problem solving, vision and knowledge generation.7
AI is a constellation of technologies that enable computers and other devices to sense, comprehend, act, and learn. Unlike IT systems of the past that merely generated or stored data, the value of AI systems is that they can learn from and adapt to data and complete tasks in ways similar to how a human would. In this regard, AI imbues machines with intelligence.
AI Building Blocks
The general definition offered above is a descriptor for what AI is. The components of AI described below are the “how” part of the equation. These are the functional capabilities provided by AI today.
For the sake of providing a framework for understanding AI capabilities, let’s take the broad definition noted above and break it down further into AI building blocks.
Machine Learning (ML)
Ask anyone today what type of AI project they are working on and the most likely answer will involve prediction using machine learning (ML). ML gives software, machines, devices, and robots the ability to learn without explicit human intervention or static program instructions.
Machine learning evolved from the study of pattern recognition and computational learning theory. The term was coined by AI pioneer Arthur Lee Samuel in 1959, who defined it as a “field of study that gives computers the ability to learn without being explicitly programmed.”8 Instead of defining a set of static rules, an ML model learns rules from a set of training data, then applies its learning to analyze and make predictions based on new data.
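To make the contrast with static rules concrete, here is a minimal sketch using the scikit-learn library; the features (age and systolic blood pressure), labels, and data are illustrative assumptions, not a real clinical model.

```python
# A minimal sketch of "learning rules from data" with scikit-learn.
# The toy features (age, systolic blood pressure) and risk labels are
# invented for illustration, not drawn from any real clinical model.
from sklearn.tree import DecisionTreeClassifier

# Training data: each row is [age, systolic_bp]; labels mark high risk (1) or not (0).
X_train = [[34, 118], [61, 150], [45, 132], [70, 165], [29, 110], [55, 142]]
y_train = [0, 1, 0, 1, 0, 1]

# Instead of hand-coding a static rule like "if bp > 140 then high risk",
# the model infers its own decision rules from the training examples.
model = DecisionTreeClassifier(max_depth=2)
model.fit(X_train, y_train)

# The learned rules are then applied to new, unseen data.
print(model.predict([[50, 138]]))  # outputs [0] or [1], per the learned rules
```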
Cognitive Services
Beyond the power of making predictions through machine learning, a growing number of applications and solutions can be categorized as cognitive services. As the name implies, these AI building blocks mimic specific human functions.
Computer Vision: Computer vision is a field of computer science that enables computers to identify objects in images, mimicking what human vision does. It is a form of AI because the computer must interpret what it sees.
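As a rough illustration, the sketch below classifies an image with a pre-trained network; it assumes the PyTorch and torchvision libraries (torchvision 0.13 or later) and a placeholder image file. A clinical system would use a model trained on medical images rather than everyday photographs.

```python
# A minimal sketch of object recognition with a pre-trained network,
# assuming torchvision >= 0.13; "photo.jpg" is a placeholder file name.
# A clinical system would use a model trained on medical images.
import torch
from PIL import Image
from torchvision import models

weights = models.ResNet18_Weights.DEFAULT
model = models.resnet18(weights=weights)
model.eval()

# Apply the same preprocessing the network was trained with.
preprocess = weights.transforms()
image = preprocess(Image.open("photo.jpg").convert("RGB")).unsqueeze(0)

with torch.no_grad():
    scores = model(image)

# Map the highest-scoring class index back to a human-readable label.
label = weights.meta["categories"][scores.argmax(dim=1).item()]
print(label)
```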
Knowledge Extraction: Knowledge extraction refers to the identification, extraction, and organization of specific information and knowledge from large amounts of preexisting data and information. The data are often unstructured—in other words, they are simply written as text rather than structured into a particular format or database. As the amount of data and information in healthcare grows, the ability to extract and mine these data to acquire new knowledge becomes vitally important. For example, medical records often include unstructured data, such as medical providers’ case notes, that must be analyzed and converted into a structured form before further analysis.
Speech: Converting speech to text and vice versa on the fly makes it possible to understand user intent and interact with patients and consumers in a more natural way. Implementing speech recognition and translation features in applications and workflows makes an automated process more human by enabling automated systems to understand what a person is saying. The speech component of AI is getting significant uptake in healthcare today.
Language Understanding: Language understanding allows a computer application to determine what a person is saying and wants, expressed in the person’s own words. Language understanding and speech processing work hand in hand: typically, speech is converted to text, run through a language understanding process, and then converted back to speech.
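The sketch below illustrates that round trip; it assumes the SpeechRecognition and pyttsx3 libraries and a placeholder audio file, and the detect_intent function is a hypothetical stand-in for a real language understanding service.

```python
# A minimal sketch of the speech -> text -> understanding -> speech loop,
# assuming the SpeechRecognition and pyttsx3 libraries. detect_intent is
# a hypothetical placeholder for a real language understanding service.
import speech_recognition as sr
import pyttsx3

def detect_intent(text: str) -> str:
    # Hypothetical keyword-based stand-in for a language understanding model.
    if "refill" in text.lower():
        return "Your prescription refill request has been sent to the pharmacy."
    return "Sorry, I didn't understand that. Could you rephrase?"

recognizer = sr.Recognizer()
with sr.AudioFile("patient_request.wav") as source:  # placeholder audio file
    audio = recognizer.record(source)

text = recognizer.recognize_google(audio)   # speech to text
reply = detect_intent(text)                 # language understanding

engine = pyttsx3.init()                     # text back to speech
engine.say(reply)
engine.runAndWait()
```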
Natural Language Processing (NLP): NLP enables computers to derive computable and actionable data from text, especially when text is recorded in the form of natural human language (i.e., phrases, sentences, paragraphs). This technology allows humans to record information in the most natural method of human communication (narrative text), and then enables computers to extract actionable information from that text. NLP is an important component of knowledge extraction and other AI capabilities.
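As a simple illustration, the sketch below pulls named entities out of a short free-text note; it assumes the spaCy library and its small general-purpose English model, and the note itself is invented. Real clinical extraction would rely on a model trained for medical language.

```python
# A minimal sketch of deriving structured, actionable data from narrative
# text, assuming the spaCy library and its small English model
# (en_core_web_sm, installed separately). The note text is invented; a
# real system would use a model trained on clinical language.
import spacy

nlp = spacy.load("en_core_web_sm")
note = ("Patient seen on March 3 at Mercy General Hospital. "
        "Reports chest pain for two days; referred by Dr. Alvarez.")

doc = nlp(note)
for ent in doc.ents:
    # Each entity carries a type label such as DATE, ORG, or PERSON.
    print(ent.text, ent.label_)
```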
Text Analytics: Text analytics provides natural language processing over raw text for sentiment analysis (assessing affect or emotions), key phrase extraction, and language detection.
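To make this concrete, here is a minimal sentiment-scoring sketch; it assumes the NLTK library and its bundled VADER analyzer, and the patient comment is invented.

```python
# A minimal sketch of sentiment scoring over raw text, assuming NLTK's
# bundled VADER analyzer. The patient comment is invented.
import nltk
nltk.download("vader_lexicon")  # one-time download of the sentiment lexicon
from nltk.sentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()
comment = "The nurses were wonderful, but the wait time was frustrating."

# Returns negative/neutral/positive proportions and a compound score in [-1, 1].
print(analyzer.polarity_scores(comment))
```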
Search: Search is one of the most important services in nearly every application or solution today. A search service is only as useful as its ability to surface the most relevant results first.
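One common approach is to rank documents by how closely their vocabulary matches the query. The sketch below illustrates this with TF-IDF vectors; it assumes the scikit-learn library, and the document snippets are invented.

```python
# A minimal sketch of relevance-ranked search using TF-IDF vectors,
# assuming scikit-learn. The document snippets are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "Guidelines for managing type 2 diabetes with diet and exercise.",
    "Post-operative care instructions for knee replacement patients.",
    "Screening recommendations for early detection of breast cancer.",
]

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(documents)

# Score each document against the query and print the best matches first.
query_vector = vectorizer.transform(["diabetes diet recommendations"])
scores = cosine_similarity(query_vector, doc_vectors).ravel()
for i in scores.argsort()[::-1]:
    print(f"{scores[i]:.2f}  {documents[i]}")
```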
Cognitive services can be used as stand-alone applications but are often used in combination. Today healthcare is experiencing a data explosion. One study estimates that medical knowledge is now doubling every 73 days.9 The use of NLP, knowledge extraction, text analytics, and search allows clinicians, researchers, and others to harness massive amounts of data to improve diagnostic and treatment processes.
Do Submarines Swim?
If you were really sick, how would you feel about being under the care of a doctor or nurse who is brilliant at the “science” of medicine but unable to explain why he or she is recommending a treatment, clueless about what you are feeling, and not adept at managing the “softer side” of care delivery?
Such is the case with artificial intelligence today.
While growing capabilities are widening the value proposition for AI in health, there are key areas where it falls short. To sharpen your sense of what AI is good at and what it is not, consider this question: While everyone’s talking about artificial intelligence, have you ever heard anyone talk about artificial wisdom?
Smart machines can recognize certain things that are fact, logic, or pattern based but are unable to discern many situations that humans recognize as common sense. AI can describe how a submarine is propelled through the water but can’t differentiate this from swimming. A smart machine can sense or predict temperature variation but does not know how a patient feels when he or she has a fever. Measuring or predicting spikes in blood pressure does not equate to understanding what anxiety feels like for a patient or family member and what to do about it.
As “smart” as AI is becoming at certain things, no one has figured out how to imbue machines with those qualities essential to the care process like wisdom, reasoning, judgment, imagination, critical thinking, common sense, and empathy. Such attributes remain uniquely human.
Innovative leaders understand the differences in capabilities between humans and intelligent machines and will define and execute AI plans that leverage both to create performance loops. A performance loop is created when humans and machines collaborate to get the best from each. As this happens, the quality and effectiveness of health services improve.
This juxtaposition of humans collaborating with intelligent machines becomes the nexus for many of the legal, regulatory, and ethical issues that are emerging. Many go beyond the security and compliance rules and regulations with which health organizations must comply today, such as the Health Insurance Portability and Accountability Act of 1996 (HIPAA) in the United States or the General Data Protection Regulation (GDPR) in Europe. And while AI is governed by the same laws and regulations as any system involving personal health information (PHI), there are new issues arising from AI that are not yet fully addressed by existing laws and regulations.
Beyond the typical legal and regulatory issues arising from the use of technology and PHI, here are some of the key issues that are coming forward for which guiding principles are needed.10
Fairness—AI Systems Should Treat All People Fairly
AI systems should treat everyone in a fair and balanced manner and not affect similarly situated groups of people in different ways. For example, when AI systems provide guidance on medical treatments, they should make recommendations that are accurate for everyone with similar symptoms. If designed properly, AI can help make decisions that are fairer because computers are purely logical and, in theory, are not subject to the conscious and unconscious biases that inevitably influence human decision-making.
And yet, AI systems are designed by human beings and the systems are trained using data that reflect the imperfect world in which we live. A recent study published in the New England Journal of Medicine cites numerous examples of bias being found in predictive algorithms used to make clinical treatment decisions today.11
Without careful attention, AI systems can wind up operating unfairly due to bias that enters the system in a variety of ways, including incorrect correlational assumptions, training data sets that are not representative of the broader population, or biases carried over from the humans creating an algorithm.
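One practical safeguard is to audit a model’s performance separately for each group it affects. The sketch below shows the idea; it assumes the pandas library, and the groups, predictions, and outcomes are invented for illustration.

```python
# A minimal sketch of a per-group performance audit, assuming pandas.
# The groups, predictions, and outcomes are invented; a real audit would
# use the model's actual outputs and protected-attribute data.
import pandas as pd

results = pd.DataFrame({
    "group":      ["A", "A", "A", "B", "B", "B"],
    "prediction": [1, 0, 1, 0, 0, 1],
    "actual":     [1, 0, 0, 1, 0, 1],
})

# A large gap in accuracy between groups is a red flag for bias.
accuracy_by_group = (
    results.assign(correct=results.prediction == results.actual)
           .groupby("group")["correct"].mean()
)
print(accuracy_by_group)
```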
Reliability—AI Systems Should Perform Reliably and Safely
AI-enabled systems that are deployed in healthcare offer great promise, but also the potential for injury or even death if they do not operate reliably (produce consistent results) and safely (do not introduce new risks). In managing the issues of reliability and safety, the healthcare sector has a head start compared to other sectors in that many of the AI systems envisioned will be considered medical devices and subject to existing and new regulations through organizations such as the Food and Drug Administration.
The complexity of AI technologies has fueled fears that AI systems may cause harm in the face of unforeseen circumstances, or that they can be manipulated to act in harmful ways. As is true for any technology, trust will ultimately depend on whether AI-based systems can be operated reliably, safely, and consistently—not only under normal circumstances but also in unexpected conditions or when they are under attack.
This begins by demonstrating that systems are designed to operate within a clear set of parameters under expected performance conditions. In all cases there should be a way to verify that they are behaving as intended under actual operating conditions. This means consistently producing the correct or intended results.
Privacy and Security—AI Systems Should Be Secure and Respect Privacy
As our lives generate more and more data, the question of how to preserve the privacy and security of our personal data is becoming more important and more complicated. While protecting privacy and security is important to all technology developments, recent advances require that we pay even closer attention to these issues to create the levels of trust needed to realize the full benefits of AI.
As we collect an increasing volume of sensitive data about people through an expanding array of devices, we will have to do more to ensure that these data are stored in secure systems. Such systems will be managed by stewards who will be guided by clear rules that protect these sensitive data from improper uses. At the same time, such systems will need to be managed in ways that enable new AI-powered innovations that benefit individual patients and society as a whole.
These dual objectives of security (ensuring that unauthorized parties cannot access the data) and privacy (ensuring that no party, authorized or not, uses the data for a nonpermitted purpose) are increasingly intertwined with the technology platforms on which the data are captured, stored, processed, and retrieved.
From a security standpoint, modern cloud platforms enable sensitive data sets to benefit from massive security investments by the companies that build and operate these systems. From a privacy standpoint, these modern cloud systems provide a deep and nuanced set of technical controls that allow data stewards to control access at a granular level as well as to create robust access logs that enable audits to ensure data have not been improperly accessed or used.
Inclusiveness—AI Systems Should Empower Everyone and Engage People
If we are to ensure that AI technologies benefit and empower everyone, they must incorporate and address a broad range of human needs and experiences. Inclusive design practices will help system developers understand and address potential barriers in a product or environment that could unintentionally exclude people. This means that AI systems should be designed to understand the context, needs, and expectations of the people who use them.
Transparency and Accountability
Underlying the principles of reliability, fairness, and security are two fundamental principles: transparency and accountability. Because decisions made by AI health systems will impact patients’ health and care, it is particularly important that everyone relying on these systems (healthcare professionals, patients, managed-care organizations, regulators) understands how the systems make decisions.
Equally important, as AI health systems play a greater role in both diagnosis and selection of treatment options by healthcare professionals, we will need to work through existing rules around accountability, including liability. As a threshold matter, these systems should provide “holistic” explanations that include contextual information about how the system works and interacts with data. Doing this enables the medical community to identify and raise awareness of potential bias, errors, and other unintended outcomes.
AI health systems may create unfairness if healthcare professionals do not understand the limitations (including accuracy) of a system or misunderstand the role of the system’s output. Even if it is difficult for users to understand all the nuances of how a particular algorithm functions, healthcare professionals must be able to understand the clinical basis for recommendations generated by AI systems.
Transparency is not just about how the AI system explains its results; it is also about teaching healthcare providers and users how to interrogate those results. The goal is to ensure that doctors and others relying on these systems understand the limitations of the systems and do not put undue reliance on them.
The creation and use of new technologies sometimes gets ahead of the lawmakers and regulators creating standards by which society can best benefit from such breakthroughs. One only needs to look at the advent of the Internet to understand the likely trajectory of legal and ethical issues arising from AI.
In 1998, as the Internet began to go mainstream with consumers and businesses, one would have been hard-pressed to find a full-time “privacy lawyer.” This legal discipline began to emerge as issues came forward and governments began assessing and creating privacy laws and regulations to guide and govern the appropriate use of consumer and patient data.12
Today, the International Association of Privacy Professionals, or IAPP (founded in 2000), has over 20,000 members in 83 countries. There’s no shortage of topics for IAPP members to discuss, including questions of corporate responsibility and even ethics when it comes to the collection, use, and protection of consumer information.13
Going forward, the real question is not whether AI law will emerge, but how it can best come together and over what timeframe. Just as the Internet gave birth to new public policies and regulations, artificial intelligence is spawning a new set of issues for governments and regulators and a new set of ethical considerations in the fields of health, medicine, computer science, and law. Similarly, the future will likely give birth to a new legal field called “AI law.” With this in mind, it’s safe to assume that not only will there be AI lawyers practicing AI law, but these lawyers, and virtually all others, will rely on AI itself to assist them with their practice.
Endnotes
1. Cliff Saran, Microsoft and Google Join Forces on Covid-19 Dataset, Comput. Wkly. (Mar. 17, 2020), https://www.computerweekly.com/news/252480156/Microsoft-and-Google-join-forces-on-Covid-19-dataset.
2. Patrick Kulp, Microsoft Is Powering the CDC’s Coronavirus Assessment Bot, Adweek (Mar. 25, 2020), https://www.adweek.com/digital/microsoft-is-powering-the-cdcs-coronavirus-assessment-bot/.
3. COVID Community Vulnerability Map, JVION, https://covid19.jvion.com/.
4. Mihaela Porumb, Ernesto Iadanza, Sebastiano Massaro & Leandro Pecchia, A Convolutional Neural Network Approach to Detect Congestive Heart Failure, 55 Biomed. Signal Processing & Control J. 101597 (Jan. 2020); Anne D’Innocenzio & Tom Murphy, Walmart’s Sam’s Club Launches Health Care Pilot to Members, U.S. News (Sept. 26, 2019, 4:55 P.M.), https://www.usnews.com/news/us/articles/2019-09-26/walmarts-sams-club-launches-health-care-pilot-to-members.
5. Nielsen Co., Connected Commerce: Connectivity Is Enabling Lifestyle Evolution (Nov. 2018), https://www.nielsen.com/us/en/insights/reports/2018/connected-commerce-connectivity-is-enabling-lifestyle-evolution.html.
6. Creative Destruction, Wikipedia, https://en.wikipedia.org/wiki/Creative_destruction (last visited 2019).
7. Tom Lawry, AI in Health: A Leader’s Guide to Winning in the New Age of Intelligent Health Systems (HIMSS & CRC Press 2020).
8. J.A.N. Lee, Computer Pioneers: Arthur Lee Samuel, IEEE Comput. Soc’y, https://history.computer.org/pioneers/samuel.html.
9. Peter Densen, Challenges and Opportunities Facing Medical Education, 122 Transactions of Am. Clinical & Climatol. Ass’n 48 (2011).
10. Issues and principles noted are derived from the book The Future Computed. Brad Smith & Harry Shum, Foreword, in Microsoft, The Future Computed: Artificial Intelligence and Its Role in Society (2018), https://news.microsoft.com/futurecomputed.
11. Darshali A. Vyas, Leo G. Eisenstein & David S. Jones, Hidden in Plain Sight—Reconsidering the Use of Race Correction in Clinical Algorithms, New Eng. J. Med. (June 17, 2020), https://www.nejm.org/doi/full/10.1056/NEJMms2004740.
12. Smith & Shum, supra note 10.
13. Id.