Comparing the US, UK, and EU Regulatory Approaches to AI
Catherine Barrett
August 10, 2023 Feature

There is a lot of excitement around the promise of machine learning (ML), a subset of artificial intelligence (AI), and wonderment about how it will impact the future of work, economic growth, education, and the practice of medicine and law, among other aspects of society. According to former Google CEO Eric Schmidt, the key accelerant of ChatGPT, an AI chatbot developed by OpenAI, is Reinforcement Learning from Human Feedback (RLHF), a training technique that makes interaction with the chatbot more human-like. RLHF “was the linchpin that brought us to this point,” but it will take a full decade “to fully see the transformative nature of generative AI [ChatGPT technologies].” RLHF acted as an accelerant, propelling the power of AI to the forefront of human consciousness.
As leaders in the legal and public policy communities continue to study the real and possible impacts of AI on society, it is worth listening to leading technologists, such as Eric Schmidt, and to eminent computer scientist Dr. Geoffrey Hinton. An expert on neural networks and deep learning, Dr. Hinton recently characterized AI as comparable in scale to the industrial revolution, electricity, or perhaps the invention of the wheel. AI isn’t, however, magic. It’s math. ML applies mathematical models built from statistics and algorithms to large data sets of text, images, voice, or a combination thereof, and “learns” over time as the system is continuously refined from user prompts. Both machine-generated and human prompts help refine the algorithms, improving the accuracy of the outputs.
One type of ML is the large language model (LLM). LLMs use large amounts of data and natural language processing (NLP) to translate unstructured spoken and/or written language into structured data a computer can process. Spell check, GPS functions in automobiles, search engine results, and chatbots are among the more familiar examples of NLP at work.
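The phrase “structured data a computer can process” is easier to see with a deliberately simplified sketch. The Python snippet below is a hypothetical rule-based parser, not how an LLM actually works (LLMs learn statistical patterns from massive training corpora rather than following hand-written rules), but it illustrates the kind of unstructured-to-structured translation described above.

```python
# Toy sketch (illustration only): turning unstructured text into structured data.
# Real LLMs learn statistical patterns from large corpora; this hypothetical
# rule-based parser only illustrates the input/output shape described above.
import re
from typing import Optional


def extract_appointment(text: str) -> dict[str, Optional[str]]:
    """Pull a few structured fields out of a free-text sentence."""
    date = re.search(r"\b(January|February|March|April|May|June|July|August|"
                     r"September|October|November|December)\s+\d{1,2}\b", text)
    time = re.search(r"\b\d{1,2}(:\d{2})?\s?(am|pm)\b", text, re.IGNORECASE)
    person = re.search(r"\bwith\s+([A-Z][a-z]+(?:\s[A-Z][a-z]+)?)", text)
    return {
        "date": date.group(0) if date else None,
        "time": time.group(0) if time else None,
        "person": person.group(1) if person else None,
    }


if __name__ == "__main__":
    note = "Please schedule a call with Jordan Lee on August 14 at 3pm."
    print(extract_appointment(note))
    # {'date': 'August 14', 'time': '3pm', 'person': 'Jordan Lee'}
```

A production system would replace the hand-written patterns with a trained model, but the shape of the output, discrete fields a downstream program can act on, is the same.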
Growing Demand for—and Concern about—AI
As excitement grows about AI, however, so do concerns. There is tremendous demand from the public to use AI technologies, even before their risks are fully understood and measured. There is a tremendous supply of investment chasing AI-related innovations, pressuring entrepreneurs to race to market. And there is pressure on lawmakers and regulators to act to protect society from risks, many of which are not yet identified or well understood. The Silicon Valley adage of “Move Fast and Break Things” cannot apply here; the risks are too high.
For example, Dr. Hinton, often referred to as the godfather of AI, is concerned about potential risks of certain aspects of AI, such as the (1) rise of fake text and/or images generated from easily accessible and usable AI tools; (2) inability to discern what content is real and what is “fake” (generated using AI tools); (3) risk of bad actors using AI to cause harm (e.g., manipulating elections, instigating violence, or committing financial crimes); and (4) probability that computers will learn how to survive and control themselves. Other AI/ML operational concerns include how to address restricted access to privately owned LLMs (e.g., Google, OpenAI, Large-scale AI Open Network (LAION)); inconsistent or low-quality data used to train large language models; and the lack of controls/guardrails in place to anticipate rapid changes from the speed of NLP systems that “learn as they work and extract ever more accurate meaning from huge volumes of raw, unstructured, and unlabeled text and voice data sets.”
ABA Resolution 604 on AI Technologies
With the advent of ChatGPT and the rapid exploration of AI technologies, the pressure is mounting for a public policy response. In early 2023, the American Bar Association (ABA) showed leadership in this space and passed Resolution 604. Resolution 604 urges “organizations that design, develop, deploy, and use artificial intelligence systems and capabilities” to ensure AI technologies are subject to human authority, oversight, and control; to provide for legal liability for AI products and services; and to ensure transparency and traceability of AI products and services. The ABA’s early assessment of the emerging AI landscape is significant not only for members of the ABA and the legal community, both domestic and international, but also for participants in the AI engine of growth, including business, finance, technology developers, and policymakers.
Government Responses to AI Risks
In response, many governments around the world are drafting AI principles or codes of conduct as a foundational step toward drafting public policies, laws, and/or regulations to address the rise of AI-related risk. International bodies, such as the Organization for Economic Co-operation and Development (OECD), as well as individual countries, such as Canada, China, India, the UK, and the US, have drafted AI principles to date. While there are differences, the top five AI principles these countries and the ABA share in common are (in order of how frequently they are cited): (1) transparency, traceability, and explainability; (2) data protection and/or privacy, and safety and security; (3) challenge and/or redress of AI decisions, and identification and management of risk; (4) prohibition of bias and discrimination, and human accountability; and (5) prohibition of abuse and illegal activities, and keeping a human in the loop. As governments grapple with the inherently borderless applications of AI, these shared principles may lead to an internationally acceptable AI code of conduct and future risk management mechanisms to govern AI.
Government bodies have begun to create AI/ML national and international governance structures and standards to oversee the development and use of AI and to ensure the safety and security of society. The European Union (EU) is the furthest along in developing a comprehensive legislative response to governing AI. On June 14, 2023, the European Parliament (EP), which represents EU citizens, agreed to a modified Artificial Intelligence Act (AI Act) with 499 votes in favor, 28 against, and 93 abstentions. The next step is for the Council of the EU, which represents the EU governments, to consider the draft AI Act and make changes. The AI Act is expected to become law before the end of 2023 and will have a profound impact on how the EU, the UK, the US, and other governments approach regulating AI. Although the US passed the National Artificial Intelligence Initiative Act in 2020, no US law to date matches the breadth and depth of the AI Act. The absence of comparable legislation, however, does not mean the US has been silent.
US AI Policies and Standards
In October 2022, the White House Office of Science and Technology Policy released The Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People to help protect individual rights in the design, use, and deployment of AI technologies in the United States. The document outlined five protections that everyone in America should have: (1) safe and effective AI systems, (2) nondiscriminatory algorithms, (3) data privacy, (4) notice and explanation when an AI system is used, and (5) the option to opt out in favor of a human alternative.
In January 2023, the National Institute of Standards and Technology (NIST) issued the Artificial Intelligence Risk Management Framework (AI RMF 1.0) to help organizations manage risks associated with the use of AI systems and promote trustworthy, responsible development of AI technologies. In May 2023, the White House issued Fact Sheet: Biden-Harris Administration Announces New Actions to Promote Responsible AI Innovation That Protects Americans’ Rights and Safety, announcing seven federal AI research and development institutes; a commitment from several global technology leaders to publicly evaluate AI systems; and draft policy guidance for federal departments and agencies regarding development, use, and procurement of AI technologies expected this summer.
While the executive branch is taking steps to outline a national response to AI, leaders in the House and Senate have announced plans to introduce comprehensive AI-specific legislation before the end of 2023. Any final bill is expected to encompass US AI principles, mirror those of the OECD, UK, and EU, and reflect the AI guidelines found in ABA Resolution 604.
The UK’s Iterative Approach to Regulating AI
Much like the US, the UK lacks a purpose-built national law to regulate AI. However, existing laws and regulations apply to some AI technologies. For example, the UK General Data Protection Regulation (GDPR) and the Data Protection Act 2018 include requirements that apply to “automated decision-making and the broader processing of personal data,” which also encompass “processing for the purpose of developing and training AI technologies.” The UK government is, however, working closely with international bodies, such as the OECD and the International Organization for Standardization (ISO), to develop a coherent approach to AI regulation, policy, and law that transcends physical borders.
The UK government is taking a risk-based, technology-neutral approach to regulating AI, relying on its existing regulators to interpret, design, and apply five AI principles across industry sectors. These five principles (safety, security, and robustness; transparency and explainability; fairness; accountability and governance; and contestability and redress) will influence the development of formal rules in the future. In addition, UK policy directs regulators to be (1) context-specific, meaning AI-related risks will be evaluated within the context of how AI is applied and used; (2) pro-innovation, meaning regulators “will focus on applications of AI that result in real, identifiable, unacceptable levels of risk, rather than seeking to impose controls on uses of AI that pose low or hypothetical risk so we avoid stifling innovation”; (3) coherent, meaning the legal/regulatory system will be “simple, clear, predictable and stable”; and (4) proportionate and adaptable, meaning initial regulatory emphasis will be on guidance and voluntary adherence rather than mandates.
The emphasis on context and proportionality allows regulators the flexibility to assess risk at the point of application, considering the environment in which the AI technology is applied. To further this nuanced approach, UK regulators identify two core characteristics of AI: (1) adaptiveness and (2) autonomy. Adaptiveness refers to the process of “training” AI technologies on vast quantities of large data sets, which, in turn, allows the technology to “execute according to patterns and connections” and “perform tasks without express, intentional programming by humans.” The second characteristic of AI, autonomy, refers to the ability of AI technologies to make decisions not explicitly programmed by humans. These two characteristics of AI are deemed “unique” and form a common understanding of AI among regulators across industry sectors (such as healthcare, finance, and law).
This approach is intended to balance the need to preserve an ecosystem of AI innovation with the need to protect the health and safety of citizens. UK regulators will consider the context in which AI technologies are used, the potential risks, the need to prioritize regulatory coordination, and the need for guidance.
The EU Set to Pass Europe’s First Bespoke AI Law in 2023
The UK and the EU’s 27 Member States share a similar approach to regulating AI, namely a context-specific approach whereby AI technologies are identified and assessed at the application level. Like the UK, the EU is striving to regulate AI by balancing potential societal and economic benefits with the risk of potential harm. Unlike the UK, however, the EU is moving to establish a risk-based “legal framework for trustworthy AI” that is grounded in “EU values and fundamental rights,” “aims to give people and other users the confidence to embrace AI-based solutions,” and encourages continued business development of AI technologies. The AI Act is expected to become law in the EU before the end of 2023 and will be Europe’s first comprehensive, bespoke legal framework to regulate AI-based technologies and the first AI-specific law among the 38 democracies of the OECD.
The AI Act will apply to providers (natural or legal persons, public authorities, agencies, or other bodies) that develop an AI system for distribution or use in the EU market. Significantly, its scope reaches beyond the EU’s borders: it covers providers placing AI systems on the EU market whether they are established in the EU or in a third country, users of AI systems located within the EU, and providers and users located in a third country where the output produced by the AI system is used in the EU.
The EU AI Act proposes four levels of risk: (1) unacceptable risk; (2) high risk; (3) limited risk; and (4) minimal or no risk. The risk profile of an AI system will guide the level of trust and confidence in AI-based solutions. Unlike the UK approach, the EU AI Act specifies what constitutes AI techniques and approaches, namely: (a) a variety of ML approaches (e.g., supervised, unsupervised, and reinforcement learning); (b) logic- and knowledge-based approaches; and (c) statistical approaches, Bayesian estimation, and search and optimization methods. An AI system is software that is developed with one or more of these techniques and approaches and “can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with.”
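To make the first of those categories concrete, here is a minimal supervised machine-learning sketch in Python using the widely available scikit-learn library. The scenario (a credit-style approval model) and all feature values are invented for illustration; they are not drawn from the AI Act.

```python
# Minimal supervised-learning sketch (illustrative only): a model is "trained"
# on labeled historical examples and then generates predictions for new inputs,
# the kind of output ("predictions, recommendations, or decisions") the AI Act
# describes. Feature values and labels below are invented for illustration.
from sklearn.linear_model import LogisticRegression

# Each row: [applicant income in thousands, existing debt in thousands]
X_train = [[40, 5], [85, 10], [30, 20], [120, 15], [25, 18], [95, 2]]
y_train = [1, 1, 0, 1, 0, 1]  # 1 = loan repaid, 0 = default (historical labels)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)          # "learning" step: fit parameters to the data

new_applicant = [[50, 12]]
print(model.predict(new_applicant))        # predicted class, e.g. [1]
print(model.predict_proba(new_applicant))  # estimated probabilities for each class
```

Under the Act’s approach, the risk tier for such a system would turn on how and where it is deployed, not on the technique alone.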
Unlike the UK approach, the EU AI Act details a list of prohibited AI practices and systems. For example, AI systems that use “subliminal techniques . . . to materially distort a person’s behavior” that is likely “to cause that person or another person physical or psychological harm” are prohibited. AI systems that use “real-time biometric identification systems” in public for law enforcement purposes are prohibited (exceptions apply). In addition, the EU AI Act describes characteristics of AI high-risk systems, those with the potential to pose safety concerns or undermine fundamental rights of citizens. For example, AI systems that manage and operate critical infrastructure are deemed high-risk, as are AI systems used for migration, asylum, and border control management. All high-risk AI systems must comply with requirements/obligations specified in the EU AI Act.
The AI Act establishes governance frameworks at both the EU and Member State levels. At the EU level, the AI Act creates a European Artificial Intelligence Board (the Board) composed of the heads of the national supervisory authorities and the GDPR-mandated European Data Protection Supervisor. The Board will advise the European Commission (EC), the EU’s executive body, and issue opinions, recommendations, or written contributions on matters related to the implementation of the AI Act, especially on technical specifications and/or standards. The Board will also provide Member States with best practices.
Each of the 27 Member States will be required to establish administrators—national competent authorities—to implement the AI Act and designate a national supervisory authority to oversee the AI market, among other tasks. All Member States will also be required to establish a notification body to verify high-risk AI systems are conforming to assessment procedures and fulfilling other organizational, quality management, and process requirements.
Unlike the UK approach, the EU AI Act specifies standards, conformity assessments, and certification and registration requirements for high-risk AI systems. In addition, there are transparency requirements for certain AI systems. For example, notice must be given to a natural person if that person interacts with an AI system (exceptions apply). Engaging online via instant messaging with a chatbot providing information related to your credit score would very likely require a notice to the user that they are engaged in a conversation with an AI technology, not a human. Also, unlike the UK approach, the AI Act details various “innovation measures of support” that Member States must adopt, such as “AI regulatory sandboxes,” a controlled environment to develop, test, and evaluate AI systems prior to being taken to market or operationalized.
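As a rough sketch of how a provider might operationalize that notice obligation, the snippet below prepends an AI disclosure to a chatbot’s first reply. The function names and disclosure wording are invented for illustration; the AI Act does not prescribe specific text or code.

```python
# Hypothetical sketch: prepend an AI-use disclosure to chatbot replies so users
# know they are interacting with an AI system, not a human. Names and wording
# are invented for illustration.
AI_DISCLOSURE = ("Notice: you are chatting with an automated AI assistant, "
                 "not a human agent.")


def generate_reply(user_message: str) -> str:
    """Stand-in for the real model call that produces a chatbot answer."""
    return f"Here is some general information about: {user_message}"


def chatbot_respond(user_message: str, first_turn: bool) -> str:
    """Attach the disclosure on the first turn of the conversation."""
    reply = generate_reply(user_message)
    return f"{AI_DISCLOSURE}\n{reply}" if first_turn else reply


if __name__ == "__main__":
    print(chatbot_respond("How is my credit score calculated?", first_turn=True))
```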
The AI Act establishes a publicly available EU database containing high-risk AI system registration data. Providers—developers of AI systems—are required to input “meaningful information about their systems” and the results of “conformity assessments carried out on those systems” into the database. Providers of high-risk AI systems must also implement post-market monitoring systems to collect, document, and analyze performance-related data. The AI Act specifies post-market monitoring system requirements. Providers must also report serious incidents and malfunctioning of high-risk AI systems “to the market surveillance authorities of the Member States where the incident or malfunction occurred” no later than 15 days after becoming aware of the incident or malfunctioning high-risk AI system. In addition, providers of AI systems and/or organizations representing them may develop codes of conduct and apply them voluntarily to non-high-risk AI systems. The codes of conduct would include legal requirements for high-risk AI systems (e.g., data governance, documentation, transparency).
The AI Act includes several penalty provisions:
- For infringements against explicitly prohibited AI practices or noncompliance with high-risk AI data and data governance requirements, administrative fines of up to €30 million or, if the offender is a company, up to 6 percent of its total worldwide annual turnover for the preceding financial year, whichever is greater.
- For other violations of any requirements or obligations, administrative fines up to €20 million or, if the offender is a company, up to 4 percent of its total worldwide annual turnover for the preceding financial year, whichever is greater.
- If incorrect, incomplete, or misleading information is provided to notified bodies and national competent authorities, administrative fines of up to €10 million or, if the offender is a company, up to 2 percent of its total worldwide annual turnover for the preceding financial year, whichever is greater.
Member States are directed to promulgate rules on penalties but, in doing so, to consider the interests and economic viability of small-scale providers and start-ups.
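Because each tier is expressed as the greater of a fixed cap and a percentage of worldwide annual turnover, the arithmetic can be sketched in a few lines of Python. The tier values come from the list above; the turnover figure is invented for illustration.

```python
# Sketch of the "whichever is greater" fine arithmetic in the draft AI Act's
# penalty tiers above. Caps and percentages come from the article; the example
# turnover figure is invented for illustration.
PENALTY_TIERS = {
    "prohibited_practices_or_data_governance": (30_000_000, 0.06),
    "other_requirements_or_obligations":       (20_000_000, 0.04),
    "incorrect_or_misleading_information":     (10_000_000, 0.02),
}


def max_fine(tier: str, worldwide_annual_turnover_eur: float) -> float:
    """Return the maximum administrative fine for a tier: the greater of the
    fixed cap or the percentage of worldwide annual turnover."""
    fixed_cap, pct = PENALTY_TIERS[tier]
    return max(fixed_cap, pct * worldwide_annual_turnover_eur)


# Example: a company with €2 billion in worldwide annual turnover.
print(max_fine("prohibited_practices_or_data_governance", 2_000_000_000))
# 120000000.0 -> 6 percent of turnover (€120M) exceeds the €30M fixed cap
```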
Need for International and Cross-Sector Cooperation on AI
Governments are grappling with how to harness the potential economic and societal benefits of AI while developing the policies, regulations, and laws needed to minimize its potential economic and societal harms. Much like the US-led, internationally adopted Fair Information Practice Principles (FIPPs), there are calls for a shared understanding of AI principles among representatives from the US, UK, and EU: a standard frame of reference that transcends physical borders and reflects a shared commitment to trust, fairness, safety, transparency, and accountability. A common, FIPPs-like set of AI principles would inform the policies and practices required for international trade, foreign direct investment, regulation, and law, helping to harness the full promise of AI while mitigating its myriad risks.
Emerging Consensus Among the US, UK, and EU on the Following AI Principles
- Accountability: Ensure a person or corporation is accountable for the proper functioning of AI services, systems, and technologies (SST).
- Transparency: Ensure AI SST logic and decision-making are meaningfully explained and disclosed.
- Fairness: Respect the rule of law, human rights, and democratic values.
- Safety & Security: Ensure AI SST function appropriately and do not pose unreasonable safety risks.
The UK government, for example, is taking a risk-based, iterative approach to regulating AI, beginning with defining and implementing a set of AI principles across industry sectors. Eventually, the UK will develop and implement one or more bespoke AI laws. But this iterative approach provides the UK with feedback from regulators and time to develop and implement a more formal AI legal framework.
The EU, on the other hand, is on the cusp of passing a comprehensive, bespoke AI Act. In keeping with the UK approach, the AI Act is risk-based and shares the UK and OECD AI principles. With expected passage of the AI Act before the end of 2023, the EU will be the first among Western countries to pass a comprehensive AI-specific law. Unlike the EU, the UK’s policy is to empower existing regulators of industry sectors (such as communications) to assess AI-specific risks rather than to pass and implement a new AI-specific law. While the United States has no AI-specific law to date, Congress is expected to introduce legislation before the end of 2023, and NIST and the White House are expected to continue providing guidance and resources. As noted above, the ABA is playing an active role in the evolving US AI legal and policy landscape, influencing lawmakers and regulators both at home and abroad. ABA members and the legal profession at large have underscored the need for a more coordinated approach to AI principles, legal frameworks, and regulations.
If this technology truly is as revolutionary as Dr. Hinton suggests, yet as risky as Sam Altman, CEO of OpenAI, recently characterized it (“If this technology goes wrong, it can go quite wrong”), then the breadth and depth of legal and regulatory responses must go beyond the AI Act, requiring greater engagement, cooperation, and coordination among the 27 EU Member States, the UK, the United States, and other global actors. Governments alone do not have the power to control AI. It will take a concerted effort involving stakeholders from across government, academia, nonprofits, and the private sector to properly understand and govern AI.