May 09, 2024 Feature

Navigating the Patchwork of AI Laws, Standards, and Guidance

Emily Maxim Lamm

The opening weeks of 2024 have seen a record number of state legislative proposals seeking to regulate artificial intelligence (AI) across different sectors in the United States. For example, in light of the upcoming presidential election, a handful of proposals focus on imposing limitations and requirements on the use of generative AI in the context of election campaigns. Meanwhile, on January 8, 2024, Indiana proposed S.B. 7, which would impose prohibitions on the dissemination of media created by generative AI technology, and on January 11, 2024, Georgia proposed H.B. 887, a bill that would prohibit the use of AI in making certain insurance coverage decisions. And several states, including Florida, Kentucky, Virginia, Washington, and West Virginia, have proposed bills creating AI task forces. At the same time, Congress is facing increased pressure to pass AI legislation to tackle an array of potential risks, particularly in light of recent media firestorms surrounding deepfakes of celebrities and robocalls impersonating presidential candidates.

With this type of rapid-fire start to the 2024 legislative season, the AI legal landscape will likely continue evolving across the board. As a result, organizations today are facing a complex and dizzying web of proposed and existing AI laws, standards, and guidance.

This article aims to provide a cohesive overview of this AI patchwork and to help organizations navigate this increasingly intricate terrain. The focus here will be on the implications of the White House AI Executive Order, existing state and local laws in the United States, the European Union’s AI Act, and, finally, governance standards to help bring these diverse elements together within a framework.

The AI Executive Order

On October 30, 2023, the Biden administration took a monumental step in releasing the Executive Order on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (the AI Executive Order). This landmark AI Executive Order leverages the federal government’s significant role as a purchaser of AI software and hardware to establish guardrails and requirements regarding the development and deployment of AI. The 111-page order encourages the use of AI throughout the government, directing federal agencies to issue guidance in the coming months.

The Department of Labor, for instance, is directed to “develop and publish principles and best practices for employers that could be used to mitigate AI’s potential harms to employees’ well-being and maximize its potential benefits.” Although merely guidance, these best practices and principles regarding AI in employment likely provide insight into how the agency will approach AI-related enforcement actions in the future. Meanwhile, the AI Executive Order tasks the Secretary of Commerce with requiring companies developing dual-use foundation models to report ongoing or planned activities related to training, development, or production of such models. U.S.-based Infrastructure as a Service providers are also required to submit reports to the Secretary of Commerce when a foreign person transacts with them to train a large AI model with potential capabilities that could be used in malicious cyber-enabled activity. In addition, the AI Executive Order requires the Under Secretary of Commerce for Intellectual Property and Director of the U.S. Patent and Trademark Office to develop guidance on inventorship and patent eligibility, the scope of protection for works produced using AI, and the treatment of copyrighted works in AI training. The AI Executive Order also builds on the White House’s Voluntary AI Commitments, including by tasking the National Institute of Standards and Technology (NIST) with the development of guidelines for performing AI red-teaming (i.e., structured adversarial testing) of foundation models.

While the AI Executive Order is primarily focused on the federal government and those developing the most potent AI systems, the standards it creates are likely to impact organizations in the private sector. This is especially the case given that the federal government is already a significant AI consumer, which will inevitably influence how vendors that sell to the government develop their AI systems. Notably, the White House’s Office of Management and Budget (OMB) issued draft guidance to the federal government regarding its own use of AI, which may have a precedential impact on other legislation coming down the pipeline. The OMB draft memorandum focuses on safety- and rights-impacting AI systems (i.e., systems with consequential and significant effects) and proposes requirements with respect to opt-out rights, notification, and impact assessments, among others.

Existing Patchwork of U.S. AI Laws

Amid these national developments, existing U.S. state and local laws, especially in New York City, Illinois, Maryland, Colorado, and California, contribute to the AI regulatory compliance headache for organizations.

In the context of AI in the workplace, there are three existing laws with a focus on hiring. First, Illinois regulates the use of AI video interview analysis by imposing advance notice requirements about the use of AI and how it works, requiring consent from applicants, and providing applicants with the right to request that their video interview be deleted. Illinois also imposes data collection and reporting requirements on employers relying solely upon AI video analysis to determine whether an applicant is selected for an in-person interview. Similarly, Maryland requires employers to obtain consent for the use of facial recognition services in applicant interviews. Meanwhile, on July 5, 2023, New York City’s Department of Consumer and Worker Protection began enforcing Local Law 144, the broadest law governing AI in employment in the United States. Local Law 144 prohibits employers from using an automated employment decision tool (AEDT) in hiring and promotion decisions unless the tool has been the subject of a bias audit based on race, sex, and ethnicity, conducted by an “independent auditor” no more than one year prior to use. The law also imposes certain posting and notice requirements with respect to applicants and employees who are subject to the use of an AEDT.

Further, when deploying AI systems in the workplace, data privacy laws also must be taken into account. As of January 1, 2023, the personal information of employees, job applicants, and independent contractors became subject to the California Consumer Privacy Act (CCPA), as amended by the California Privacy Rights Act (CPRA). Under the CPRA, employers must provide notice about the collection of employment-related personal information, how that data is used, and the period for which the data will be retained, among other requirements. Meanwhile, Illinois’s Biometric Information Privacy Act requires informing individuals that a biometric identifier (e.g., a fingerprint or retina scan) or biometric information is being stored or collected, obtaining a written release from the individuals subject to the storage or collection, and publishing a written policy with a retention schedule and guidelines for destroying biometric identifiers and information.

In a different sector, the Colorado Division of Insurance implemented a final regulation, effective on November 14, 2023, requiring life insurers operating in Colorado to integrate AI governance and risk-management measures. Under this regulation, insurers must remediate any instances of detected unfair discrimination, conduct a comprehensive gap analysis and risk assessments, and comply with documentation requirements, including maintaining an up-to-date inventory of AI models and documenting material changes, bias assessments, ongoing monitoring, vendor selection processes, and annual reviews.

With the slew of sector-specific AI proposals across state legislatures, this patchwork is likely to continue growing.

Global Implications of the EU AI Act

Moving beyond U.S. borders, the European Union’s Artificial Intelligence Act (EU AI Act) stands out as a pioneering effort in comprehensive AI legislation. On December 8, 2023, EU legislators reached a political agreement on the EU AI Act, and on February 2, 2024, the member states of the EU unanimously voted to move forward with it. The EU AI Act’s comprehensive legislative framework aims to regulate AI across sectors and industries and, given its extraterritorial effect, may have far-reaching implications for organizations globally if they do business in the EU. The EU AI Act takes a risk-based approach, establishing requirements for AI systems according to their potential risk and their impact on fundamental rights.

An AI system is categorized as “high risk” if it poses a significant risk to an individual’s health, safety, or fundamental rights and is used, or intended to be used, in certain critical areas, such as employment, public services, education, critical infrastructure, law enforcement, border control, and the administration of justice. High-risk systems are subject to an array of compliance obligations, including technical documentation, data governance, human oversight, recordkeeping, conformity assessments, a risk management system, post-market monitoring, and fundamental rights impact assessments. So-called general purpose AI (GPAI) models (i.e., foundation models) posing a systemic risk—presumed when a model is trained using a total computing power of more than 10²⁵ floating-point operations—are subject to additional rules, including model evaluations, adversarial testing, mitigation of systemic risks, and reporting on energy efficiency. The EU AI Act also prohibits certain AI systems posing an “unacceptable” risk (e.g., AI used to exploit the vulnerabilities of people), while imposing transparency requirements on those presenting a low risk.

The EU AI Act’s requirements will take effect on a staggered schedule after the law enters into force: prohibitions on certain AI practices apply after six months, obligations for GPAI/foundation models after 12 months, Annex III high-risk requirements after 24 months, and Annex II high-risk requirements after 36 months.

The EU General Data Protection Regulation (GDPR) is another layer to keep in mind in the context of the forthcoming EU AI Act. For example, Article 22 of the GDPR applies to decisions based solely on automated processing, including profiling, that produce legal or similarly significant effects on an individual, and permits such decisions only where they are based on contractual necessity or explicit consent, or where authorized by EU or member state law. Notably, a recent decision from the European Court of Justice applied Article 22 to instances in which automated credit scoring was allegedly used to automatically reject loan applications. In addition, the GDPR imposes several risk mitigation requirements on data controllers, including implementing data protection policies and conducting data protection impact assessments. Although the GDPR is, of course, narrower in scope than the EU AI Act, organizations that have already developed procedures and structures to comply with the GDPR will be able to leverage and expand upon them to comply with the EU AI Act’s data governance and impact assessment requirements.

AI Governance Standards: NIST and ISO 42001

Now that we’ve made our way through the many laws governing AI in different jurisdictions and sectors, you might be wondering how it’s possible to make sense of all of these concepts and fit the requirements together in a practical manner. This is where AI governance comes in. Admittedly, AI governance sometimes seems like a bit of an amorphous concept filled with fluffy buzzwords detached from practicality. However, an effective AI governance system is ultimately the glue that enables an organization to navigate and comply with this intricate regulatory landscape. Several standards bodies have remained at the forefront of this area and have developed tools to help organizations implement a governance plan.

On January 26, 2023, NIST issued the voluntary Artificial Intelligence Risk Management Framework 1.0 (AI RMF). In the absence of a mandatory regulatory framework in the United States, and with the threat of litigation and regulatory inquiries looming, the AI RMF has emerged as the central risk-based framework for building AI compliance programs. NIST is no stranger to such reach: its Cybersecurity Framework, issued in February 2014, has become the global standard for cybersecurity practices in the absence of federal regulation. The AI RMF provides guidelines for incorporating trustworthiness and transparency considerations into the design, development, use, and evaluation of AI products, systems, and services across the AI life cycle, including practical guideposts such as conducting risk assessments and audits.

The International Organization for Standardization’s (ISO) 42001 is an international standard that outlines another voluntary framework for establishing and maintaining an AI management system that ensures the responsible development and deployment of AI within organizations. Like NIST’s AI RMF, ISO 42001 is intended for organizations of any size and is applicable across industries. Instead of imposing rigid definitions or requirements, ISO 42001 describes a coherent approach to policies, documentation, and risk management practices and controls. For example, under Sections A.9.2 and A.9.3, ISO 42001 imposes broad obligations on organizations to define processes for the responsible use of AI systems. In contrast, the EU AI Act specifies concrete practices that fall under responsible AI use, such as lists of prohibited AI practices, transparency and notification obligations for deployers and users, and instructions for use.

Accordingly, both NIST’s AI RMF and ISO 42001 provide an umbrella within which an organization can develop a unified compliance plan by incorporating applicable legal requirements under the EU AI Act, existing U.S. state and local laws, and potentially forthcoming AI laws and regulations.

* * *

As AI regulations evolve globally, organizations must adopt a harmonized approach to compliance. The interplay between U.S. executive orders, EU legislation, and state and local laws necessitates a comprehensive understanding of AI governance standards. NIST’s AI RMF and ISO 42001 offer practical frameworks, guiding organizations through the complex web of AI regulations to facilitate responsible and ethical AI development and deployment.


    Emily Maxim Lamm

    Gibson, Dunn & Crutcher LLP

    Emily Maxim Lamm is an attorney at Gibson, Dunn & Crutcher LLP. Her practice has a dual focus on artificial intelligence matters and employment litigation, counseling, and investigations.

     

    The views and opinions expressed in this article are those of the author and do not necessarily reflect the opinions, position, or policy of Gibson, Dunn & Crutcher LLP, or their other employees, affiliates, or clients. The information provided in this article is not, is not intended to be, and shall not be construed to be either the provision of legal advice or an offer to provide legal services. The content here is intended as a general overview of the subject matter covered. Gibson, Dunn & Crutcher LLP is not obligated to provide updates on the information herein. Those reading this article are encouraged to seek direct counsel on legal questions.