©2024. Published in Landslide, Vol. 16, No. 4, June/July 2024, by the American Bar Association. Reproduced with permission. All rights reserved. This information or any portion thereof may not be copied or disseminated in any form or by any means or stored in an electronic database or retrieval system without the express written consent of the American Bar Association or the copyright holder.
July 10, 2024 Webinar Feature
Incorporating AI: A Road Map for Legal and Ethical Compliance
Sean Collin, William Wright, and Barrett Spraggins
The landscape of artificial intelligence (AI) is continually evolving, marked by the emergence of new companies and innovative use cases at a rapid pace. These developments significantly impact existing technologies and businesses. With the potential to revolutionize industries and reshape consumer interactions, AI is becoming increasingly integral to corporate strategies. Consequently, legal frameworks must evolve in tandem with these technological advancements. This article explores the unique challenges that corporate legal teams face as AI weaves itself into the fabric of modern business operations.
Regulatory Landscape
The impact of AI’s disruption on the public, government, and commerce is the subject of wide debate. AI has been in use for some time, but recent developments in generative AI in particular have significantly accelerated its adoption and visibility in the public sphere. This heightened awareness is prompting broad discussion of the benefits and challenges of applying the technology. Major stakeholders in both the private and public sectors recognize the broad potential risks of AI adoption. Historically, the adoption of new technologies has had both negative and positive impacts, and AI is no exception. Potential negative implications of AI applications include misinformation, job displacement, security challenges, discrimination, and bias. These discussions mirror, to some extent, those that arose when the web was being introduced into “normal life.”
As AI’s prevalence in daily life grows, it is likely that tech firms and consumers will increasingly advocate for regulatory oversight and look for ethical solutions, guidelines, and laws. Many nations and states are already contemplating their own regulatory frameworks, leading to a mosaic of laws and regulations. One potential problem with this approach is the possible fragmentation of global regulatory obligations. If an effort is not made to harmonize laws, companies will have to manage potentially conflicting or redundant legal obligations. Companies will need to stay abreast of these evolving landscapes, especially as AI-related laws, technologies, and applications continue to advance and evolve.
European Union
AI Act. On March 13, 2024, the European Parliament adopted the European Union Artificial Intelligence Act (AI Act), which aims to introduce a common legal framework and regulation for AI within the European Union. The AI Act will take effect 20 days following its publication in the Official Journal, anticipated in either May or June 2024. The majority of its provisions will be enforceable two years subsequent to the act’s enactment, while some provisions concerning prohibited AI systems will be enforceable after a six-month period and others pertaining to generative AI will take effect after 12 months.
The AI Act classifies and regulates AI applications pursuant to a risk-based approach, with varying levels of restrictions corresponding to the degree of risk associated with a particular use case. The AI Act defines five distinct categories of risk for AI applications and systems. These categories include:
- Unacceptable-risk AI systems. This category applies to AI systems that impair a person’s ability to make an informed decision in a manner that causes significant harm; exploit the vulnerabilities of a person due to their age, disability, or a specific social or economic situation; evaluate or classify a person based on their social behavior or personality characteristics; make risk assessments for criminal act predictions; expand facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage; infer emotions of people in workplace or education institutions; categorize people based on biometric data to deduce race, political opinions, trade union membership, religious or philosophical beliefs, or sex life or orientation; and apply real-time facial recognition in public places. The AI Act bans this category of AI systems.
- High-risk AI systems. This category applies to AI systems that threaten significant harm to people’s health, safety, and fundamental rights or the environment and critical infrastructure. Systems in this category may include technology involving biometrics, education, and vocational training and assignment; employment recruitment, selection, or decision-making; access to and enjoyment of essential private and public services, such as systems that make decisions regarding healthcare, creditworthiness, credit fraud, life insurance, and emergency calls; law enforcement; migration, asylum, and border control management; and administration of justice. AI systems that fall into this category must undergo a conformity assessment before being placed on the market. The conformity assessment is designed to evaluate the product’s compliance with legal and technology standards, including ensuring adequate risk assessment, proper mitigation systems, high-quality data sets, compliance documentation, traceability and logging, human oversight, and security.
- General-purpose AI systems. This category applies to foundation AI models, such as ChatGPT, that are trained on broad data at scale, are designed for generality of output, and can be adapted to a wide range of distinctive tasks. Systems in this category are subject to transparency requirements, and high-impact systems whose cumulative training computation, measured in floating point operations (FLOPs), exceeds 10^25 must undergo a thorough evaluation process (see the illustrative sketch following this list).
- Limited-risk AI systems. This category applies to AI systems presenting limited risk to users, which are subject to specific transparency requirements and obligations to ensure that users are aware that they are interfacing with an AI system or that the output they are viewing is artificially generated. For example, images generated by AI must be clearly labeled as such.
- Minimal-risk AI systems. This category applies to AI systems that pose minimal or no risk to citizens’ rights or safety, such as AI-enabled video games or email spam filters. Under the AI Act, these AI systems face no obligations for reporting and compliance.
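To make the general-purpose threshold concrete, below is a minimal sketch of a threshold check in Python. It assumes the commonly cited 6 × parameters × training tokens approximation for dense transformer training compute; that heuristic, and all names and numbers in the example, are illustrative assumptions rather than part of the AI Act.

```python
# Illustrative sketch: estimating whether a model's training compute crosses
# the EU AI Act's 10^25 FLOP threshold for "high-impact" general-purpose models.
# Uses the common 6 * N * D approximation for dense transformer training compute
# (N = parameter count, D = training tokens); this heuristic is an assumption,
# not part of the act itself.

HIGH_IMPACT_THRESHOLD_FLOPS = 1e25  # threshold named in the AI Act

def estimated_training_flops(num_parameters: float, num_training_tokens: float) -> float:
    """Rough training-compute estimate for a dense transformer."""
    return 6 * num_parameters * num_training_tokens

def is_high_impact(num_parameters: float, num_training_tokens: float) -> bool:
    return estimated_training_flops(num_parameters, num_training_tokens) >= HIGH_IMPACT_THRESHOLD_FLOPS

# Example: a hypothetical 70B-parameter model trained on 15T tokens
flops = estimated_training_flops(70e9, 15e12)   # ~6.3e24 FLOPs, below the threshold
print(f"{flops:.2e} FLOPs -> high impact: {is_high_impact(70e9, 15e12)}")
```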
Under the AI Act, engaging in prohibited AI practices could result in fines of up to €35 million or up to 7% of a company’s worldwide annual turnover, whichever is higher.
GDPR. AI systems utilize vast amounts of data for training and processing. Europe’s General Data Protection Regulation (GDPR) imposes restrictions on the collection, processing, and storage of personal data. The GDPR requires companies to get consent from the individuals whose data is being collected and imposes restrictions on what data can be collected and how it can be used. For example, under the GDPR, some types of automated decision-making that impact people’s legal rights may not be allowed, or, if the process is allowed, people have the right to contest the decision. The GDPR also imposes restrictions on using certain types of data for automated decision-making. For example, decisions should not be based on sensitive data related to race or ethnicity, religious or political viewpoints, or sexual orientation. The GDPR sets tiered fines for violations, reaching up to €20 million or up to 4% of a firm’s worldwide annual turnover, whichever is higher, for the most serious infringements.
United States
In 2016, under the Obama administration, the seminal report Preparing for the Future of Artificial Intelligence marked the beginning of a concerted federal effort to grasp the multifaceted nature of AI. This pivotal document not only mapped out the complexities inherent in AI but also underscored the pressing risks and outlined prospective regulatory frameworks. The dialogue that ensued across the United States has been centered on the critical need for immediate regulatory measures, the establishment of a unified federal governance architecture, the allocation of regulatory responsibilities to designated agencies equipped with precise powers, and the development of dynamic strategies capable of evolving regulations in step with the rapid pace of AI innovations. In parallel, state-driven regulations have taken shape, specifically addressing various AI technologies while reflecting the diverse concerns and priorities of local constituencies. Together, this intricate interplay of federal and state efforts constitutes a multifaceted regulatory landscape, demonstrating a government interest in spurring innovation while upholding ethical standards and safeguarding the public interest.
Federal. Although there is no comprehensive federal legislation regulating the use of AI in the United States, U.S. lawmakers are moving forward, having recently introduced numerous legislative proposals covering AI regulation and conducted congressional hearings with AI experts and technology leaders to debate oversight, transparency measures, and the risks posed by AI. The Biden White House has convened working groups on AI issues and has partnered with large technology companies on voluntary ethical compliance regimes. In addition, on October 30, 2023, President Biden signed a comprehensive executive order to guide and regulate AI’s growth across various sectors, emphasizing competition, privacy, cybersecurity, health, education, and labor. The aim is to harness AI’s benefits while managing its risks and potential social impacts. The order provides guidelines to various federal agencies, which affect the U.S. market through their purchasing power and regulatory tools.
In the absence of comprehensive federal legislation on AI, companies should consider the various current and proposed AI regulatory frameworks at the state and local level, as well as track how the courts are applying existing laws and regulations to AI and its applications more broadly. Some of the enacted state laws addressing automated decision-making are summarized below, as AI and automated decision-making are increasingly integrated. This list of enacted U.S. state laws is constantly evolving, with new additions regularly emerging. As such, staying informed about this dynamic landscape of law, regulation, and enforcement requires constant attention.
California. The California Consumer Privacy Act (CCPA), as amended by the California Privacy Rights Act (CPRA), gives consumers rights to opt out of automated decision-making including profiling based on personal data, such as work performance, economic situation, health, personal preferences, interests, reliability, behavior, location, or movements. Under the CCPA, businesses are also required to conduct a privacy risk assessment for activities that pose a significant risk to consumer security and privacy.
The Bolstering Online Transparency (BOT) Act requires organizations that use a bot to communicate with a person in California to incentivize a sale or transaction or influence a vote in an election to notify the person that the communication is with a bot.
Colorado. Colorado Senate Bill 21-169, Protecting Consumers from Unfair Discrimination in Insurance Practices, regulates insurers’ use of consumer data as well as algorithms and predictive models that unfairly discriminate in insurance rate-setting mechanisms.
Connecticut. The Connecticut Data Privacy Act (CTDPA) gives consumers the right to opt out of profiling related to automated decision-making that produces significant effects. The act requires the performance of a data risk assessment before implementing some automated decision-making processes.
Illinois. The Illinois AI Video Interview Act requires all employers to notify and gain consent from candidates before using AI technologies to assess the candidates for interviews.
Maryland. Maryland House Bill 1202 limits the use of facial recognition software during an applicant’s interview without the applicant’s consent.
Montana. The Montana Consumer Data Privacy Act (MCDPA) gives consumers the right, among others, to opt out of the processing of the consumer’s personal data for the purpose of profiling in furtherance of solely automated decisions that produce significant legal effects.
New York. New York City Local Law 144 imposes requirements for notice, reporting, and bias auditing for employer use of AI-enabled tools in employment decisions.
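The bias audits required by Local Law 144 center on “impact ratios” that compare selection rates across demographic categories. Below is a minimal sketch of that calculation in Python, assuming a simple binary hire/no-hire outcome and hypothetical counts; it illustrates the arithmetic only and is not a substitute for the independent audit the law requires.

```python
# Illustrative sketch of the impact-ratio calculation at the core of a
# NYC Local Law 144-style bias audit: each category's selection rate is
# divided by the highest category selection rate. All counts are hypothetical.

def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """outcomes maps category -> (candidates selected, candidates assessed)."""
    return {cat: selected / total for cat, (selected, total) in outcomes.items()}

def impact_ratios(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {cat: rate / best for cat, rate in rates.items()}

# Hypothetical audit data: (candidates selected, candidates assessed)
data = {"group_a": (40, 100), "group_b": (25, 100), "group_c": (30, 100)}
for category, ratio in impact_ratios(data).items():
    print(f"{category}: impact ratio = {ratio:.2f}")
```

A category whose ratio falls well below 1.0 relative to the most selected group would flag the tool for closer scrutiny.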
Tennessee. The Tennessee Information Protection Act (TIPA) requires data protection assessments in connection with automated processing performed on personal information to evaluate, analyze, or predict personal aspects related to a person’s economic situation, health, personal preferences, interests, reliability, behavior, location, and movements, where the processing presents certain risks of unfair treatment of, injury to, or intrusion upon the privacy of consumers.
Tennessee’s Ensuring Likeness Voice and Image Security (ELVIS) Act protects musicians from the unauthorized use of their voices through AI technologies and against audio deepfakes and voice cloning. The act prohibits the use of AI to clone the voice of an artist without consent and can be criminally enforced as a class A misdemeanor.
Texas. The Texas Data Privacy and Security Act (TDPSA) enables opt-out of profiling, protects consumer data, and requires data protection assessments for certain high-risk profiling and data processing activities.
Virginia. The Virginia Consumer Data Protection Act (VCDPA) empowers consumers to opt out of profiling. It also safeguards consumer data and mandates data protection assessments for specific high-risk profiling and data processing activities.
China
The Chinese government indicated its interest in developing an AI industry at the national level with Beijing’s 13th Five-Year Plan (2016–2020), which highlighted AI as an economic growth target, and in the 14th Five-Year Plan (2021–2025), which included AI in the strategic vision for building strength in science and technology and in developing digital industries. Under these broader strategies, a collection of AI laws was formed, starting with the Regulations for the Promotion of the Development of the Artificial Intelligence Industry in Shanghai Municipality (Shanghai Regulations), which went into effect on October 1, 2022. AI has also been identified as a category of technology that is highly sensitive from a security standpoint and is expected to support Communist Party–endorsed outcomes and outputs, both broadly and in specific evolving applications. Understanding these developments is essential, as they mark a distinct approach to AI development and regulation compared to other countries.
Federal. Beijing created the framework for regulating recommendation systems on March 1, 2022, with the Internet Information Service Algorithmic Recommendation Management Provisions (Algorithmic Recommendation Provisions). Rules targeting generative AI came into effect on January 10, 2023, with the Provisions on the Administration of Deep Synthesis of Internet Information Services (Deep Synthesis Provisions), which specified regulations regarding AI-generated media, including limiting deepfakes; required labeling of AI-generated media content; and provided guidance on applying deep synthesis technology in internet information services within China.
On August 15, 2023, the Interim Measures for the Management of Generative Artificial Intelligence Services (Generative AI Measures) came into effect, providing further rules regarding generative AI. Together with the Algorithmic Recommendation Provisions and the Deep Synthesis Provisions, the Generative AI Measures constitute the main body of AI regulatory compliance and supervision of the AI industry in China.
The Generative AI Measures impose responsibilities on generative AI service providers to take actions regarding content moderation (e.g., takedown requests, addressing problems through model optimization), source training data from appropriate sources, respect IP rules regarding training data, tag AI-generated content, safeguard user rights and personal information (e.g., prohibit storing data that could identify users and selling personal data to third parties), and perform security assessments and algorithm registry filings for generative AI services with public opinion or social mobilization characteristics (e.g., those enabling public expression and participation in social activities). Under the Generative AI Measures, an AI algorithm is analyzed for compliance with a specific listed function; a single application may therefore require different algorithm registry filings for each algorithm it utilizes. In addition, the Generative AI Measures require that the provision and use of generative AI services must not result in subversion of the government or socialist system, endanger national security, harm the Chinese nation’s image, incite separatism, or undermine national unity and social stability. These categories are very open to interpretation, and certainty about what may or may not qualify as compliant is, and will remain, an ongoing challenge.
Provincial regulations. The Shanghai Regulations, China’s inaugural AI laws, were crafted to bolster AI innovation by adopting a lenient regulatory approach. These guidelines introduced a structured grading system and implemented sandbox oversight. Notably, the regulations offered a distinctive flexibility in addressing minor violations. The Shenzhen government passed a law with similar provisions on November 1, 2022, the Regulations on Promoting Artificial Intelligence Industry in Shenzhen Special Economic Zone.
Ethical Considerations
As corporations integrate AI systems into their operations, their legal departments must navigate a complex landscape of ethical concerns to ensure that these technologies benefit society without causing unintended harm. Key ethical concerns include ensuring fairness and eliminating biases in AI algorithms, which can perpetuate or exacerbate inequalities if left unchecked. Maintaining privacy and data protection is also important, as AI systems often process vast amounts of personal information, necessitating stringent measures to secure data and uphold individual rights. Users also expect transparency and accountability in AI decision-making processes so that they can understand and challenge AI decisions. Moreover, the societal impact of an AI deployment must be considered, and any negative effects mitigated. Ensuring that AI systems are designed and deployed with a commitment to ethical principles is essential for corporations to harness their potential responsibly and sustainably.
Best Practices for Legal and Ethical Compliance
Incorporating AI into product offerings or services comes with a significant responsibility to address the legal and ethical implications of AI implementation. For corporate legal teams, this entails devising strategies that strike a delicate balance between harnessing AI’s transformative capabilities and upholding ethical standards and legal compliance. As AI technologies advance, the frameworks and practices governing their use must evolve accordingly, ensuring that they contribute positively to society and uphold individual rights. Moreover, while the significance of ethics may vary across jurisdictions, it remains a pivotal consideration. Both the public and government entities are increasingly scrutinizing not only the actions taken but also the methodologies employed within this domain.
Perform Preimplementation Risk Assessment
Before deploying AI, evaluate its potential impact on health, safety, and fundamental human rights, including privacy and human dignity. Some AI applications might be restricted due to their undue risk to users. High-risk implementations demand stringent adherence to legal and technical benchmarks.
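As one way to operationalize this step, below is a minimal sketch of a first-pass risk screen in Python, loosely mirroring the AI Act’s tiers described above. The flags, categories, and decision logic are simplified illustrative assumptions for internal triage, not a legal determination.

```python
# Illustrative first-pass risk screen for a proposed AI use case, loosely
# mirroring the EU AI Act's tiered approach. The flags and tier logic are
# simplified assumptions for internal triage, not a legal determination.

from dataclasses import dataclass

@dataclass
class UseCase:
    description: str
    uses_social_scoring: bool = False        # evaluates people by social behavior
    uses_realtime_public_biometrics: bool = False
    affects_employment_or_credit: bool = False
    affects_health_or_safety: bool = False
    interacts_with_users: bool = False       # chatbots, generated media, etc.

def screen(use_case: UseCase) -> str:
    if use_case.uses_social_scoring or use_case.uses_realtime_public_biometrics:
        return "unacceptable risk: prohibited; do not deploy"
    if use_case.affects_employment_or_credit or use_case.affects_health_or_safety:
        return "high risk: conformity assessment and full compliance review required"
    if use_case.interacts_with_users:
        return "limited risk: transparency and disclosure obligations apply"
    return "minimal risk: monitor; no specific obligations identified"

print(screen(UseCase("resume-ranking tool", affects_employment_or_credit=True)))
```

A screen like this is only a triage aid; any use case flagged above minimal risk should go to counsel for a full assessment.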
Notify Users of AI Implications
Depending on the identified risk, give users clear information about the AI’s operation and implications, both up front and on an ongoing basis, to promote transparency and awareness. Ensure that this information is always communicated effectively for its intended purpose.
Prioritize Governance and Security of Data Used in AI Systems
AI systems, particularly those harnessing machine learning, are deeply rooted in data, which may be proprietary, personal, or sensitive. Various international laws and regulations govern the collection and utilization of this data in AI. These laws typically mandate clear consent from data subjects and set guidelines on data acquisition, usage, storage, and protection against unauthorized access or disclosure. These differ from jurisdiction to jurisdiction, and one size does not fit all for compliance. Ethics is not always the same as the present state of law and regulation but is always a key consideration. Below is one foundational framework of key considerations for AI systems to align with data protection standards:
- Governance and accountability. Define a data governance structure and policy. Assign data protection responsibilities to identified people or teams to minimize risks; in corporations, appoint a data protection officer.
- Data usage transparency and explainability. Build guidelines and policies that keep users informed about the necessary procedures of data collection, processing, and storage and about how that data is protected and can or cannot be shared with third parties. Notify users of how their data influences decisions and how it can be shared. Provide privacy documentation explaining the details of AI processing and decisions, and make it broadly available on a regular basis as well as upon request.
- Automated decision-making safeguards. For automated decision-making processes, introduce protective measures such as human intervention rights, decision contestation, and unfair bias prevention. It is critical to create, implement, and communicate systems around these issues and to align them with business and company ethics policies and guidelines.
- Documentation. Prepare for potential audits or inquiries by logging and maintaining records of data processing activities, compliance actions, and risk evaluations, ensuring that compliance is not only undertaken but can be proven, and align this with data maintenance policies.
- Data management. Develop and implement data retention, access, and minimization strategies. Ensure that a system’s data processing aligns with the AI’s intended use and purpose. Make it challenging to identify individuals from data by utilizing anonymization or pseudonymization techniques to enhance privacy (a minimal sketch follows this list).
- Data subject rights. Create systems that address requests for data access or rectification that comply with internal company guidelines and policies as well as laws and regulations.
- Data sharing considerations. Consider potential privacy implications when sharing data with other AI vendors and service providers, and put in place guidelines and policies for the same.
- Robust data security. Utilize data protection mechanisms and staff training to keep data safe. When data breaches do occur, proactively notify users in accordance with laws and regulations and ensure that ethical implications are key considerations in managing potential risks.
- Data use consent management. When possible, obtaining user consent and operating in a transparent fashion should precede data collection and processing. Create systems and frameworks for users to grant, review, or withdraw consent seamlessly.
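As one concrete illustration of the data management point above, here is a minimal sketch of keyed pseudonymization using Python’s standard hmac module. The key handling shown is deliberately simplified; in practice, key custody and whether pseudonymization suffices under a given law are separate compliance questions.

```python
# Illustrative sketch of keyed pseudonymization: direct identifiers are
# replaced with keyed hashes so records remain linkable for analysis without
# exposing the identifier. Key custody and the legal adequacy of this
# technique under a given regime are separate questions.

import hashlib
import hmac

SECRET_KEY = b"rotate-and-store-me-in-a-key-vault"  # placeholder; never hard-code

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier with a stable keyed hash."""
    digest = hmac.new(SECRET_KEY, identifier.encode("utf-8"), hashlib.sha256)
    return digest.hexdigest()[:16]  # truncated token for readability

record = {"email": "jane@example.com", "purchase_total": 42.50}
safe_record = {"user_token": pseudonymize(record["email"]),
               "purchase_total": record["purchase_total"]}
print(safe_record)
```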
By embracing these enhanced practices, and more, businesses can navigate the AI landscape responsibly, staying aligned with evolving data privacy laws and regulations and positioning themselves as ethically structured and compliant partners.
Create Ethical AI Policies and Guidelines
Legal departments should not only establish and monitor compliance with AI standards and regulations but also champion ethical AI practices in alignment with an organization’s core values. These ethical implementations should adhere to principles of accountability, transparency, fairness, and human oversight. It is imperative for legal teams to draft clear policies and procedures detailing the ethical use, management, and dissemination of AI-generated results. Furthermore, in-house legal counsel should take the lead in heightening awareness and training staff on these critical matters. When formulating ethical AI policies and guidelines, legal teams should consider the following factors:
- Assess implications. Determine the impact of the AI system’s creation and rollout on fairness, transparency, accountability, and user privacy. Impact studies and assessments may help identify issues and their associated effects on users.
- AI usage transparency. Commit to AI usage transparency. This entails being forthright about AI application and decision-making rationale and promoting consumer and stakeholder trust.
- Evaluate AI bias. Strive for fairness by ensuring that AI functions without discrimination or ingrained biases.
- AI use consent. Get consent for data use in AI systems. When applicable, create systems and frameworks for users to grant, review, or withdraw consent seamlessly.
- Accountability. Be transparent about mistakes, lapses, outcomes, and anomalies within the framework of law and regulation. Redress any harm. Respond to and address grievances promptly. Prompt mitigation is key.
- Beneficence and nonmaleficence. Consider potential societal repercussions, such as job implications, and shifting societal norms in all uses, and be prepared to address them with stakeholders and the public.
Counsel’s Critical Role
In-house counsel and outside counsel both play a critical role in ensuring a legally compliant and ethically responsible implementation of AI within organizations. By understanding the regulatory landscape, conducting comprehensive risk assessments, promoting transparency, mitigating bias, proactively building guidelines and policies, training clients and stakeholders on implications regularly, and fostering ethical AI use, counsel can contribute significantly to the successful integration of AI technologies in business operations while upholding legal and ethical standards. AI has the potential to create great efficiencies and great benefits if deployed creatively and aligned with laws, regulations, and ethics. Being proactive as counsel is the key to success in achieving these benefits and minimizing risks to clients.