Summary
- How will the risks of artificial intelligence be managed to achieve the benefits AI can deliver? Regulatory, commercial, and judicial guidance illustrates emerging risk management concerns and strategies.
AI is being integrated into business operations and personal lives much as the World Wide Web was a generation ago. Legal developments during the year ending May 31, 2024, acknowledge AI as an innovation with potential for both significant benefit and significant harm, and with inevitable impacts on businesses and consumers.
If its benefits are to outweigh its risks, AI needs to be integrated effectively into legal and risk management systems and practices. Managing AI may take the form of AI-specific rules or changes in existing rules and procedures otherwise applicable to conduct assisted by AI.
This survey discusses:
- the FTC’s new trade regulation rule on government and business impersonation;
- the SEC’s rules on cybersecurity risk management and incident disclosure;
- the GenAI provisions of the 2023 collective bargaining agreement between the Writers Guild of America and motion picture and television companies;
- the Treasury Department’s report on managing AI-specific cybersecurity risks in the financial services sector; and
- judicial and bar guidance on counsel’s use of GenAI.
Impersonating a government or business official for the purpose of misleading consumers in commercial transactions is now, by trade regulation rule, an unfair or deceptive act or practice. The FTC’s new Impersonation Rule prohibiting such schemes took effect April 1, 2024. The Impersonation Rule enables the FTC to obtain monetary relief more efficiently and “significantly faster” than remedies it could otherwise pursue under the Federal Trade Commission Act absent a trade regulation rule.
The Impersonation Rule is short. Two substantially identical sections address government and business impersonation, respectively. The bracketed text below reflects the differences in the two sections:
It is a violation of this part, and an unfair or deceptive act or practice to:
(a) materially and falsely pose as, directly or by implication, a [government entity]/[business] or officer thereof, in or affecting commerce as commerce is defined in the Federal Trade Commission Act (15 U.S.C. 44); or
(b) materially misrepresent, directly or by implication, affiliation with, including endorsement or sponsorship by, a [government entity]/[business] or officer thereof, in or affecting commerce as commerce is defined in the Federal Trade Commission Act (15 U.S.C. 44).
In support of the rule, the FTC referred to its notice of proposed rulemaking, which:
cited data from a broad spectrum of commenters (businesses, trade associations, and government or law‑enforcement organizations) regarding the prevalence of government and business impersonation scams, . . . echo[ing] the Commission’s findings that these schemes are among the most common deceptive or unfair practices affecting U.S. consumers and businesses and continue to be a significant source of consumer injury.
Impersonations designed to defraud consumers are neither new nor limited to technologies or practices existing or foreseeable at a fixed date. The FTC therefore declined to enumerate specific impersonation methods, reasoning that such a list could limit its ability to respond as schemes vary and evolve; as the FTC noted, “it would be impracticable to list all possible violative conduct.”
Consistent with that perspective, the Impersonation Rule does not call out AI‑assisted impersonation. Nevertheless, the FTC anticipates scammers using AI. “The example of voice cloning—a relatively new technology—emphasizes the need for an illustrative, but non-exhaustive, list of unlawful conduct. Audio deepfakes, including voice cloning, are generated, edited, or synthesized by artificial intelligence, or ‘AI,’ to create fake audio that seems real.”
A joint statement of three Federal Trade Commissioners accompanying the final rule elaborated on the scale of losses experienced by U.S. consumers and the added risk posed by AI. They cited “FTC data show[ing] that in 2023 consumers reported losing $2.7 billion to reported imposter scams.”
The rise of generative AI technologies risks making these problems worse by turbocharging scammers’ ability to defraud the public in new, more personalized ways. For example, the proliferation of AI chatbots gives scammers the ability to generate spear‑phishing emails using individuals’ social media posts and to instruct bots to use words and phrases targeted at specific groups and communities. AI‑enabled voice cloning fraud is also on the rise, where scammers use voice cloning tools to impersonate the voice of a loved one seeking money in distress or a celebrity peddling fake goods.
The FTC dropped from the rule a proposed provision that would have imposed liability on entities providing the “means and instrumentalities” to commit impersonation scams. The FTC received a range of comments about it, with some “arguing for the importance of holding intermediaries accountable for enabling or promoting impersonation schemes,” and others expressing concern that an overly broad reading risked “imposing strict liability against innocent and unwitting third‑party providers of services or products.” The FTC undertook additional analysis and, concurrent with the final rule, published a Supplemental Notice of Proposed Rulemaking seeking comment on a revised provision.
The revised “means and instrumentalities” provision would make it an unfair or deceptive act or practice “to provide goods or services with knowledge or reason to know that those goods or services will be used to” engage in impersonation scams. The revision, though not finalized during the Survey year, is noted here because the Commissioners have indicated their intention to regulate AI developers who are aware their products will be misused.
Under this approach, liability would apply, for example, to a developer who knew or should have known that their AI software tool designed to generate deepfakes of IRS officials would be used by scammers to deceive people about whether they paid their taxes. Ensuring that the upstream actors best positioned to halt unlawful use of their tools are not shielded from liability will help align responsibility with capability and control.
The Impersonation Rule exemplifies regulation that will aid the FTC in responding to fraudulent uses of AI against U.S. consumers.
The SEC adopted new rules and rule amendments, effective September 5, 2023 (with varied compliance dates), relating to public company cybersecurity risk management and incident reporting. The rules require:
current disclosure about material cybersecurity incidents … , periodic disclosures about a registrant’s processes to assess, identify, and manage material cybersecurity risks, management’s role in assessing and managing material cybersecurity risks, and the board of directors’ oversight of cybersecurity risks … , [and presentation of] the cybersecurity disclosures … in Inline eXtensible Business Reporting Language (“Inline XBRL”).
Since 2011, when it issued staff guidance on cybersecurity incident reporting, the SEC has observed varied disclosure practices. After reviewing relevant filings, SEC staff concluded that “companies provide different levels of specificity regarding the cause, scope, impact, and materiality of cybersecurity incidents.” SEC staff also found that some companies that report cybersecurity incidents do so in ways that make the disclosures difficult for investors to find and analyze, e.g., by locating them in varying sections of a form or alongside unrelated disclosures.
In adopting the new rule, the SEC twice referred to AI, once highlighting its cybersecurity risk and once highlighting its benefits in making information more accessible to the investing public. First, as to risk, the SEC identified three trends supporting rules bolstering cybersecurity incident reporting: continued proliferation of economic activity dependent on electronic systems and the corollary risk that disruption of those systems poses to public companies and the economy; a substantial rise in cybersecurity incidents; and the increasing cost and adverse consequences of cybersecurity incidents. In this context, the SEC identified AI as an emerging risk: “[R]ecent developments in artificial intelligence may exacerbate cybersecurity threats, as researchers have shown that artificial intelligence systems can be leveraged to create code used in cyberattacks, including by actors not versed in programming.”
Second, as to the benefits, the SEC included a formatting requirement for cybersecurity reporting. Specifically, companies must tag the disclosures using Inline XBRL. The SEC explained:
Inline XBRL tagging will enable automated extraction and analysis of the information required by the final rules, allowing investors and other market participants to more efficiently identify responsive disclosure, as well as perform large‑scale analysis and comparison of this information across registrants. The Inline XBRL requirement will also enable automatic comparison of tagged disclosures against prior periods.
The SEC highlighted in its Economic Analysis more uniform and comparable cybersecurity disclosures, “in terms of both content and location, [as] benefiting investors by lowering their search and information processing costs.” Regarding formatting, the SEC said:
The requirement to tag the cybersecurity disclosure in Inline XBRL will likely augment the informational and comparability benefits by making the disclosures more easily retrievable and usable for aggregation, comparison, filtering, and other analysis. … Tagging narrative disclosures can facilitate analytical benefits such as automatic comparison or redlining of these disclosures against prior periods and the performance of targeted artificial intelligence or machine learning assessments (tonality, sentiment, risk words, etc.) of specific cybersecurity disclosures rather than the entire unstructured document.
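To make concrete how such tagging supports automated analysis, the sketch below shows one way an analyst might extract Inline XBRL-tagged narrative disclosures from a filing. It is a minimal illustration only: the file name and the filtering on element names containing “Cybersecurity” are assumptions for this example, not references to the SEC’s actual taxonomy element names.

```python
# Minimal sketch: extract Inline XBRL-tagged narrative facts from a filing.
# The filing path and the "Cybersecurity" name filter are illustrative
# assumptions; real filings may require a more robust XHTML parser.
import xml.etree.ElementTree as ET

IX_NS = "http://www.xbrl.org/2013/inlineXBRL"  # Inline XBRL 1.1 namespace

def tagged_disclosures(xhtml_path: str) -> dict[str, str]:
    """Return {element name: text} for each ix:nonNumeric fact in the filing."""
    tree = ET.parse(xhtml_path)
    facts: dict[str, str] = {}
    for el in tree.iter(f"{{{IX_NS}}}nonNumeric"):
        name = el.get("name", "")
        text = "".join(el.itertext()).strip()
        facts[name] = text  # last occurrence wins for duplicate tags
    return facts

if __name__ == "__main__":
    # Hypothetical filing name; the same few lines could be pointed at many filings.
    for name, text in tagged_disclosures("example-10k.xhtml").items():
        if "Cybersecurity" in name:
            print(name, "->", text[:120])
```

Because registrants tag the same concepts, the same extraction can be repeated across filings and periods, which is the comparability benefit the SEC describes.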
In September 2023, the Writers Guild of America (“WGA”) and motion picture and television companies updated their collective bargaining agreement, the Minimum Basic Agreement (“2023 MBA”), which now covers uses of GenAI for screenwriting. The 2023 MBA, which ended a months-long strike, illustrates an early effort at balancing GenAI benefits and risks for creative professionals and the entertainment economy.
Significantly, the 2023 MBA distinguishes GenAI from other types of artificial intelligence tools. WGA and production companies agreed that GenAI:
generally refers to a subset of artificial intelligence that learns patterns from data and produces content, including written material, based on those patterns, and may employ algorithmic methods … [but] does not include ‘traditional AI’ technologies such as those used in CGI [computer‑generated imagery] and VFX [visual effects] and those programmed to perform operational and analytical functions.
The parties also agreed that written material produced by either traditional AI or GenAI is not considered “literary material” as that term is used in the 2023 MBA (or any predecessor agreement); “because neither traditional AI nor [GenAI] is a person, neither is a ‘writer’ or ‘professional writer’ as defined in [various articles] of this MBA.”
The GenAI provisions appear designed to protect writers from being displaced by GenAI while accommodating uses of GenAI output consistent with the copyrightability needs of both writers and producers. Companies may instruct writers to use, and writers may propose to use, GenAI output, subject to a number of disclosures and restrictions set out in the agreement.
Writers, in turn, must follow a production company’s GenAI policies, “e.g., policies related to ethics, privacy, security, copyrightability or other protection of intellectual property rights.” Companies may reject use of GenAI output, “including the right to reject a use of [GenAI] that could adversely affect the copyrightability or exploitation of the work.”
The 2023 MBA acknowledges the “uncertain and rapidly developing” legal landscape around GenAI. Each side reserved all rights not expressly addressed in the GenAI provisions, including the rights of writers to “assert[] that the exploitation of their literary material to train, inform, or in any other way develop [GenAI] software or systems, is within [a writer’s retained rights under the 2023 MBA] and is not otherwise permitted under applicable law.” Companies also agreed to meet periodically with the WGA to discuss information related to companies’ plans for using GenAI in motion picture development and production.
Executive Order 14110 on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence required several measures aimed at critical infrastructure protection, including that the Secretary of the Treasury “issue a public report on best practices for financial institutions to manage AI-specific cybersecurity risks.”
In March 2024, Treasury issued the required report (“Treasury Report”). To develop it, Treasury conducted forty-two in-depth discussions with representatives from financial institutions, financial sector trade associations, cybersecurity and anti-fraud service providers, and other financial services sector entities. Treasury emphasized that its “observations reflect the participating stakeholders’ perception of the state of AI and [are] not intended as an authoritative assessment of actual AI usage.”
The Treasury Report provides an illuminating account of financial institutions’ efforts to add AI-augmented tools to existing risk management strategies for identifying and thwarting cyber-enabled fraudulent activities.
The Treasury Report sets the security context for AI-augmented anti-fraud tools. Cybersecurity incidents, from ransomware to data theft, are more frequent and severe, but AI-augmented anti-fraud tools may dampen the trend and impacts. The Treasury Report highlights the challenges incidents present to the financial services sector:
The costs of these incidents … continue to rise every year. According to IBM, the average cost of a data breach reached an all-time high of $4.45 million in 2023. . . . Losses from fraud also continue to rise every year. According to Juniper Research, online payment fraud is expected to cumulatively surpass $362 billion by 2028.
Participants in the IBM study using “AI and other automated technologies to detect fraudulent activity risk” experienced “lower costs associated with data breaches and a shorter timeframe for detecting an incident.”
The Treasury Report does not suggest, however, that “good news” of AI’s potential benefits offsets “bad news” of growing cyber risks. Instead, the Treasury Report reveals that the emergence of GenAI complicates the effort to deploy AI‑augmented anti-fraud tools because GenAI needs guardrails to ensure responsible, safe, and secure use of those AI tools. One source of added complexity is additional (not novel) cyber risks that GenAI tools themselves bring to an enterprise and that were not present in traditional software anti-fraud tools. Those risks originate in the need to train the AI models and to rely on potentially contaminated or compromised data to do so.
Regulatory requirements and guidance provide a framework for institutions to implement controls to mitigate AI-related cybersecurity risks. However, more advanced AI technologies, such as Generative AI, may require institutions to extend these controls or adopt new ones.
Data poisoning, data leakage, and data integrity attacks can occur at any stage of the AI development and supply chain. AI systems are more vulnerable to these concerns than traditional software systems because of the dependency of an AI system on the data used to train and test it. Data ingested by an AI system in training or even in testing can directly inform the production processing of the AI system. Source data, training datasets, testing data sets, pre-trained AI models, LLMs [large language models] themselves, prompts, and prompt and vector stores can all be subject to data attacks, making the security of data throughout the development and production cycle as important as protecting production data.
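One basic control implied by that observation, familiar from traditional software supply-chain security but extended here to datasets, is verifying that approved training data have not been altered before they are used. The sketch below is an illustration under stated assumptions, not a practice drawn from the Treasury Report: it checks files against a previously recorded manifest of SHA-256 digests, and the directory and manifest names are invented for the example.

```python
# Illustrative sketch: detect tampering with approved training data by comparing
# current SHA-256 digests against a recorded manifest. File and manifest names
# are assumptions for this example.
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_training_data(data_dir: str, manifest_path: str) -> list[str]:
    """Return the files whose current digest differs from the manifest."""
    manifest = json.loads(Path(manifest_path).read_text())  # {"file.csv": "<hex digest>", ...}
    return [
        name
        for name, expected in manifest.items()
        if sha256_of(Path(data_dir) / name) != expected
    ]

if __name__ == "__main__":
    tampered = verify_training_data("training_data", "manifest.json")
    if tampered:
        print("Possible data-integrity issue in:", tampered)
```

Comparable checks can be applied to the other artifacts the Treasury Report lists, such as pre-trained models, prompts, and vector stores.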
The most sobering observations in the Treasury Report concern evidence of GenAI use in attacks that undermine the sector’s prevailing security paradigm and could render it obsolete. The prime reason: the emergence of inexpensive, easy-to-use GenAI tools to create plausible “deepfake” audio and video impersonations of persons authorized to initiate or approve electronic fund transfers. The Treasury Report explains that “AI allows bad actors to impersonate individuals, such as employees and customers of financial institutions, in ways that were previously much more difficult” and have now “become much more believable.” Financial institution participants in Treasury’s survey reported that fraudsters’ use of AI to “mimic voice, video, and other behavioral identity factors to verify a customer’s identity” is a “chief concern about the malicious use of AI.” The Treasury Report noted two recent incidents that illustrate the severity and immediacy of the risk.
Based on such evidence, the Treasury Report cautioned that the prevailing security paradigm may no longer be effective: “It appears that even live video interactions with a known client may be no longer sufficient for identity verification because of advances in AI-driven video-generation technology.”
One Treasury official observed that deepfakes represent the “next new challenge to defense-in-depth” strategies. The remedy, the official noted, is training personnel to ask identity-verification questions that elicit non-public, non-obvious answers known only through a personal connection, such as the name of the person’s pet. Moreover, when handling proposed transfers involving a high-net-worth individual, personnel may need to invoke additional verification factors because of the heightened risk to such individuals’ assets.
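As a purely illustrative sketch of the kind of step-up rule the official describes, and not a control prescribed by Treasury, the example below escalates verification requirements when a transfer request arrives over a spoofable audio or video channel, exceeds a dollar threshold, or involves an account flagged as high net worth. The thresholds, channel labels, and factor names are assumptions.

```python
# Illustrative sketch: step-up identity verification for higher-risk transfer
# requests. Thresholds, channel labels, and factor names are assumptions.
from dataclasses import dataclass

@dataclass
class TransferRequest:
    amount: float
    channel: str          # e.g., "video_call", "phone", "in_person"
    high_net_worth: bool  # account flagged for enhanced scrutiny

def required_factors(req: TransferRequest) -> list[str]:
    # Always ask a question whose answer is non-public and known personally.
    factors = ["nonpublic_knowledge_question"]
    # Treat audio and video channels as spoofable; never rely on them alone.
    if req.channel in {"video_call", "phone"}:
        factors.append("callback_to_number_on_file")
    if req.amount > 100_000 or req.high_net_worth:
        factors.append("second_employee_approval")
    return factors

print(required_factors(TransferRequest(250_000, "video_call", True)))
```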
During the Survey year, courts and state bar associations determined that reported misuse of GenAI by counsel suggested a need for rules or guidance. Some courts issued standing orders ranging from requiring counsel to disclose when they use GenAI, to requiring lead trial counsel to certify that they personally verified the accuracy of a filing’s contents, to mandating that counsel keep records of all prompts or inquiries submitted to GenAI tools.
Whether burdensome orders will become widespread appears doubtful; few courts to date have issued such orders. The U.S. Court of Appeals for the Fifth Circuit posted a notice declaring it had “decided not to adopt a special rule [first proposed in November 2023] regarding the use of artificial intelligence in drafting briefs at this time.” Nonetheless, it cautioned lawyers: “‘I used AI’ will not be an excuse for an otherwise sanctionable offense.”
In December 2023, the judiciaries of England and Wales and of New Zealand issued guidelines for use of GenAI in their courts and tribunals. In contrast with the detailed, often complex orders and guidance issued by some state and federal courts in the United States, the guidelines issued by New Zealand courts (“NZ Guidelines”) are clear and concise and call on counsel to use good judgment and common sense. The NZ Guidelines remind counsel that they are officers of the court and that any use of GenAI “must be consistent with the observance of lawyers’ obligations.” They instruct counsel to understand GenAI and its limitations, and they highlight the limitations that create risk.
The NZ Guidelines caution counsel to “uphold confidentiality” and coach counsel on conduct to avoid and security precautions to take, including: “[Y]ou should not enter any information into an AI chatbot that is not already in the public domain.” The NZ Guidelines helpfully explain why, and what extra precautions counsel should take.
The NZ Guidelines remind counsel they are responsible for ensuring accountability and accuracy of submittals to courts and tribunals. In case some counsel might think GenAI justifies shortcutting factual verifications, the NZ Guidelines adamantly instruct: “You must check the accuracy of any information you have been provided with by a GenAI chatbot (including legal citations) before using that information in court/tribunal proceedings.” Again, the NZ Guidelines helpfully and concisely explain why counsel should consistently exercise close scrutiny of GenAI outputs:
GenAI chatbots may:
- make up fictitious cases, citations, or quotes, or refer to legislation, articles, or legal texts that do not exist;
- provide incorrect or misleading information on the law or how it might apply;
- make factual errors; and
- confirm that information is accurate if asked, even when it is not.
All information generated by a GenAI chatbot should be checked by an appropriately qualified person for accuracy before it is used or referred to in court or tribunal proceedings.
Similar in structure, clarity, and insight to the NZ Guidelines, the Guidance for Judicial Office Holders on Artificial Intelligence issued by the courts and tribunals of England and Wales (“E&W Guidance”) provides counsel additional cautions to avert GenAI risks. The E&W Guidance sometimes drills deeper in its cautions than the NZ Guidelines. For example, the E&W Guidance instructs counsel to understand AI’s limitations, specifically:
AI tools may be useful to find material you would recognise as correct but have not got to hand [i.e., readily at hand], but are a poor way of conducting research to find new information you cannot verify.
. . . .
. . . [T]he current public AI chatbots do not produce convincing analysis or reasoning.
As to confidentiality, the E&W Guidance specifies:
Any information that you input into a public AI chatbot should be seen as being published to all the world.
The current publicly available chatbots remember every question that you ask them. . . . That information is then available to be used to respond to questions from other users. As a result, anything you type into it could become publicly known.
. . . .
. . . [S]ome AI platforms, … if used as an App on a smartphone, may request various permissions which give them access to information on your device. In those circumstances you should refuse all such permissions.
The reviewed judiciary guidance may seem unduly critical of GenAI tools, especially to counsel eager to use GenAI, and to tout its use, to achieve cost savings in client work. However, the judiciary guidance aligns with warnings made by GenAI developers in their Terms of Use for public GenAI applications. Consider, for example, this prohibition in OpenAI’s Terms of Use:
When you use our Services you understand and agree …
You must not use any Output relating to a person for any purpose that could have a legal or material impact on that person, such as making credit, educational, employment, housing, insurance, legal, medical, or other important decisions about them.
It appears OpenAI is prohibiting counsel from using ChatGPT 4 for legal research, drafting, preparation of court filings, or any other client work—or at least shifting any risk in that use away from OpenAI. In any case, the prohibition is a clear warning to exercise caution and diligence in verifying GenAI output.
In light of the developments during the Survey year, counsel eager to explore use of GenAI for client work need to proceed with caution, mindful of trade‑offs. It would be imprudent to assume improvements in those tools will make care in using them unnecessary.
Moreover, tools change the tool user, sometimes for the better, sometimes for the worse, and sometimes both. Consider the value of iteration in drafting legal memoranda, pleadings, briefs, and contracts. Analysis, argument, and the exchange of rights and obligations are challenged and refined through the drafting process. Lawyering skills improve as a consequence.
Delegating the labors of writing first drafts to GenAI and accepting the stylistic dross of GenAI outputs may diminish appreciation of the value of revising first drafts. Stanford Professor Rob Reich believes that students will lose more than they will gain if they delegate the labors of writing to GenAI: “The ability to write exercises their [students’] thinking; learning to write better is inseparable from learning to think better. Becoming a good writer is the same thing as becoming a good thinker. So if text models are doing the writing, then students are not learning to think.” That observation applies to counsel, too.
Disclaimer: The views expressed by the authors are solely their own and have not been reviewed or approved by, and should not be attributed to, the U.S. Military Academy, the U.S. Army, U.S. Department of Defense, the U.S. Government, or any institution to which they are or have been affiliated.
The authors thank Professor Sarah Jane Hughes for editing, and Shunyo Morgan for editing, cite checking, and bluebooking, this survey.