
The Importance of Adopting AI-Related Policies and Procedures

Steven R. Aquino

Summary

  • Artificial intelligence (AI) has the capacity to radically transform for the better how businesses operate. As a result, companies have expended, and are expending, considerable resources to develop enterprise and consumer AI tools.
  • Despite recent AI expansion, courts have yet to apply specific oversight duties to directors and officers regarding their company’s use or development of AI.
  • Nonetheless, directors and officers risk liability by adopting AI tools without first implementing appropriate policies and procedures.

Artificial intelligence (AI) is the kind of technology with the power not only to disrupt but also to shift entire paradigms—on countless fronts, at a micro and macro level, and with seemingly exponential leaps. The adoption and development of AI will change how we communicate. How we learn. How we generate art. How we consume. How we write. How we work. How we build. How we travel. How we predict.

This kind of thought exercise can be fascinating, in part because it is difficult to list all the ways AI can bring us change. The most bankable prediction is that change is coming. Perhaps the second most bankable prediction is that AI’s potential also creates pitfalls if it is not used with care. In just the few years since ChatGPT and other AI tools entered the public’s consciousness, we have seen reports of “hallucinations,” “deepfakes,” bias, privacy and data protection issues, and intellectual property concerns.

Corporate executives and directors must tackle both sides of this coin—power and peril. Because of the business benefits of using AI, not using it arguably creates a competitive disadvantage. But, as I have noted, using AI without thoughtful and comprehensive policies, risk-management controls, supervision, and disclosures also creates significant potential legal liability for companies, their executives, and their directors. Below, I highlight the specific legal duties that, in my view, can be applied to govern AI, and how corporate boards and officers can address them head-on to leverage AI’s potential while minimizing risk.

Foundational Board-Level Duties

Directors’ and officers’ fiduciary duties to the corporation are “unyielding.” Smith v. Van Gorkom, 488 A.2d 858, 872 (Del. 1985); see also Gantler v. Stephens, 965 A.2d 695, 709 (Del. 2009) (“explicitly” holding that officers’ fiduciary duties are concomitant with directors’ duties). The specific duties are well established: the duties of care (Van Gorkom, 488 A.2d at 872), loyalty (Cede & Co. v. Technicolor, Inc., 634 A.2d 345, 361 (Del. 1993)), and supervision (In re Caremark Int’l Inc. Derivative Litig., 698 A.2d 959, 970 (Del. Ch. 1996)). In a world where AI is becoming inevitable, the duty of supervision is particularly relevant. Directors and officers violate their oversight responsibilities if “(a) [they] utterly fail[] to implement any reporting or information system or controls; or (b) having implemented such a system or controls, consciously fail[] to monitor or oversee its operations thus disabling themselves from being informed of risks or problems requiring their attention.” Stone v. Ritter, 911 A.2d 362, 370 (Del. 2006) (adopting Caremark). Recently, the Delaware Court of Chancery noted that officers, in particular, have a duty “to identify red flags, report upward, and address the [red flags] if they fall within the officer’s area of responsibility,” and that officers must “make a good faith effort to establish an information[-reporting] system” within their area of “remit.” In re McDonald’s Corp. Stockholder Derivative Litig., 289 A.3d 343 (Del. Ch. 2023).

Courts have yet to specifically apply these oversight duties to a company’s use or development of AI. But given the enormous amount of capital that companies are pouring into enterprise and consumer AI tools, as well as the unique risks that AI presents—from creating false information to data security issues—it is a safe bet that case law is coming. And beyond common-law fiduciary duties, a company has a duty, under the securities laws and beyond, to make accurate and non-misleading statements to the public. As U.S. Securities and Exchange Commission Chair Gary Gensler recently noted, companies “need[] to be truthful about [their] use of AI and associated risk.”

Fulfilling Corporate Duties When Using Artificial Intelligence

The best path, of course, is the proactive one. A comprehensive and robust plan for using AI, I would argue, must consist of at least the following:

  1. Knowledge and training. Boards and executives should, at least, have an understanding of what AI is, how it works, how the organization uses it, and what risks that use presents. Beyond that, the board and executive team should stay informed on matters of significance or risk to the company—in line with the duty to “monitor” and “oversee” significant company operations. Stone, 911 A.2d at 370.
  2. Board committee. In particular, public companies that use AI should form a specialized board committee or subcommittee to provide additional oversight of AI opportunities and risks.
  3. Policies and procedures. Companies should develop a company-wide AI use and development policy. It should be separate from, but work in tandem with, existing corporate policies, such as information security and privacy policies.
  4. Performance testing. Before and after the deployment of any AI tool, companies must implement systems to oversee the accuracy and integrity of the technology and how it affects the company’s goals and potential exposure. This process must be ongoing as the company’s goals and the AI itself advance. (A schematic illustration of one such check appears after this list.)
  5. Institutionalized oversight. Directors and officers should not be the only ones who handle AI oversight. Rather, they should build a team of stakeholders, drawn from operations, legal, technology, product development, and other areas, to evaluate, advise on, and reduce AI risk.
  6. Disclosures. Organizations, and public companies in particular, must ensure that any communications with the public and shareholders about their use and development of AI accurately lay out the technology’s effectiveness and business risks. Claims that exaggerate the former or overly minimize the latter are fodder for suits.
  7. Ethical use. AI use standards should contemplate and address not only business risks but also ethical standards. Those standards should work to eliminate bias and maximize transparency and accountability.
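
To make the performance-testing item concrete, below is a minimal, illustrative sketch in Python of what one recurring accuracy check might look like. Everything in it is hypothetical: the predict function stands in for the deployed AI tool, and the audit set and accuracy floor are placeholders a company would set by policy. A real monitoring program would be far broader, also covering bias, drift, security, and escalation paths.

```python
# Illustrative sketch of a recurring AI performance-testing check.
# "predict", the audit set, and the threshold are hypothetical
# placeholders, not a prescribed or complete monitoring system.

from dataclasses import dataclass


@dataclass
class AuditExample:
    prompt: str
    expected: str


# Hypothetical labeled audit set maintained by the oversight team.
AUDIT_SET = [
    AuditExample("Is the warranty transferable?", "no"),
    AuditExample("What is the return window in days?", "30"),
]

# Threshold set by policy; revisited as the company's goals and the
# AI tool itself evolve.
ACCURACY_FLOOR = 0.95


def run_audit(predict_fn) -> float:
    """Score the tool against the audit set and flag threshold breaches.

    predict_fn is a placeholder for the deployed AI tool: it takes a
    prompt string and returns the tool's answer as a string.
    """
    correct = sum(
        1
        for ex in AUDIT_SET
        if predict_fn(ex.prompt).strip().lower() == ex.expected
    )
    accuracy = correct / len(AUDIT_SET)
    if accuracy < ACCURACY_FLOOR:
        # In practice, this would escalate to the people charged with
        # AI oversight rather than merely print a message.
        print(f"ALERT: accuracy {accuracy:.2%} below floor {ACCURACY_FLOOR:.2%}")
    return accuracy
```

The design point is simply that a breach of the threshold should trigger escalation to those charged with oversight; a recurring, documented check of this kind is one concrete form of the “reporting or information system or controls” that Stone and Caremark contemplate.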

Conclusion

Depending on the size of the enterprise and the nature of its use of AI, these measures may be either insufficient or overkill. But the overall point is that as the uses for AI multiply and the technology becomes more necessary, so, too, does the need for measured, comprehensive, and tailored AI-specific policies and procedures. Otherwise, the consequence may be a suit or regulatory proceeding (or both)—and perhaps an AI-assisted one at that.
