Foundational Board-Level Duties
Directors’ and officers’ fiduciary duties to the corporation are “unyielding.” Smith v. Van Gorkom, 488 A.2d 858, 872 (Del. 1985); see also Gantler v. Stephens, 965 A.2d 695, 709 (Del. 2009) (“explicitly” holding that officers’ fiduciary duties are concomitant with directors’ duties). The specific duties are well established: the duties of care (Van Gorkom, 488 A.2d at 872), loyalty (Cede & Co. v. Technicolor, Inc., 634 A.2d 345, 361 (Del. 1993)), and oversight (In re Caremark Int’l Inc. Derivative Litig., 698 A.2d 959, 970 (Del. Ch. 1996)). In a world where AI is becoming inevitable, the duty of oversight is particularly relevant. Directors and officers violate their oversight responsibilities if “(a) [they] utterly fail[] to implement any reporting or information system or controls; or (b) having implemented such a system or controls, consciously fail[] to monitor or oversee its operations thus disabling themselves from being informed of risks or problems requiring their attention.” Stone v. Ritter, 911 A.2d 362, 370 (Del. 2006) (adopting Caremark). Recently, the Delaware Court of Chancery noted that officers, in particular, have a duty “to identify red flags, report upward, and address the [red flags] if they fall within the officer’s area of responsibility,” and that officers must “make a good faith effort to establish an information[-reporting] system” within their area of “remit.” In re McDonald’s Corp. Stockholder Derivative Litig., 289 A.3d 343 (Del. Ch. 2023).
Courts have yet to apply these oversight duties specifically to a company’s use or development of AI. But given the enormous amounts of capital that companies are pouring into enterprise and consumer AI tools, as well as the unique risks that AI presents (from generating false information to creating data-security vulnerabilities), it is a safe bet that case law is coming. And beyond common-law fiduciary duties, a company has a duty, under the securities laws and beyond, to make accurate and non-misleading statements to the public. As U.S. Securities and Exchange Commission Chair Gary Gensler recently noted, companies “need[] to be truthful about [their] use of AI and associated risk.”
Fulfilling Corporate Duties When Using Artificial Intelligence
The best path, of course, is the proactive one. A comprehensive and robust plan for using AI, I would argue, must include at least the following:
- Knowledge and training. Boards and executives should have, at a minimum, an understanding of what AI is, how it works, how the organization uses it, and what risks that use presents. Beyond that, the board and executive team should stay informed on matters of significance or risk to the company, in line with the duty to “monitor” and “oversee” significant company operations. Stone, 911 A.2d at 370.
- Board committee. In particular, public companies that use AI should form a specialized board committee or subcommittee to provide additional oversight of AI opportunities and risks.
- Policies and procedures. Companies should develop a company-wide AI use and development policy. It should be separate from, but work in tandem with, existing corporate policies, such as information security and privacy policies.
- Performance testing. Before and after deploying any AI tool, companies must implement systems to monitor the technology’s accuracy and integrity and its effect on the company’s goals and potential exposure. This oversight must be ongoing as the company’s goals and the AI itself evolve.
- Institutionalized oversight. Directors and officers should not be the only ones who handle AI oversight. Rather, they should build a team of stakeholders, drawn from operations, legal, technology, product development, and other areas, to evaluate, advise on, and reduce AI risk.
- Disclosures. Organizations, and public companies in particular, must ensure that any communications with the public and shareholders about their use and development of AI accurately describe the technology’s effectiveness and business risks. Claims that exaggerate the former or understate the latter are fodder for lawsuits.
- Ethical use. AI use standards should address not only business risks but also ethical considerations, working to eliminate bias and to maximize transparency and accountability.
Conclusion
Depending on the size of the enterprise and the nature of its AI use, these measures may be either insufficient or overkill. But the overall point is this: as the uses for AI multiply and the technology becomes more necessary, so too grows the need for measured, comprehensive, and tailored AI-specific policies and procedures. Otherwise, the consequence may be a lawsuit or a regulatory proceeding (or both), and perhaps an AI-assisted one at that.