
AI Legal Updates in the Insurance Industry

Jessica L. Gallagher


Artificial intelligence (AI) is the hottest legal topic of the year—and with good reason! AI (and its algorithmic decision-making and big data counterparts) holds incredible power to compile and interpret data from countless data sources and provide outputs that improve accuracy, efficiency, and decision-making across nearly all industries. Insurance is no exception to that rule. Insurers across the globe have identified ways to use AI to improve marketing, underwriting, rating, claims decisions, and fraud detection. Like all technology used by insurers, AI is subject to regulation aimed at protecting consumers from inaccurate decision-making, unintended discrimination against protected classes, and privacy and safety issues.

NAIC Model Bulletin Adoption

On December 4, 2023, the National Association of Insurance Commissioners (NAIC) formally adopted its model bulletin: “Use of Artificial Intelligence Systems by Insurers” (Model Bulletin). The Model Bulletin sets forth regulatory guidance and expectations for insurers using “AI Systems.” Most significantly, the Model Bulletin requires insurers to develop and implement a written “AIS [AI Systems] Program” designed to protect consumers against adverse outcomes resulting from the use of AI, such as inaccurate, arbitrary, capricious, or unfairly discriminatory decisions. Insurers must also adopt a clear governance and risk management framework that creates accountability for the use of AI Systems within the organization and delegates responsibility and authority.

The Model Bulletin leaves the exact structure of that framework open, instead encouraging insurers to take a risk-based approach. The Model Bulletin also “encourages” insurers to test their AI Systems to identify errors and bias, as well as the potential for unfair discrimination in their results. It does, however, impose some objective requirements, including disclosure of the use of AI Systems to consumers.

The Model Bulletin makes the use of both AI Systems and the AIS Program subject to review and evaluation through market conduct examinations. The Model Bulletin also makes insurers responsible for the use of AI Systems by third parties with whom they do business.

As this article went to publication, 17 jurisdictions had issued variations of the bulletin. Some states have adopted the Model Bulletin nearly word for word, while others have included heightened standards, creating a patchwork of regulation that requires careful tracking to ensure compliance. For example, Connecticut’s bulletin specifically requires testing to identify errors, bias, and unfair discrimination, while New Hampshire’s “strongly encourages” such testing. Connecticut also imposes a September 1, 2024, deadline for Connecticut domestic insurers to complete an “Artificial Intelligence Certification” confirming compliance with the laws applicable to their use of AI Systems. Illinois’s bulletin deviates from the Model Bulletin by defining “Insurers” to mean not only all insurance companies but all regulated entities licensed to do business in Illinois, significantly expanding the reach and applicability of the requirements.

State Legislation

Meanwhile, a boom of AI-related legislation is occurring across the country. Twenty bills have been introduced in Congress this year, and at least 15 states have introduced legislation addressing the use of AI and/or algorithmic decision-making, with many states introducing multiple bills. Colorado, by contrast, took action years earlier.

In 2021, Colorado passed Senate Bill 21-169, which prohibits insurers from using algorithms and predictive models that rely on external consumer data and information sources (such as credit scores, social media habits, and court records) in a manner that results in unfair discrimination. On June 1, 2024, life insurers subject to that legislation were required to furnish a progress report to the Colorado Department of Insurance describing their progress in creating frameworks to evaluate whether their use of external consumer data results in unfair racial discrimination.

Takeaway

The flurry of legislation and regulation could understandably cause an insurer to shy away from using AI. But the better approach is to develop an interdisciplinary team (one that includes in-house counsel) focused on implementing these tools in an appropriate and compliant manner.