April 30, 2018

Artificial Intelligence and Insurance: A Glimpse of the Future

The growing use of artificial intelligence in business operations and in the insurance industry may provide interesting new twists on traditional legal issues that are commonly disputed in the insurance coverage arena.

by John M. Sylvester and Ralph C. Surman Jr.

Advancements in artificial intelligence (AI) are happening in nearly all industries, such as the use of IBM’s Watson to help diagnose patient diseases and prescribe treatment plans, the development of autopilot technologies in cars to obviate the need for human drivers, and the expansion in industrial use of intelligent robots that not only perform work previously done by humans but also learn how to do that work more efficiently and effectively. While AI technologies are moving forward with lightning speed, the corresponding legal and regulatory systems are not necessarily keeping pace with the rapidity of change.

One area of business in which the disruptive technologies of artificial intelligence will present challenges and opportunities is the insurance industry. Fundamental issues that have steered the underwriting of insurance policies and the handling of claims for hundreds of years, such as concepts of uncertainty, fortuity, predictability, disclosure, and good faith, among others, take on a new dimension when the principal actors to be analyzed are intelligent machines, rather than human beings. Specifically, if the decisions, judgments, and actions that result in the issuance of an insurance policy or the occurrence of a loss or the payment of a coverage claim are being made by artificial intelligence rather than human actors, how does that fact affect the application of traditional legal principles governing insurance law?

This article considers some legal issues that often arise in the context of coverage disputes between policyholders and insurers and reviews those issues in the context of artificially intelligent actors that have a substantial role in the conduct in question. For example, the concept of “expected or intended” is one that can be the subject of coverage disputes when claims arise. Also, the concept of timely notice of claims, along with alleged prejudice from late notice, is often debated between policyholders and insurers when claims are presented. Moreover, questions regarding a policyholder’s alleged misrepresentations or material omissions (or both) in applying for an insurance policy can be the subject of an attempt by an insurer to void an insurance policy. Finally, whether an insurer has satisfied its legal duties to a policyholder of good faith and fair dealing can be the point of contention in the handling of an insurance claim.

This article can only scratch the surface of the impact of artificial intelligence on the resolution of various insurance issues, but our hope is to provide food for thought for insurance lawyers who will be confronting these issues in the coming years.

The “Expected or Intended” Defense

The presence of artificial intelligence in policyholder decision making may affect the application of the “expected or intended” defense in a number of ways. For example, courts will be forced to grapple with the question of whose expectation or intention is relevant. Is it the expectation or intention of the person who allowed AI to make the critical decision that led to the liability—or is that person blameless if the decision was made by an algorithm embedded in the AI function, with no explicit ratification by a human actor? This question implicates a number of commonly debated legal questions relating to the “expected or intended” defense.

For example, one issue typically debated between policyholders and insurers is the question of whether the “expected or intended” question should be resolved by application of a subjective or objective standard. Policyholders typically argue that the subjective standard applies—namely, did the policyholder, in his or her mind, actually expect the resulting injury or damage resulting from the activity in question? By contrast, insurers frequently argue that an objective standard should apply—arguing that, regardless of whether the policyholder actually did expect or intend damage, a reasonable person in the position of the policyholder should have expected, or should have understood, that the activity in question would lead to injury or damage.

The New York Court of Appeals, in Continental Casualty Co. v. Rapid-American Corp., 609 N.E.2d 506, 510 (N.Y. 1993), adopted a subjective approach to the “expected or intended” defense. At issue in Rapid-American was a policyholder’s insurance coverage for asbestos-related personal injury actions. Id. at 508–9. The insurance policy at issue defined the term “occurrence” as a “continued or repeated exposure to conditions which unexpectedly or unintentionally results” in injury or damage. Id. at 509. As the court explained, “[f]or an occurrence to be covered under the [insurer’s] policies, the injury must be unexpected and unintentional. We have read such policy terms narrowly, barring recovery only when the insured intended the damages.” Id. at 510. Accordingly, in Rapid-American, the New York Court of Appeals essentially collapsed the meaning of expected or intended into an inquiry solely limited to the policyholder’s intent. In adopting a subjective standard, the Rapid-American court favorably cited two earlier decisions of the court, both of which are consistent with the subjective approach that measures expectation or intent based on the policyholder’s actual state of mind. Id. (citing McGroarty v. Great Am. Ins. Co., 329 N.E.2d 172 (N.Y. 1975), and Miller v. Cont’l Ins. Co., 358 N.E.2d 258 (N.Y. 1976)).

In McGroarty, the New York Court of Appeals held that the policyholder’s excavation and construction on its own property, which caused continuing property damage to a neighbor’s building, was not expected or intended despite the policyholder’s knowledge that its actions “might lead to some eventual damage to the building.” 329 N.E.2d at 173–75. This knowledge of a “calculated risk,” however, did not equate to a finding that the policyholder “intended that plaintiff’s building should, as a result, incur the damage which did eventuate.” Id. at 175. Similarly, a New York Appellate Division court has held that a policyholder who manufactured asbestos-containing products did not expect or intend—for purposes of voiding insurance coverage—the resulting asbestos-related injuries. See Union Carbide Corp. v. Affiliated FM Ins. Co., 955 N.Y.S.2d 572, 575 (N.Y. App. Div. 2012). Instead, that court found that the policyholder “was merely aware that asbestos could cause injuries and that claims could be filed” and that its “‘calculated risk’ in manufacturing and selling its products despite its awareness of possible injuries and claims does not amount to an expectation of damage.” Id.

By contrast, an insurer typically argues that an objective standard should be applied to determine if a policyholder expected or intended injury such that the resulting liability is excluded from coverage. For example, insurers cite County of Broome v. Aetna Casualty & Surety Co., 540 N.Y.S.2d 620, 622 (N.Y. App. Div. 1989), which involved a coverage dispute concerning environmental liability arising from a landfill that the county had operated for more than a decade before ceasing its operation. A federal complaint against the county alleged that certain wastes dumped in the landfill while it was in operation were contaminating the nearby soils and groundwater and causing personal injuries and property damages. Id. at 621. The insurers contended that there was no occurrence because the injuries and damage caused by the dumping should have been expected by the county, which was “aware” of pollution being caused by its operation of the landfill, and yet it chose to continue its operations unabated. Id. at 621–22. The county argued that the “consequences” of the dumping “were neither expected nor intended” and it was “at most only negligent in allowing pollution to develop.” Id. The Appellate Division sided with the insurer and found that the evidentiary record indicated that the county “was aware of the problems at the landfill” based on previous inspections conducted by various governmental units, and despite this awareness, the county “continued to permit dumping at the landfill.” Id. at 622. In reaching this conclusion, the Appellate Division stated that “personal injuries or property damages are expected if the actor knew or should have known there was a substantial probability that a certain result would take place.” Id. (citing Auto-Owners Ins. Co. v. Jensen, 667 F.2d 714, 719–20 (8th Cir. 1981)).

Considering these two different approaches for analyzing the “expected or intended” defense, let’s hypothesize that a pesticide manufacturer engages in robust pre-launch testing to determine if a newly developed pesticide product can be used safely and effectively for its intended purpose. In so doing, the manufacturer commissions hundreds of tests to be conducted regarding product efficacy, toxicity, and possible side effects under many different environmental conditions, over a period of many years, in many different countries around the world. Because of the substantial volume of data generated from this multiyear battery of testing, the manufacturer feeds all of the data into an artificially intelligent computer (think IBM Watson) to review the data and determine whether the product, when used, will be safe and effective such that it can be launched into the market for sale. Subsequently, the AI computer analyzes all the data and concludes that the pesticide manufacturer should proceed with launching the product. (Note: One example of an industry in which AI can assist a policyholder in business decisions is pharmaceuticals. See, e.g., Daniel Faggella, “7 Applications of Machine Learning in Pharma and Medicine,” Techemergence, Jan. 11, 2018 (“The use of machine learning in preliminary (early-stage) drug discovery has the potential for various uses, from initial screening of drug compounds to predicted success rate based on biological factors.”).) After the product launch, however, the product causes significant harm—hypothetically, the pesticide kills desirable honey bees in addition to killing undesirable insects. This killing of honey bees leads to liability claims by farmers and others who rely on honey bees to pollinate their crops.
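By way of illustration only, the following Python sketch shows what such an automated launch decision might look like in code. Every element (the record fields, the numeric thresholds, the aggregation logic) is invented for this hypothetical and is not drawn from any actual system; the point is simply that the "expectation" of harm, if any, lives in the machine's parameters rather than in any person's mind.

```python
# Hypothetical sketch of an automated "go / no-go" product-launch decision
# built from a large battery of test results. All fields and thresholds
# are invented for this article's pesticide hypothetical.

from dataclasses import dataclass

@dataclass
class TestResult:
    country: str
    toxicity_score: float    # 0.0 (benign) to 1.0 (highly toxic)
    efficacy_score: float    # 0.0 (ineffective) to 1.0 (fully effective)
    non_target_harm: float   # observed harm to non-target species

def recommend_launch(results: list[TestResult],
                     max_toxicity: float = 0.2,
                     min_efficacy: float = 0.8,
                     max_non_target: float = 0.1) -> bool:
    """Aggregate thousands of test records into a single launch decision.
    No human reviews any individual record; the decision rests entirely
    on these thresholds."""
    avg = lambda xs: sum(xs) / len(xs)
    return (avg([r.toxicity_score for r in results]) <= max_toxicity
            and avg([r.efficacy_score for r in results]) >= min_efficacy
            and avg([r.non_target_harm for r in results]) <= max_non_target)

# Example: a test battery that never included honey-bee exposure may show
# low non-target harm and recommend launch, even though harm later occurs.
results = [TestResult("US", 0.10, 0.90, 0.05),
           TestResult("FR", 0.15, 0.85, 0.08)]
print(recommend_launch(results))  # -> True: "go," with no human in the loop
```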

Under the subjective standard for the “expected or intended” defense, the policyholder cannot be said to have subjectively expected or intended the damage giving rise to liability because the policyholder’s only relevant thought was to rely on AI to make the determination that the product, when launched, would be safe for use. Because there was no subjective judgment by any particular individual at the pesticide manufacturer one way or another, there could not be any subjective expectation (or intention) of harm. The manufacturer is simply relying on the AI computer to make that judgment. Thus, it would seem that a pesticide manufacturer using AI in that scenario would, by definition, be insulated from any argument by the insurer that the “expected or intended” defense applies.

On the other hand, if the objective standard applied under the relevant state law for consideration of the insurer’s “expected or intended” defense, the analysis would be somewhat different in our pesticide product hypothetical. Specifically, the insurer would argue that, based on all available information, the policyholder “should have expected” the damage resulting from the product launch. Nonetheless, the pesticide manufacturer could counter that the judgment to launch the product was made by the AI computer; therefore, in relying on the “go forward” decision of AI, it cannot be argued that it “should have known” of the adverse consequences of the product launch because a purely objective analysis of the data performed by the AI computer made the determination that harm was not foreseeable. Thus again, the policyholder could insulate itself from an objective standard “expected or intended” defense because the quintessential objective actor—i.e., the AI computer dispassionately analyzing all available data—did not expect the harm that ultimately occurred. See also William Shaw, “What Insurers Need to Know as Driverless Cars Hit UK Roads,” Law360, Jan. 17, 2018 (noting that proposed law for driverless cars would “essentially impose strict liability for insurers,” who can then “pursue their own product liability claims against the manufacturers of software, or of the vehicles themselves”).

The analysis may be further complicated by a manufacturer’s retention of a third-party consultant to use its AI computers to analyze data and make product-launch decisions. Consider a scenario in which a policyholder outsources those decisions to a consultant that uses AI for such decisions. In this circumstance, an insurer advancing an “expected or intended” defense can pivot to a new argument: the relevant issue is whether it was reasonable for the policyholder to retain and rely on a third-party consultant with an AI computer to make the product-launch decision, because that reliance, rather than the product launch itself, was the relevant act that led to the harm in question.

In short, the development of AI in business judgment decision making adds a new dimension to the “expected or intended” defense that policyholders may be able to take advantage of in fending off this particular coverage defense by insurers. If policyholders are no longer making the key business judgments that may give rise to liability, how can they be alleged to have “expected or intended” the injury or damage arising from those judgments? After all, “expected” or “intended” is an analysis of a human state of mind, not an analysis of a computer calculation.

Late Notice Defense

A commonly disputed issue in insurance coverage claims is whether the policyholder provided “timely notice” of a claim or loss to the insurer. If notice is deemed to have been untimely, some jurisdictions require an insurer to prove that it was prejudiced by the late notice in order to avoid coverage for the claim or loss. See Brakeman v. Potomac Ins. Co., 371 A.2d 193, 196 (Pa. 1977). In other jurisdictions, a showing of prejudice may not be required before an insurer can deny coverage on late notice grounds. See Am. Home Assurance Co. v. Int’l Ins. Co., 684 N.E.2d 14, 18 (N.Y. 1997). But see N.Y. Insurance Law § 3420 (McKinney 2013) (“No policy or contract insuring against liability . . . shall be issued or delivered in [New York], unless it contains in substance the following provisions[:] . . . (5) A provision that failure to give any notice required to be given by such policy within the time prescribed therein shall not invalidate any claim made by the insured . . . unless the failure to provide timely notice has prejudiced the insurer[.]”). In the case of reinsurers, a showing of prejudice is required before the reinsurer can deny coverage based on late notice. See Unigard Sec. Ins. Co. v. N. River Ins. Co., 594 N.E.2d 571, 583 (N.Y. 1992) (“[F]ailure to give the required prompt notice is of substantially less significance for a reinsurer than for a primary insurer.”).

With the rise of artificial intelligence in business operations, including those of insurance companies, the importance of a policyholder giving specific notice of a loss to an insurer may diminish. See Vincent Branch, “Artificial Intelligence & Insurance: The Unexpected Love Affair?,” XL Catlin, Nov. 26, 2017 (“And artificial intelligence will influence all parts of insurance, from how the product is sold, how we price and underwrite and to, most importantly, how we quickly pay claims.”). For example, if a hurricane makes landfall in a particular area of the Texas Gulf Coast, it would be relatively easy for an insurer’s artificially intelligent computer to scan all of the underwriting files of its policyholders and to determine, with speed and precision, which of its policyholders’ facilities are located in the landfall area. At that point, the insurer can contact the policyholder about any potential loss or damage and send out a claims adjuster immediately. In such a scenario, what is the necessity of the policyholder sending an email giving notice to the insurer about the potential for hurricane-related loss at its facility? That fact would already be common knowledge available to the insurer.
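For illustration, here is a minimal sketch of such a scan, assuming each underwriting record carries the insured facility's coordinates. The record fields, landfall point, and 50-mile radius are all hypothetical, not a description of any insurer's actual system.

```python
# Hypothetical sketch: flag insured facilities inside a hurricane landfall
# zone so the insurer, rather than the policyholder, initiates contact.

from math import radians, sin, cos, asin, sqrt

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in miles."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 3956 * 2 * asin(sqrt(a))

def facilities_in_landfall_zone(policies, landfall_lat, landfall_lon,
                                radius_miles=50):
    """Scan every underwriting record; return policies with a facility
    inside the landfall radius (candidates for proactive adjuster contact)."""
    return [p for p in policies
            if haversine_miles(p["lat"], p["lon"],
                               landfall_lat, landfall_lon) <= radius_miles]

# Example: Gulf Coast landfall near Rockport, Texas (approx. 28.02 N, 97.05 W).
policies = [{"id": "P-1001", "lat": 28.1, "lon": -97.0},   # coastal: inside zone
            {"id": "P-2002", "lat": 32.8, "lon": -96.8}]   # Dallas: outside zone
print(facilities_in_landfall_zone(policies, 28.02, -97.05))
```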

It would make no sense to penalize the policyholder for failing to give timely notice to an insurer when the insurer already knew about the potential loss—or could easily have learned about the potential loss—through its own artificially intelligent business processes. Indeed, in the context of a marine insurance policy that provides hull and cargo insurance, marine insurers likely have access to worldwide shipping information that rapidly conveys to interested parties notice that a particular ship has gone down in the ocean. Again, with that information, an AI business process could quickly search an insurer’s policy files and underwriting records to determine which policyholders either owned or chartered the vessel or had cargo being conveyed on that vessel. Moreover, there may be an existing, commonly accessible blockchain record that contains all transactions for that vessel, such that an insurer’s AI computer can connect to that blockchain record and search for any relevant policyholder property located aboard the vessel.
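Again purely for illustration, a sketch of that matching process follows, with a plain Python list standing in for the commonly accessible blockchain ledger the text contemplates. Every identifier and field name is invented.

```python
# Hypothetical sketch: on a casualty report for a vessel, search policy
# records and a shared transaction ledger for any insured interest in it.

def insureds_with_interest(vessel_imo: str, policies: list[dict],
                           ledger: list[dict]) -> set[str]:
    hits = set()
    for p in policies:  # hull cover: policyholder owned or chartered the vessel
        if vessel_imo in (p.get("owned_vessels", [])
                          + p.get("chartered_vessels", [])):
            hits.add(p["policyholder"])
    for tx in ledger:   # cargo cover: insured shipments aboard the vessel
        if tx["vessel_imo"] == vessel_imo:
            hits.add(tx["cargo_insured"])
    return hits

policies = [{"policyholder": "Acme Shipping",
             "owned_vessels": ["IMO 9999999"]}]
ledger = [{"vessel_imo": "IMO 9999999", "cargo_insured": "Grain Traders LLC"}]
print(insureds_with_interest("IMO 9999999", policies, ledger))
# -> {'Acme Shipping', 'Grain Traders LLC'}
```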

In the foregoing examples, it seems that the requirement of a policyholder giving prompt notice of a loss as a prerequisite to coverage is no longer necessary, given that the insurer can quickly learn of the loss—perhaps as early as, or even earlier than, the policyholder. Accordingly, if an insurer is made aware of a policyholder’s potential loss through some means other than the policyholder’s own communication of that potential loss, the insurer should not be able to point to the policyholder’s lack of timely communication about the loss as a basis for avoiding coverage.

Misrepresentation or Material Omission Defense

Another frequently disputed issue in insurance coverage cases is an alleged misrepresentation or material omission by a policyholder in applying for the insurance policy. Insurers may attempt to void coverage based on a policyholder’s alleged failure to disclose, known as a “misrepresentation” or a “material omission.” See Travelers Indem. Co. of Ill. v. CDL Hotels USA, Inc., 322 F. Supp. 2d 482, 498–99 (S.D.N.Y. 2004). It has been held that a policy procured through material misrepresentation by the insured may be rescinded by a court. See Nat’l Union Fire Ins. Co. of Pittsburgh, PA v. Hicks, Muse, Tate & Furst, Inc., No. 02 CIV 1334 (SAS), 2002 WL 1313293, at *5 (S.D.N.Y. June 14, 2002). A misrepresentation is deemed “material” when the insurer “would not have issued the policy had it known the truth.” Id. Specifically, for a misrepresentation or omission to be considered “material,” the insurer must establish that it would not have issued the same policy if the correct information had been disclosed. See Parmar v. Hermitage Ins. Co., 21 A.D.3d 538, 540–41 (N.Y. App. Div. 2005) (citing cases).

As in the case of the late notice and prejudice defense, the misrepresentation (or material omission) defense may become obsolete in the new world of AI and the ever-expanding use of blockchain technology. For example, an insurer would raise a defense of misrepresentation or material omission if a claim is made and the insurer then determines that there was some fact about the risk that it would have wanted to know at the time of underwriting but of which the policyholder did not inform it—and, had the insurer known of that fact, it would never have issued the policy or would have charged a higher premium.

This coverage defense arose centuries ago when there was unequal information between a policyholder and insurer about the nature of a particular risk to be insured. Indeed, the doctrine of uberrimae fidei (“utmost good faith”) dates back to the 17th century, when marine insurers relied on ship owners to disclose all of the relevant information about the seaworthiness of a vessel before a marine insurance policy was issued. See, e.g., N.Y. Marine & Gen. Ins. Co. v. Tradeline (L.L.C.), 266 F.3d 112, 123 (2d Cir. 2001) (describing notion of uberrimae fidei in marine insurance).

However, the notion of a policyholder having to disclose all material information about a risk, whether or not the insurer specifically requests that information, should not be viable where an insurer has ready access to the relevant information through public sources and can use AI to speedily review the data in those sources to pinpoint the desired information. Whether it be the presence of a policyholder’s ship on the high seas, the values of the policyholder’s properties on land, or the financial performance of a policyholder’s business unit, much if not all of that information may be obtained from open sources that an insurer’s AI computer can readily process and then summarize for an underwriter. (Note: In addition to making these information-sharing functions more time-efficient, blockchain technology will also achieve these tasks more cost-effectively. See Matthew Lerner, “Insurers Test Out Blockchain,” Bus. Ins., Dec. 2017, at 4.) This type of AI-aided underwriting may become standard operating practice for insurance companies before they issue particular policies. Accordingly, the whole premise of the misrepresentation or material omission defense may fade away if the underwriter need not rely on the policyholder to provide relevant underwriting information that is readily available through commonly accessible sources.
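A hedged sketch of what that AI-aided underwriting step might look like appears below. The three data fetchers return canned values and merely stand in for whatever open sources (ship-tracking feeds, assessor records, securities filings) an insurer can actually reach; none of the names reflects a real system.

```python
# Hypothetical sketch: assemble the material facts an underwriter would
# otherwise ask the applicant to disclose, pulled from public sources.
# The fetchers below are placeholders returning canned data.

def fetch_vessel_positions(applicant: str) -> list[str]:
    return ["MV Example: 31.2N 32.3E (Suez approach)"]       # stand-in for a ship-tracking feed

def fetch_property_values(applicant: str) -> dict:
    return {"plant_a": 12_500_000, "warehouse_b": 4_200_000}  # stand-in for assessor records

def fetch_financials(applicant: str) -> dict:
    return {"revenue": 310_000_000, "loss_history": 2}        # stand-in for public filings

def underwriting_summary(applicant: str) -> dict:
    """Gather what the underwriter would otherwise rely on disclosure for."""
    return {"applicant": applicant,
            "vessels": fetch_vessel_positions(applicant),
            "properties": fetch_property_values(applicant),
            "financials": fetch_financials(applicant)}

print(underwriting_summary("Acme Shipping"))
```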

Bad-Faith Claims Handling

Finally, insurance policies place on insurers an obligation of good faith and fair dealing in their claims-handling practices. Under New York law, for example, bad-faith claims are a type of punitive measure arising out of breach of contract actions. See N.Y. Univ. v. Cont’l Ins. Co., 662 N.E.2d 763, 767 (N.Y. 1995). An insurer acts in bad faith when (1) its conduct is actionable as an independent tort, (2) the tortious conduct is egregious, (3) the egregious conduct is directed toward the plaintiff, and (4) the conduct is part of a pattern directed at the public generally. Id.; see also Sichel v. Unum Provident Corp., 230 F. Supp. 2d 325, 328 (S.D.N.Y. 2002) (describing NYU test as a claim for “bad faith denial of coverage”). Bad faith is a tort remedy available as the result of a breach of contract. See Wiener v. Unumprovident Corp., 202 F. Supp. 2d 116, 123 (S.D.N.Y. 2002). A bad-faith claim is available to an insured who can demonstrate that the insurer made a bad-faith refusal to pay out policy benefits. See Acquista v. N.Y. Life Ins. Co., 285 A.D.2d 73, 77–78 (N.Y. App. Div. 2001).

Recent reports suggest that insurance companies may be relying more and more on artificial intelligence in their business processes, including the handling of policyholder claims. See Brenna Hughes Neghaiwi & John O’Donnell, “Zurich Insurance Starts Using Robots to Decide Personal Injury Claims,” Reuters, May 18, 2017 (Zurich chairman stating, “We recently introduced AI claims handling . . . and saved 40,000 work hours, while speeding up the claim processing time to five seconds[.] . . . We absolutely plan to expand the use of this type of AI[.]”); see also Norton Rose Fulbright, Unlocking the Blockchain: A Global Legal and Regulatory Guide 10, 15 (noting that insurers are considering use of “smart contracts” to aid in claims handling); Matthew Lerner, “Blockchain Technology Breaks Through,” Bus. Ins., July 2017, at 7 (same).

If an insurer’s decisions on whether to accept and pay a claim—or, alternatively, to deny a claim—are made by AI rather than by human judgment, there are significant implications for insurance bad-faith law. For example, in bad-faith cases, key issues that are often litigated include whether an insurer’s decision to deny a claim is “unreasonable” and whether, in denying coverage, the insurer acted with malice—namely, with a reckless disregard for the facts supporting coverage. See Rancosky v. Wash. Nat’l Ins. Co., 170 A.3d 364, 377 (Pa. 2017) (holding that an insurer acts in bad faith when it knowingly or recklessly disregards its “lack of a reasonable basis in denying the claim,” which can be demonstrated by a “motive of self-interest or ill-will,” among other factors).

Thus, a bad-faith claim against an insurer can include inquiries that are objective in nature—i.e., whether a claim denial is unreasonable—as well as inquiries that focus on the state of mind of the insurer—i.e., whether the insurer’s decision to deny coverage was motivated by self-interest or ill will. In this regard, an insurer may attempt to insulate itself from a bad-faith claim by arguing that a denial of coverage was determined by an artificially intelligent computer, applying parameters set forth in an algorithm that was designed without any particular desired result. Of course, this argument may be rebutted by showing that an insurer’s reliance on AI to make coverage decisions was unreasonable because of obvious flaws in the relevant algorithm or because the algorithm was skewed improperly toward denying coverage rather than accepting coverage for a claim. Policyholders may also argue that it is per se unreasonable for an insurer to delegate claims-payment decisions to a computer without human review of those decisions. Such a debate could require new and different kinds of expert testimony on both sides. Rather than each side proffering a bad-faith expert who typically has had decades of experience handling insurance claims, the parties may need to retain technical experts who understand in detail how the AI computers are programmed and whether they have particular biases or leanings. Indeed, the battle of bad-faith experts could be transformed into a battle of AI experts.
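To make the point concrete, one simple test a policyholder's AI expert might run is a comparison of the algorithm's denial rate against human adjusters' decisions on the same benchmark set of claims. This is only one of many possible analyses, and the numbers below are invented for illustration.

```python
# Hypothetical sketch: test whether a claims algorithm is "skewed improperly
# toward denying coverage" by comparing its denial rate to human adjusters'
# decisions on the same benchmark claims (two-proportion z-test).

from statistics import NormalDist

def denial_rate_gap(algo_denials: int, human_denials: int, n: int) -> float:
    """Return the two-sided p-value for observing a gap this large if the
    algorithm and human adjusters truly deny at the same rate."""
    p1, p2 = algo_denials / n, human_denials / n
    pooled = (algo_denials + human_denials) / (2 * n)
    se = (2 * pooled * (1 - pooled) / n) ** 0.5
    z = (p1 - p2) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# 1,000 benchmark claims: the algorithm denies 180; human adjusters denied 120.
p = denial_rate_gap(algo_denials=180, human_denials=120, n=1000)
print(f"p-value for equal denial rates: {p:.4f}")  # a small p-value suggests skew
```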

Conclusion

The growing use of artificial intelligence in business operations and in the insurance industry may provide interesting new twists on traditional legal issues that are commonly disputed in the insurance coverage arena. In addition, entirely new areas of dispute will likely arise in the context of contested coverage claims. Courts will no doubt be forced to grapple with these new areas of dispute, and the insurance lawyers who are best equipped to understand and explain the workings—and limitations—of artificial intelligence will be ahead of the game when these questions arise.

John M. Sylvester and Ralph C. Surman Jr. are with K&L Gates LLP in Pittsburgh.

Copyright © 2018, American Bar Association. All rights reserved. This information or any portion thereof may not be copied or disseminated in any form or by any means or downloaded or stored in an electronic database or retrieval system without the express written consent of the American Bar Association. The views expressed in this article are those of the author(s) and do not necessarily reflect the positions or policies of the American Bar Association, the Section of Litigation, this committee, or the employer(s) of the author(s).