AI Risks Transferred to Insurers
AI brings more than benefits and improvements to the insurance industry and its processes. As businesses across all industries incorporate AI tools into their day-to-day operations and automate processes and decision-making, companies are increasingly worried about AI risk.
From privacy and data protection concerns to intellectual property infringement, the challenges are multifaceted. Among these, two critical risks—model bias and model underperformance because of data drift—loom large, casting shadows over the successful deployment of AI systems. Model bias refers to the inherent prejudices embedded in AI algorithms, leading to discriminatory outcomes. Data drift, the gradual evolution of input data over time, can cause AI models to underperform, affecting their accuracy and reliability.
These risks carry potentially significant financial implications for organizations. In fact, managing AI risks has been listed as the main barrier leaders face in scaling existing AI initiatives. Insurance has enabled business ventures in countless previous instances—as Henry Ford famously said, “[w]ithout insurance we would have no skyscrapers. . . .” Insurance could also prove to be the right vehicle to manage and transfer AI-related risks to support the safe adoption of AI by companies and society.
As we discuss below, traditional coverages do not fully protect against AI risks; they leave significant coverage gaps. We will touch on some of the available insurance coverages specifically for AI today.
AI as an insurable risk. For a risk to be insurable, it needs to be pure (resulting in a loss or no loss with no possibility of financial gain—to be contrasted with speculative risks like gambling), quantifiable (measurable in financial terms), and fortuitous (the insured event needs to occur by chance), and the corresponding losses need to be measurable. When analyzing the risk of model underperformance, it becomes clear that AI risks exhibit these elements of insurability.
What is model underperformance? Suppose an AI model classifies credit card transactions as fraudulent or not fraudulent. Further, suppose that the model correctly classifies these transactions 90 percent of the time on previously unseen test data—i.e., the AI model has an error rate of 10 percent as determined on test data. The performance of an AI model (i.e., its error rate) can fluctuate for various reasons, for example because of data drift, which occurs at random and can cause a spike in the error rate. As noted above, “data drift” refers to unanticipated and gradual changes in the characteristics and distribution of the incoming data, introducing unexpected variations that can affect the performance and reliability of machine-learning models.
In our example, suppose the model correctly identifies only 80 percent of fraudulent transactions when actively used in the real world in a given month because the associations in the data have changed compared with what the model saw in the test data (i.e., there is data drift between the test data and the actual use-case data in that month). This data drift exposes the user to twice the anticipated volume of fraud claims. More generally, wherever AI systems are crucial to operations, underperformance can result in losses, business interruptions, and decreased productivity. Transferring this risk of statistical fluctuation in the error rate could benefit many AI users because it creates financial certainty.
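The arithmetic behind this example can be made concrete with a short illustrative calculation. The transaction volume and fraud rate below are assumptions chosen for illustration only, not figures from the example:

```python
# Minimal sketch (with assumed, illustrative numbers): how a drift-induced jump in
# the error rate from 10% to 20% roughly doubles the fraud that slips through.

def missed_fraud(transactions: int, fraud_rate: float, error_rate: float) -> float:
    """Expected number of fraudulent transactions the model fails to flag."""
    return transactions * fraud_rate * error_rate

MONTHLY_TRANSACTIONS = 1_000_000   # assumed monthly volume, for illustration only
FRAUD_RATE = 0.002                 # assumed share of transactions that are fraudulent

baseline = missed_fraud(MONTHLY_TRANSACTIONS, FRAUD_RATE, error_rate=0.10)  # test-data error rate
drifted = missed_fraud(MONTHLY_TRANSACTIONS, FRAUD_RATE, error_rate=0.20)   # error rate after drift

print(f"Missed fraud at 10% error rate: {baseline:,.0f} transactions")
print(f"Missed fraud at 20% error rate: {drifted:,.0f} transactions")
print(f"Increase factor: {drifted / baseline:.1f}x")  # -> 2.0x, i.e., double the exposure
```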
Insuring against AI underperformance falls within the domain of pure risk, as businesses seek coverage against the negative outcome of AI systems failing to meet predefined performance thresholds for reasons that cannot be fully mitigated technically. Establishing performance benchmarks and checking the AI’s historical performance (or representative test performance) against those thresholds allows an estimation of the probability of underperformance. Insurers can then determine the financial impact, should these benchmarks not be met, which provides a basis for calculating premiums and payouts. As underperformance is often caused by data drift—by definition unanticipated—it aligns with the need for fortuitousness. As a result, all elements of insurable risks are met.
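To make this pricing logic concrete, the sketch below shows one simple way an insurer might estimate the probability of breaching an agreed threshold from historical (or representative test) performance and translate it into an indicative premium. The threshold, payout, loading, and accuracy history are all illustrative assumptions, not any insurer's actual methodology:

```python
# Minimal sketch, not an actuarial model: estimate the probability that monthly
# accuracy falls below an agreed threshold from historical observations, then
# price a simple expected-loss premium. All figures are illustrative assumptions.

THRESHOLD = 0.90            # agreed performance benchmark
PAYOUT_IF_BREACH = 250_000  # assumed agreed payout per month the benchmark is missed
LOADING = 1.3               # assumed loading for expenses and profit

# Assumed historical (or representative test) monthly accuracy observations.
monthly_accuracy = [0.93, 0.91, 0.88, 0.92, 0.94, 0.90, 0.86, 0.92, 0.91, 0.93, 0.89, 0.92]

breaches = sum(1 for acc in monthly_accuracy if acc < THRESHOLD)
p_breach = breaches / len(monthly_accuracy)      # empirical breach frequency

expected_loss = p_breach * PAYOUT_IF_BREACH      # expected payout per month
monthly_premium = expected_loss * LOADING        # simple loaded premium

print(f"Estimated probability of underperformance: {p_breach:.0%}")
print(f"Expected monthly payout: ${expected_loss:,.0f}")
print(f"Indicative monthly premium: ${monthly_premium:,.0f}")
```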
The insurability of other AI risks (e.g., intellectual property infringement, discrimination, liabilities) is less straightforward. For one, the legal environment around AI liability and intellectual property infringement is nascent, and the treatment of AI in courts remains opaque, making it difficult for insurers to estimate potential losses and calculate corresponding premiums. Pending court cases and regulations will increase the transparency of this risk. Furthermore, quantifying other AI risks is more complex. That complexity can be reduced by tying those risks to a performance threshold (e.g., measuring bias in the form of a fairness metric), framing them as performance risks, and then quantifying them as described above.
Uncertainty of traditional insurance coverage. Knowing that AI risks are insurable, our next focus is to determine whether and how existing policies protect against AI risks.
Given the widespread integration of AI, damages arising from AI-related incidents can manifest in various forms, including—as in the example above—financial losses and operational disruptions, but can also lead to data and privacy breaches, as well as legal liabilities including intellectual property infringement. The damages incurred may implicate a range of insurance policies, such as cyber insurance for data breaches, general liability insurance for physical harm caused by AI-based machinery, technology liability for negligence claims, and media liability for intellectual property infringement during AI model training.
Traditional insurance policies can offer coverage for certain AI-related losses, bridging some gaps but leaving significant areas unprotected. General liability policies most likely cover AI-caused physical harm, but most liability policies exclude discrimination. Discrimination is covered in employment practices liability insurance, but only employment-related discrimination. As AI models are increasingly used in areas where antidiscrimination laws apply (e.g., healthcare, real estate, credit approvals), this could leave users uninsured against potential lawsuits, including class actions.
Cyber insurance policies are effective against data privacy issues, but they may fall short in cases like the Samsung data leak, where employees inadvertently leaked source code while using ChatGPT to help with work. Because the unauthorized disclosure involved the insured’s own proprietary data, the insurer could deny coverage for the resulting losses under the cyber policy. Relying on AI vendors’ coverage poses its own challenges, especially considering the potential size of the financial effects on businesses.
Technology liability policies are meant to cover third-party claims for negligence, yet the complexity of AI risks challenges their effectiveness. The “black box” nature of certain AI models complicates determining negligence, and uncertainties around applicable standards make these policies a primitive tool for AI risk protection.
Because AI risks are novel, existing policies will change over time to encompass certain of them. For now, companies and AI providers face significant coverage gaps. The uncertainty around coverage makes it hard for companies and risk managers to fully assess their exposure to AI risks. That uncertainty is burdensome for the insured, but it could also expose insurers to unexpected risks that are not priced into the policy. This “silent AI” exposure will need to be thoroughly explored so that insurance portfolios are not left open to unexpected, significant, and potentially systemic risks and losses.
Available coverages today. To date, very few insurance policies explicitly consider AI risks. Few insurers are openly addressing or making public statements about the risks associated with AI, raising concerns about the industry’s apparent reticence to acknowledge and mitigate potential challenges in this rapidly evolving technological landscape. A notable exception is Munich Re, a leading global provider of reinsurance, insurance, and insurance-related risk solutions. Munich Re has been insuring AI risks since 2018 and has emerged as a pioneer in providing insurance solutions to mitigate the financial ramifications of AI underperformance and other AI risks.
Underperformance: Munich Re’s flagship third-party product protecting AI providers against model underperformance is called aiSure for performance guarantees. This insurance product allows AI providers to stand behind the quality of their models, assuring customers of their AI tools’ reliability by providing them with performance guarantees. Suppose an AI vendor wants to promise its customers a specific accuracy level, such as 90 percent, in fraudulent transaction detection. If the AI falls short of this commitment, Munich Re provides financial restitution aligned with the losses suffered. This insurance-backed performance guarantee aims to instill confidence in AI through Munich Re’s financial stability, helping to mitigate the risks associated with AI model underperformance.
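As an illustration only (the actual policy terms, triggers, and restitution formulas are not described at this level of detail in public materials), a payout under such a guarantee might be computed along the following lines, with the guarantee level and per-case loss treated as assumptions:

```python
# Hypothetical sketch of a payout under an insurance-backed performance guarantee.
# The guarantee level, per-transaction loss, and observed figures are assumptions
# for illustration and do not reflect any actual aiSure policy terms.

GUARANTEED_ACCURACY = 0.90   # accuracy level promised to the customer
LOSS_PER_MISSED_FRAUD = 400  # assumed average loss per fraudulent transaction not flagged

def guarantee_payout(fraud_cases: int, detected: int) -> float:
    """Restitution for the shortfall below the guaranteed detection level."""
    observed_accuracy = detected / fraud_cases
    shortfall = max(0.0, GUARANTEED_ACCURACY - observed_accuracy)
    missed_beyond_guarantee = shortfall * fraud_cases
    return missed_beyond_guarantee * LOSS_PER_MISSED_FRAUD

# Example month: 2,000 fraud cases, only 1,600 detected (80% vs. the 90% guaranteed).
print(f"Payout: ${guarantee_payout(fraud_cases=2_000, detected=1_600):,.0f}")
# -> 10% shortfall x 2,000 cases x $400 = $80,000
```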
In addition, Munich Re offers aiSure coverage for a company’s own AI models, addressing the needs of businesses implementing self-built (“home-grown”) AI solutions. Suppose a car manufacturer relies on AI-assisted cameras to identify when paintwork needs correction as part of the car production process. Munich Re’s aiSure protects against the AI error rate drifting beyond a predetermined threshold, which could otherwise leave the business exposed to potential recalls and losses arising from business interruption. This insurance solution enables enterprises to integrate AI models into critical operational tasks without undue concern about potential underperformance.
Legal liabilities: While insurance solutions for AI underperformance are beginning to emerge, legal liabilities stemming from AI-related risks present an even more complex landscape. The intricate legal situation surrounding AI models and the evolving nature of court outcomes contribute to a scarcity of insurance products covering legal liabilities that are thus far not covered by traditional insurance solutions. As the responsibilities and legal implications of AI model failures remain largely uncharted territory, the lack of established risk-transfer solutions adds a layer of uncertainty.
Recognizing the need for comprehensive risk mitigation, Munich Re is now developing insurance products tailored to address legal liabilities associated with AI. Among these offerings is aiSure for discrimination risks, a first-of-its-kind insurance solution designed to safeguard businesses against damages and financial losses arising from lawsuits alleging that AI-made decisions discriminate against protected groups.
The AI landscape is constantly evolving, and budding awareness of AI risks is maturing into concrete implementation concerns. As more businesses incur financial losses due to AI failures, existing insurance policies are expected to change over time to affirmatively incorporate or specifically exclude many AI risks. In the meantime, tailored insurance policies and endorsements that deal with specific AI risks are expected to multiply as users and providers become more aware of their exposure and as the legal landscape clarifies.
Outlook on the Developing AI Insurance Market
We offer an outlook on potential future market developments by sharing ideas about insuring generative AI and drawing parallels between the young cyber insurance market and the rising AI insurance market.
Challenges of insuring GenAI. Generative AI (GenAI) represents a significant evolution from traditional AI. While conventional AI models are designed for specific tasks, GenAI, exemplified by models like GPT-4 and Bard, can generate novel content—text, images, and more. This generative capability introduces new and unique risks, such as the potential for hallucinations, intellectual property infringement, the spread of false information, and the generation of harmful content. Unlike conventional AI, GenAI operates in an unsupervised or semi-supervised manner, responding with a degree of “creativity.”
This creativity brings subjectivity and complexity to evaluating GenAI’s outputs, making the risks associated with GenAI distinct and challenging. The difficulty arises in defining concrete thresholds for underperformance, as GenAI’s outputs, such as hallucinations or false information, may not have a clear, objective benchmark. Testing regimes must be tailored to specific tasks, and the evaluation process involves considerations like the model’s input space, clear definitions of undesired outputs, and the continuous monitoring required to capture performance changes over time. In addition, the updating of foundation models further complicates the underwriting process, requiring higher standards of monitoring and adaptation.
Munich Re outlines a framework for insuring different risks associated with GenAI. Risks like hallucinations, false information, and harmful content could be insured by developing a model evaluation pipeline in collaboration with GenAI providers. The insurance would be based on defined performance metrics and thresholds, with a focus on specific tasks and a comprehensive testing regime. For addressing model bias and fairness, Munich Re proposes determining and agreeing on fairness metrics aligned with the application’s goals. The evaluation involves defining thresholds and assessing the trade-off between fairness and accuracy. Munich Re also delves into the challenges of insuring against intellectual property and privacy violations, proposing methods like using narrow definitions agreed on by both insurer and insured and leveraging training techniques for quantifiable risks. But many risks, including environmental impacts, are still under exploration, and Munich Re plans to adapt its risk transfer solutions as the risk landscape and demand for protection become clearer.
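To illustrate what “determining and agreeing on fairness metrics” could look like in practice, the sketch below computes one common metric, the demographic parity difference, and checks it against an agreed threshold. The metric choice, groups, data, and threshold are assumptions made for illustration; they stand in for whatever metric the insurer and insured actually agree on and are not drawn from Munich Re’s framework:

```python
# Minimal sketch of one possible fairness metric (demographic parity difference)
# checked against an agreed threshold. Data, groups, and threshold are hypothetical.

from collections import defaultdict

def demographic_parity_difference(decisions: list[tuple[str, int]]) -> float:
    """Largest gap in favorable-outcome rates between any two groups.

    `decisions` is a list of (group, outcome) pairs where outcome 1 is favorable
    (e.g., a credit approval) and 0 is unfavorable.
    """
    totals, favorable = defaultdict(int), defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        favorable[group] += outcome
    rates = [favorable[g] / totals[g] for g in totals]
    return max(rates) - min(rates)

FAIRNESS_THRESHOLD = 0.10  # agreed maximum acceptable gap, illustrative only

# Hypothetical decisions: group A approved 70% of the time, group B 55%.
sample = [("A", 1)] * 70 + [("A", 0)] * 30 + [("B", 1)] * 55 + [("B", 0)] * 45
gap = demographic_parity_difference(sample)
print(f"Approval-rate gap between groups: {gap:.0%}")
print("Within agreed threshold" if gap <= FAIRNESS_THRESHOLD else "Threshold breached")
```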
AI insurance market and cyber insurance market. Much like the internet, AI has received wide adoption across nearly all corporate functions and all industries. For an outlook on AI insurance, analyzing the treatment of cyber risks and the corresponding insurance provides valuable insights into the potential future development of the AI risk market.
Much as the rise of cyber risks in the late 1990s prompted insurers to explore new territories, the surge in AI usage will soon become a focal point for emerging risk teams. AI insurance is in its early stages, akin to the initial forays into cyber insurance. The first cyber policies written focused on specific loss scenarios and were tailor-made with a strong technology focus. This seems to be the stage of AI insurance to date: To navigate the complexities of AI adoption, specific risks are addressed through tailor-made policies. Munich Re’s underperformance insurance validates this theory with risk assessment focusing on the robustness of each individual model, premiums dependent on specific performance data, and payout structures developed case by case.
As businesses grapple with the transformative potential of AI, insurers will start developing coverage to manage AI-related liabilities. When losses from cyber incidents started spiking, risk managers, brokers, and insurers began thinking about cyber risks in a more systematic and strategic way. An increase in AI-related losses seems to be on the horizon, considering the recent uptick in intellectual property lawsuits, lawsuits against healthcare companies using AI, and regulatory agencies’ growing signals that discrimination by AI models will not be tolerated.
Regulatory landscapes, exemplified by the European Union’s General Data Protection Regulation for cyber risks, play a pivotal role. Similarly, AI regulation will likely spur businesses to follow evolving guidelines and adopt responsible AI initiatives, creating a parallel with the regulatory journey in cyber insurance. Once these regulatory cyber landscapes were more clearly defined, markets started navigating compliance phases, developing standardized processes aligning with regulatory norms. This shift simplified underwriting and marked a transition toward an informed, standardized market practice, echoing the journey of other established insurance sectors.
The ultimate vision is a mature AI insurance market, marked by standardized practices and structured pricing—akin to the evolution witnessed in cyber insurance.
Risk Management: AI-Generated Legal Risks
AI is changing the way businesses operate, and with those changes come many known—and unknown—legal risks. Among other things, AI might
- expose companies to additional cybersecurity and data privacy risks;
- give rise to product liability claims if AI-enabled products generate faulty (or even dangerous) outputs;
- create fiduciary liabilities for directors, officers, and managers who greenlight or fail to oversee AI deployment;
- result in intellectual property infringement;
- facilitate unwitting discrimination through algorithmic bias; or
- compel newly displaced employees to sabotage their former employers.
Because AI is largely novel, complex, and unregulated, AI may very well also generate unforeseen—and unintended—consequences. With this uncertainty, businesses face a panoply of risks that they may not fully understand or appreciate. Businesses should work to get ahead of these risks now before they face exposure later. In doing so, they should think about insurance and risk management issues early and often.
Just as no two businesses are the same, no two businesses have the same legal risk profile when it comes to AI. Potential legal liabilities turn on many factors, including the industry in which a business operates, the products or services it sells, and the corporate form it adopts, among other factors. Together, these differences highlight an essential bottom line: Risk managers and insurance professionals must analyze the business fully to determine its risk profiles and exposures, which will differ even from others in the same field.
The three case studies below—which focus on the use of AI to address retail shrinkage, prevent fraud, or improve operational efficiencies—highlight the importance of business- and industry-specific AI audits.
Case study 1: AI-driven loss prevention. Retailers are increasingly deploying AI solutions to avoid losses in the form of theft or shrinkage, which has been growing rapidly in recent years. The uptick in retail theft has cost retailers billions and even threatened shoppers and employees. While AI offers a promising solution to retail shrinkage, it is not without unique legal risks.
Concretely, retailers are using AI to complement existing anti-theft technologies. One of the goals of this AI-assisted technology is to catch thieves before they act. AI-assisted cameras that analyze images and detect suspicious activity are an example. AI cameras can monitor not only people in stores but also shelves, display cases, checkout lanes, and other areas in the store to detect theft before it occurs. This AI technology, together with a related suite of AI-enhanced technologies like acousto-magnetic electronic article surveillance systems and radio frequency identification systems, could be transformative for retail businesses seeking to minimize the rates of retail theft. As one commentator noted, intelligence-led loss prevention may not only thwart theft but also increase brand loyalty by using data garnered from surveillance activities to better understand specific customers’ shopping habits.
Despite its promise, this technology brings with it many potential—and unique—legal risks. One prominent example is the potential for lawsuits against retailers by customers alleging civil rights violations through false accusations of shoplifting occasioned by AI technologies. That is, AI-driven loss-prevention technology may cause certain individuals to be singled out based on a protected characteristic like their race, sex, or age. Allegations of this sort—even if untrue—can be very damaging for businesses, not least because such allegations tend to attract high levels of publicity that can cause large financial losses. Exposure is also heightened by the specter of class action litigation.
Preventive steps include, for example, the adoption of stringent loss-prevention policies and employee training programs. But while an ounce of prevention can often be worth a pound of cure, some risk of this type is likely to materialize in any event. Businesses should thus consider how their risk management and insurance programs can prevent (or minimize) any attendant financial exposure.
Another potentially unique risk exposure associated with AI-driven loss-prevention technology involves privacy-related concerns. Businesses may face lawsuits alleging that they violated customers’ privacy rights. One example of a potential exposure relates to the Illinois Biometric Information Privacy Act (often called BIPA), which regulates the collection, use, and handling of biometric information and identifiers by private entities. BIPA—and other state-specific statutes, including those of Texas and Washington—confirms that AI risk profiles are likely to vary based not only on the specific use of AI but also on where that AI is used.
In sum, AI offers considerable promise for retailers seeking to minimize retail shrinkage. But AI’s promise is not without risk. The risks unique to AI-related shrinkage technologies underscore why risk managers and insurance professionals must analyze the business expansively to determine unique risk profiles and exposures. Indeed, only by thoughtfully considering all the various benefits and drawbacks can the full array of legal risks be addressed.
Case study 2: AI-driven fraud detection. Enhanced fraud detection is another way AI can benefit businesses’ bottom lines. Like retail shrinkage, fraud is incredibly costly to individual firms and the broader economy. The recent “tidal wave of pandemic fraud” is just one example. There, financial institutions were, on average, fleeced out of more than $500,000 each within a year. By 2027, estimates are that fraud losses are likely to surpass $40 billion. Here too, AI-assisted technologies offer promise for businesses, including financial institutions, healthcare organizations, and even governments.
The promise of AI in fraud detection involves marked improvements to legacy technologies. Financial institutions, for example, have been using some form of technology-assisted fraud detection for decades. But the success rate for these legacy technologies is low. Today’s machine-learning systems and AI enhancements offer considerable promise for organizations of all types looking to improve upon legacy technologies. They not only can better identify fraud before it happens; they can also reduce the number of false alerts associated with prior fraud-prevention systems. It is therefore no surprise that banks and other institutions are increasingly looking to AI-driven tools as a potential solution.
Just as using AI to address retail shrinkage brings with it new risks, so too does using AI to address fraud. Indeed, as in other areas, the legal risks occasioned by AI in fraud detection will depend on exactly how AI is deployed and in what industry. For example, healthcare organizations may use AI to detect fraud in medical billing, which could give rise to unique potential liabilities not faced in other arenas. AI also offers promise in reducing government-facing fraud by identifying gaps in gargantuan federal and state budgets. But the healthcare and government contexts carry their own unique risks, including under the federal False Claims Act. Also unique to the fraud-prevention context, AI technologies may even unwittingly facilitate more fraud than existed at baseline.
As these risks show, the use of AI in fraud detection presents a nuanced legal landscape that requires careful consideration as the technology continues to evolve. While AI systems bring substantial advantages to the identification and prevention of fraud, they also raise concerns that risk management and insurance professionals should consider. In the end, only by navigating this domain thoughtfully can organizations harness the benefits of AI-driven fraud detection while mitigating the associated legal risks.
Case study 3: Predictive analytics. AI also offers promise for businesses seeking to improve their operations through predictive analytics. The term “predictive analytics” generally refers to the use of statistical algorithms to forecast future trends, enabling businesses to optimize inventory, improve delivery times, and potentially reduce costs. When using predictive analytics along with AI, companies may be able to identify even more insights that benefit their businesses. Companies that deploy AI in this way also face unique legal risks—just like retailers using AI for loss prevention and organizations using AI to decrease fraud. One key category of unique risks involves corporate litigation risks.
Suppose that predictive analytics harm a corporation’s bottom line. That corporation—and its directors and officers—may face lawsuits alleging that they breached their fiduciary duties. These suits might take many forms, whether that be direct lawsuits, derivative lawsuits, or class action lawsuits. Before such a lawsuit is filed, corporations may face demands that they produce books and records under the Delaware books and records statute (i.e., Delaware General Corporation Law section 220) or other states’ analogues. Businesses may also choose to disclose how they have used AI-driven predictive analytics to improve their business. In doing so, they face potential exposure under federal and state securities laws for the quality, content, and scope of those disclosures. None of these potential risks are static. They are all unique to exactly how a business is using predictive analytics to improve its operations.
Two specific and often discussed flaws highlight how predictive analytics can result in these types of corporate lawsuits. First, because predictive analytics rely on historical data, embedded historical bias can produce faulty forward-looking outputs when the past is not a reliable guide to the future. Second, errors in the data inputs themselves can propagate through the analytics and harm a business’s bottom line.
Another wrinkle is that corporate law could also develop such that corporations can be sued for not using AI. While this novel argument has not yet been tested in court, the argument would be that corporate law requires the use of AI because of its superior information-processing capabilities and the legal requirement that directors act on an informed basis. As this example shows, the legal effect of AI is still being tested, which is yet another feature that businesses may want to consider as they contemplate their own unique AI risk profile.
All told, businesses should not reflexively assume that AI-driven business improvements are risk-free. Risks of all types abound, including corporate-law-specific risks that risk managers and insurance professionals would be wise to consider.
As the preceding case studies highlight, no two businesses are likely to face the same set of AI-generated legal risks. These differences highlight why businesses must consider AI risk holistically and conduct AI-specific audits of their particular business practices. Indeed, because insurance products and other risk management tools are often developed relative to specific risks, only by first understanding risks can those risks be adequately mitigated.
Conclusion
AI is poised to have a significant and lasting effect on the insurance industry. The use of AI algorithms can streamline processes, improve customer experiences, and facilitate the development of innovative insurance products. But it also raises legal challenges, such as the potential for biased algorithms, data privacy concerns, and questions around the accountability and transparency of AI decisions. Despite these challenges, with thoughtful risk management and the development of tailored insurance products, AI can offer substantial benefits to the insurance industry while mitigating potential risks. The evolving regulatory landscape and ongoing court cases will shape the future development of AI insurance and the legal frameworks surrounding AI in the insurance sector.