
The Antitrust Source

Antitrust Magazine Online | August 2022

The Algorithmic Accountability Act: Potential Coverage Gaps in the Healthcare Sector

Maneesha Mithal, Gabriella Monahova, and Andrew Stivers


  • The proposed Algorithmic Accountability Act would require companies to conduct assessments of their algorithmic decision tools to detect issues such as privacy violations and discrimination.
  • The Act would be enforced by the FTC, which may not have jurisdiction over some companies, such as non-profits and companies in the business of insurance.
  • Potential gaps in coverage may create unequal treatment of companies in the same sector with potential impacts on the ability of the Act to achieve its stated goals, as well as on competition and innovation.
  • Many entities in the healthcare industry, such as nonprofit or government-owned hospitals and health insurance companies, may not be covered, which could impact competition and innovation in this sector.


The “Algorithmic Accountability Act” (AAA) proposed in February 2022 aims to address discrimination and privacy concerns related to the use of computer algorithms in corporate decision making. In this article we examine how the implementation of the AAA through the existing authorities of the Federal Trade Commission (FTC) could have a differential effect on market participants and may lead to unintended consequences for competition and innovation. Using the healthcare sector as an example, we discuss how applying the AAA’s requirements to the entities subject to the jurisdiction of the FTC may create differences in costs and incentives across competitors, both vertically and horizontally, because many participants in the healthcare sector are exempt from FTC jurisdiction. These differences in costs and incentives may in turn create some barriers to competition, affect the way in which algorithms are developed and marketed, and distort the conclusions drawn from the AAA’s central requirement that decisional algorithms be subject to formal assessments of their impact on consumers. We focus on the healthcare industry, but the analysis is relevant to any sectors where algorithms are used to make decisions and where at least some, but not all, market participants are regulated by the FTC, such as in the education sector.


Individual lives have always been profoundly influenced by other people’s decisions and actions: Does that driver pull out in front of me? How does a retailer determine whether to offer me a personal discount? How does the admissions officer react to my application versus that of another candidate? Do my insurance company and doctor agree on my treatment? People make these decisions using the inputs available to them, processed through their understanding of the question, their experience, and their preferences. The result is individually tailored decisions that are consistent with the goals and responsibilities of the decision maker. Drivers do not want to be hurt or held liable for accidents. Retailers want to retain customers. Admissions officers want to maximize their admission rates and ensure that admitted students can succeed at their institutions. Doctors and insurers want to apply the most cost-effective treatment consistent with an agreed standard of care.

In practice, individual decisions are often costly to make and difficult to apply consistently. The quality of any human decision can vary wildly depending on the mental state of the decision-maker—e.g., tired or rested, distracted or focused—and their desire for a particular outcome—e.g., personal tie or animus to an individual. Any number of human factors may overpower their incentives to make a good choice. These are known faults, and both internal and external systems are often in place to prevent or correct errors. However, policing these decisions can be difficult because the decision-making process itself is opaque and idiosyncratic. Overseers can observe the outcomes, and sometimes make inferences about the quality or bias of decision making, but large-scale enforcement and control may not be cost effective.

As a technological response to the cost and inconsistency of individual decision making, the plummeting cost of data collection and processing has allowed more of the decisions that would have been made by a person to be either augmented or automated by non-human computation, that is, by an “algorithm.” This means that critical decisions related to driving, pricing, employment, healthcare, credit, and other important areas can now be made consistently at scale and at much lower cost of implementation.

News stories and academic studies have documented both the successes and failures of such automated decision systems—as they have documented the successes and failures of our human decisions. A crucial difference is that the scale and thus magnitude of potential effects from automated decision making—good or bad—is much larger. At the same time, economies of scale for enforcement have improved. A single enforcement action against a harmful algorithm could have much greater benefit to consumers and the market than against individual human decision making.

While the mass automation of decision making may create new opportunities and needs for regulation, an oft-invoked criticism of proposed market regulation is that it can create or entrench competitive advantage when the costs of compliance are significant. In this article, we examine how inconsistent coverage of regulation could affect competition, the development of decision-making algorithms, and the achievement of the AAA’s goals. We do not take a position on the merits of the proposed legislation or the merits of the unintended consequences that we highlight. Our goal is to bring attention to these potential consequences to aid in the deliberation of the AAA and others like it that are likely to be proposed in the future.

Outline of the AAA

Senator Ron Wyden (D-Ore.), Senator Cory Booker (D-N.J.), and Representative Yvette Clarke (D-N.Y.) first introduced the Algorithmic Accountability Act in 2019. Based on input from stakeholders, they revised the bill and re-introduced it as the Algorithmic Accountability Act of 2022 in February of this year. The AAA does three main things.

First, it directs the FTC to develop regulations requiring certain “covered entities” within the FTC’s jurisdiction to perform impact assessments of automated decision systems that are used to make “critical decisions,” such as those relating to a consumer’s access to education, employment, essential utilities, healthcare, or financial services. Covered entities include those that make decisions, as well as those that build the technology to enable the decision making, subject to the following financial thresholds:

  • For entities that use an automated decision process to make critical decisions themselves, the AAA would cover them if they (1) have greater than $50 million in average annual gross receipts or greater than $250 million in equity value for the preceding three-year period; or (2) handle identifying information about more than 1 million consumers, households, or devices.
  • For entities that build the technology for use by others to make critical decisions, the AAA would cover them if they have greater than $5 million in average annual gross receipts or greater than $25 million in equity value for the preceding three-year period.
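These two threshold tests can be summarized in a short sketch. This is an illustrative simplification only, not legal advice: the statutory tests involve additional definitions, and the type, field, and function names below are our own invention.

```python
from dataclasses import dataclass

@dataclass
class Entity:
    """Hypothetical summary of the inputs to the AAA's coverage thresholds."""
    avg_annual_gross_receipts: float  # averaged over the preceding 3-year period
    equity_value: float               # for the preceding 3-year period
    identified_consumers: int         # consumers, households, or devices
    deploys_for_critical_decisions: bool  # uses an automated system itself
    builds_for_others: bool               # develops the technology for others

def meets_aaa_thresholds(e: Entity) -> bool:
    """Rough sketch of the financial/volume thresholds described above."""
    # Deployers: >$50M average receipts, >$250M equity, or >1M identified
    # consumers, households, or devices.
    if e.deploys_for_critical_decisions:
        if (e.avg_annual_gross_receipts > 50e6
                or e.equity_value > 250e6
                or e.identified_consumers > 1_000_000):
            return True
    # Developers: lower thresholds of >$5M average receipts or >$25M equity.
    if e.builds_for_others:
        if e.avg_annual_gross_receipts > 5e6 or e.equity_value > 25e6:
            return True
    return False
```

Note the asymmetry the bill draws: an entity that merely builds decision tools for others trips the thresholds at one-tenth the size of an entity that deploys them itself.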

Among other things, the impact assessments must: (1) describe current processes being replaced by the new automated decision system; (2) document any data or other input information used for development, testing, maintaining, or updating the system; (3) test and evaluate the privacy risks and privacy-enhancing measures; (4) test and evaluate the current and historical performance of the system; and (5) evaluate the rights of consumers, including the degree to which a consumer may contest, correct, or appeal a decision or opt out of the system. Covered entities must attempt to eliminate or mitigate likely material negative impacts.

Second, the AAA includes transparency and reporting requirements. For example, covered entities must disclose their status as a covered entity to any partner organizations that develop automated decision systems for the covered entity. They must, to the extent possible, meaningfully consult with internal and external stakeholders in developing their automated decision system. And they must provide summary reports of the impact assessments to the FTC. The FTC, in turn, is required to annually publish a report on the trends and lessons learned from the summary reporting. The FTC must also develop a public registry of information on automated decision systems, with information about data sources, metrics, and, where available, documentation of any mechanism for a consumer to contest critical decisions.

Third, the AAA sets up an implementation structure, requiring the FTC to provide guidance and training materials for covered entities, to review its regulations every five years, and to consult with other federal agencies. The AAA establishes a new Bureau of Technology within the FTC to be staffed with fifty new experts and authorizes the FTC to appoint twenty-five additional personnel in the Bureau of Consumer Protection’s Enforcement Division. Finally, the AAA calls for enforcement by the FTC and State Attorneys General.

Entities within the FTC’s Jurisdiction

As noted above, the AAA’s requirements would apply only to covered entities within the FTC’s jurisdiction. The FTC Act gives the FTC broad authority over “persons, partnerships, or corporations” engaged in “unfair or deceptive practices in or affecting commerce,” except the following:

banks, savings and loan institutions described in section 57a(f)(3) of this title, Federal credit unions described in section 57a(f)(4) of this title, common carriers subject to the Acts to regulate commerce, air carriers and foreign air carriers subject to part A of subtitle VII of title 49, and persons, partnerships, or corporations insofar as they are subject to the Packers and Stockyards Act, 1921, as amended, except as provided in section 406(b) of said Act . . .

To simplify, the exceptions relate to entities in the nonprofit sector (because of the limitation to practices “in or affecting commerce”), banks, federal credit unions, common carriers (e.g., telecommunications companies), air carriers, or those subject to the Packers and Stockyards Act (i.e., the meatpacking/livestock industry). A separate exception is set forth in the McCarran-Ferguson Act, for “the business of insurance” to the extent regulated by State law.

The exemptions to FTC jurisdiction laid out under these laws generally fall into three categories. The first category of exemptions is status-based. In other words, these exemptions are defined in terms of the nature of the entity itself. The FTC Act exceptions for banks, credit unions, and nonprofit entities are framed as status-based exceptions. This has two implications. First, these entities are exempt from FTC enforcement under the FTC Act, no matter what activity they are engaged in. Under the AAA, the FTC could not require them to conduct impact assessments of their automated decision systems. Second, because the exemptions are based on an entity’s status as a nonprofit, bank, or credit union, if any of these entities were to hire, partner with, or license software from, other companies that are not exempt from FTC jurisdiction, the FTC could pursue those companies under the FTC Act, even if they are working with, or at the direction of, exempt entities. Thus, for example, if a technology firm were to build software that a bank licensed to make automated decisions about its customers, the technology firm would not necessarily be exempt from FTC jurisdiction under the AAA by virtue of the bank’s exemption.

The second category of exemptions is purely activities-based. The insurance-based exemption fits within this category. The McCarran-Ferguson Act does not exempt only insurance companies; rather, it exempts all entities engaged in “the business of insurance” that are regulated by state law, regardless of whether the entity is an insurance company itself. Accordingly, the development of an algorithm used to underwrite insurance, whether by the insurance company, by an affiliate, or by an unaffiliated software company, would likely be considered “the business of insurance.” If this activity is regulated by state law, it would be exempt from FTC Act enforcement and exempt from the AAA. To take a slightly more complex example, suppose an insurance company were to acquire a software subsidiary that creates algorithms for use in insurance underwriting, but also creates algorithms for non-insurance purposes (e.g., credit, employment). Although the insurance underwriting services of the subsidiary would be exempt from FTC jurisdiction if regulated by state law, the other services would not.

The third category, which covers common carriers, air carriers, and those subject to the Packers and Stockyards Act, looks at both the status of the entity and its activities to determine if the FTC Act applies. In terms of status, the exception applies only to the common carriers, air carriers, or those subject to the Packers and Stockyards Act themselves. Thus, for example, one of these entities could hire, or partner with, a non-exempt entity to develop automated decision systems relating to, for example, telecommunications or airline pricing, and the AAA could apply to those entities.

But the common carriers, air carriers, and those subject to the Packers and Stockyards Act are not automatically exempt from the FTC Act based on their status alone; rather, they are exempt only to the extent they are engaged in exempt activities. In a 2018 decision, the Ninth Circuit ruled that the FTC was free to apply the FTC Act to the non-common carrier aspects of a common carrier business. In that case, the FTC alleged that AT&T’s data throttling program was unfair and deceptive because the company advertised “unlimited mobile data,” but in fact imposed data-speed restrictions on customers who exceeded a preset limit. Because the FTC Act exempted only common carriers whose activities were “subject to” the communications laws, and the provision of mobile data service was not regulated by these laws, the court ruled that the FTC could enforce the FTC Act against AT&T. Based on similar reasoning, the exceptions for air carriers “subject to” Part A of subtitle VII of title 49, and those “subject to” the Packers and Stockyards Act are also activities-based in nature. Unlike in the bank/credit union/nonprofit example above, the FTC could enforce the AAA against communications companies, air carriers, and meatpacking companies, to the extent they are engaged in activities not covered by the laws referenced in Section 5(a)(2) of the FTC Act, above. Thus, for example, if a telecommunications company developed automated decision systems internally to price broadband services, as opposed to telecommunication services, those activities could be subject to the AAA.

Discussion of Gaps and Potential Consequences in the Healthcare Sector

The healthcare sector, with its complex market structure, provides an interesting case study of how limiting the AAA’s application to entities within the FTC’s jurisdiction could lead to differential effects. Hospitals and physicians make investments to provide care. Insurers determine what and how much care to pay for and set premiums. Consumers—often through an employer—choose insurers. Some of these consumers get sick and, within the decision structure administered by physicians and insurers, consume healthcare services, triggering payment. With so many interested parties to decisions about provision and payment on top of the complex inputs to clinical decision making, it is unsurprising that algorithmic care decision tools are increasingly being embedded in electronic health record and insurance claims adjudication systems.

On top of this complexity, healthcare entities are not uniformly subject to the FTC’s authorities. A significant number of healthcare providers are likely to be exempt from FTC oversight, falling into either of the first two categories—status-based or activities-based—discussed above. Over seventy-five percent of hospitals in the United States are either not-for-profit or government-owned and are thus exempt from the FTC Act because of their status. None of these entities would have to conduct impact assessments of the automated decision systems they develop or use under the requirements of the AAA. Most physician-owned practices would also fall outside the AAA’s coverage given its revenue thresholds. In addition to potential differences in the risk of privacy harms or discriminatory care across entities, this may create two types of imbalances related to competition: (1) a differential impact on competitors’ use of such systems in healthcare decision making, e.g., between exempt nonprofit hospitals and non-exempt for-profit hospitals; and (2) a differential impact on the development of such systems between exempt healthcare providers that have vertically integrated software-development functions on one hand and non-exempt healthcare providers or independent developers of such systems on the other. In either instance, assuming it meets the revenue thresholds, a private, for-profit hospital would have to incur costs to comply with the AAA, thereby placing non-profit and government-owned hospitals at a competitive advantage in implementing automated decision-making techniques.

Health insurance providers would also likely be exempt from the proposed AAA under the activity-based exemption from FTC oversight. To the extent that the AAA aims to address discrimination and privacy concerns in the way health insurance companies determine eligibility and coverage, the exemption may prevent it from achieving this objective. The limited reach of the bill could influence how newly developed automated tools are deployed because developers could strategically structure the development or deployment to exploit these loopholes. The language of the AAA may also create gray areas of enforcement given the close interconnection between decisions about coverage and care.

Finally, take the example of a software developer providing automated decision-making tools to both for-profit and nonprofit hospitals. The AAA may create incentives for the software developer to treat the two sets of entities differently, particularly if the software developer does not meet the revenue thresholds that would require it to conduct its own impact assessment. The developer may find it unappealing to sell its products to for-profit entities, because such sales could generate additional scrutiny of its products, add to its compliance burden, and invite unwanted attention from regulators. Furthermore, if only for-profit hospitals are conducting evaluations of augmented decision systems in healthcare, these assessments would not be representative of the overall performance of these systems. This may not be an issue for the results of each individual assessment, but if legislators and the FTC are looking to draw broader conclusions about the performance of these systems in the healthcare sector—as seems to be the case based on Section 6(a) of the AAA, which instructs the FTC to publish an annual report summarizing the assessments provided by covered entities—the fact that a large portion of the participants in the healthcare sector would not provide information on their systems may distort those conclusions.

One response to these concerns may be that even without being compelled by the AAA, any healthcare entity that deploys an automated or augmented decision system would choose to evaluate and monitor the effects of such a system in some manner. The entity’s incentives to do so would stem both from pressure to maintain its reputation for quality in the competition for patients and from a desire to mitigate the risk of liability in the case of post-treatment malpractice lawsuits. In response, we note that these incentives are unlikely to result in practices that match the AAA’s requirements in at least four areas.

First, as noted in the outline of the AAA above, the Act requires covered entities to evaluate and document the failings of the existing “critical decision-making process.” In the context of an automated clinical decision tool, this would likely mean documenting “any known harm, shortcoming, failure case, or material negative impact” of a healthcare provider’s prior clinical decision processes. While non-covered entities would be likely to document their business justifications for investing in such tools, they may be reluctant to document all harms from past practices, especially if those harms potentially overlapped with liability for discriminatory practices. Second, the AAA requires engaging with all “relevant” internal and external stakeholders and documenting their concerns and the entity’s response. Leaving aside the potential value of such a dialogue for improving algorithms, entities not subject to this provision would likely have cheaper and quicker paths to development and deployment. Third, the AAA requires submission of a summary report to the FTC. While the AAA explicitly allows for the voluntary submission of such a report by any entity, the potential liabilities arising from such disclosures make it unlikely that healthcare entities would submit reports without being compelled to do so. Fourth, the AAA imposes a variety of specific recordkeeping and disclosure requirements that may not match non-covered hospitals’ needs or incentives. All of these requirements are likely to create differential costs for entities that are subject to the AAA versus those that are not. In proposing the AAA, the drafters clearly anticipate that market incentives will not by themselves induce the required practices. While some of the costs of the additionally required practices could be small individually, in aggregate they are likely to be significant. In addition, any requirements are likely to introduce some uncertainty as to whether particular practices are violative; the costs of adjudicating or mitigating those risks may be significant in themselves.

Finally, even if there were near universal overlap between practices that firms independently believe to be in their interest due to market pressures and the AAA-required practices, coverage by the AAA adds a significant risk of civil penalties and costly “fencing in” injunctive requirements, whether the covered entity makes a mistake in its efforts to comply or a regulator makes a mistake in bringing a case where none was warranted. These potential penalties are likely to increase costs ex ante for covered entities, as they will need to account for the market risks stemming from noncompliance (shared by non-covered entities) as well as the regulatory risks.

Examples in the Healthcare Sector

With the proliferation of electronic health records, healthcare sector participants have gained access to large amounts of patient data, which can be used in a variety of ways. Hundreds of technology vendors offer products aimed at improving the quality and efficiency of care. One such set of products is Clinical Decision Support (CDS) systems, which include various tools for improving decision making during clinical care. These tools provide accessible clinical guidelines for choosing procedures, medications, or tests based on patient data; send reminders and alerts; and aid diagnoses. CDS tools are used both by healthcare providers and by health insurance companies, which rely on them to determine what procedures are “medically necessary” and whether certain procedures should be covered. Because these tools use “computation, the result of which serves as a basis for a decision or judgment,” they would fall under the AAA’s definition of augmented decision systems and would require impact assessments. Below we explore scenarios in which the proposed legislation may result in unequal treatment and potential harm to competition.

A Nonprofit or Government-Owned Hospital Using CDS Is Not Covered Under the Proposed Act, but a For-Profit Hospital Is.

Hospital markets are local (e.g., a metropolitan area or a county), and competition between hospitals happens at two stages. First, hospitals compete to be included in insurers’ networks because patients are more likely to visit hospitals within the network of their chosen insurance provider than out-of-network hospitals. Competition at this first stage is mostly on the prices that hospitals negotiate with insurers. Second, hospitals compete to attract patients. This second-stage competition is mostly on the quality of hospital care and the attractiveness of the facilities (including location). In a given local market, a nonprofit or government-owned hospital may compete directly with a for-profit hospital. The fact that the former would be exempt from the AAA while the latter would not may affect competition between them in two ways. First, compliance with the law, i.e., conducting an impact assessment of its use of CDS tools as often as deemed necessary by the FTC, directly increases the costs of the for-profit hospital. This may lead the for-profit hospital to increase its prices and put it at a disadvantage in its negotiations with insurers.

Second, because compliance with the law imposes these costs, for-profit hospitals may decide to scale down or forego their use of CDS tools. According to the Centers for Disease Control and Prevention, “the evidence base demonstrating the effectiveness of CDS is very strong,” so if the requirements of the AAA impose too many costs and for-profit hospitals find it unprofitable to use CDS tools, this may degrade the quality of care they provide relative to the quality provided by exempt hospitals.

An Independent Developer of CDS Tools Is Not Exempt, but a Nonprofit Hospital That Has Developed Such Tools In-House or Acquired an Independent Developer Is.

As written, the AAA covers both users of augmented decision systems and developers of such products. It is not unusual for hospital systems to dabble in the healthcare technology space by either partnering with technology companies to develop products or by acquiring technology companies with already established offerings. Because nonprofit entities are generally exempt from FTC oversight, a nonprofit-hospital-owned developer of CDS may not be required to implement impact assessments of its products, but independent companies developing and marketing those products would be required to do so, provided they clear the revenue and/or user count thresholds. This may create barriers to competition and entry in the CDS market by increasing the costs of independent developers already in the market and those of potential entrants relative to the costs of CDS suppliers owned by exempt hospitals. It may also create incentives for independent developers of CDS tools to consider getting acquired by an exempt entity to avoid paying the costs of compliance with the law.

Insurers and Their Business Partners Have a Blanket Exemption for the Business of Insurance, if Regulated by State Law, Whereas Only Some Care Providers Are Exempt.

The effects of the exemption for the activity of insurance are likely to be even more difficult to predict. When managing care, the lines between clinical decision and coverage decision are often blurry. For example, the set of choices considered by a physician and a patient may be determined by which treatments are covered by the patient’s insurer. Integrated decision tools that address both clinical efficacy and insurance coverage could provide caregivers with recommendations that may be difficult to parse into an “insurance” activity or a “clinical” activity in the context of the AAA’s coverage. If so, the differential costs of developing and implementing such a system could shift incentives for doing so toward insurance companies. This could potentially place purely clinical tools at a competitive disadvantage because of the higher costs of developing and implementing a system that is clearly outside the business of insurance. Payers (who are in the business of insurance) and healthcare providers compete with each other to capture some of the welfare gains from providing care to patients. As such, a competitive advantage in developing CDS stemming from the AAA could affect the competitive balance between the two healthcare sectors.

Possible Solutions

In the previous sections we highlighted potential unintended consequences of limiting application of the AAA to entities within the FTC’s jurisdiction. Below, we set forth some options for addressing these consequences.

One option is to simply understand the potential consequences and leave the proposed legislation as is. We note that these issues are not unique to the AAA—many regulatory regimes have loopholes or gaps, and often legislators have competing priorities and have to pick and choose between them. For example, the historical split in food safety regulation between the Department of Agriculture and the Food and Drug Administration means that functionally similar foods may be subject to different standards, authorities, and compliance regimes. Indeed, it may even be desirable to limit the application of the AAA to entities within the FTC’s jurisdiction. The practicalities of implementing algorithmic decision-making tools may vary by sector, and it may make sense to task the responsible regulator, in this case the FTC, with creating rules that apply to sectors with which it is already familiar.

A second option is for Congress to consider modifying the AAA to explicitly disregard the exemptions in the FTC Act. One model comes from the recent American Data Privacy and Protection Act (ADPPA) introduced in the House of Representatives, which would give the FTC jurisdiction over both common carriers and non-profits. It simply defines a “covered entity” subject to the requirements of the Act as one that collects, processes, or transfers covered data and is (1) subject to the FTC Act, (2) a common carrier subject to title II of the Communications Act of 1934, or (3) an organization not organized to carry on business for its own profit or that of its members.

Notably, in this vein, Congress recently enacted the “Competitive Health Insurance Reform Act of 2020,” which amended the McCarran-Ferguson Act to state that “Nothing contained in this Act shall modify, impair, or supersede the operation of any of the antitrust laws with respect to the business of health insurance (including the business of dental insurance and limited-scope dental benefits).” “Antitrust laws” in turn are defined to include Section 5 of the Federal Trade Commission Act, but only “to the extent that such section 5 applies to unfair methods of competition.” Congress could similarly include a modification of the McCarran-Ferguson Act for purposes of the FTC’s enforcement of the Algorithmic Accountability Act.

A third solution is for Congress to have multiple agencies enforce the AAA against entities within their jurisdiction, as opposed to just the FTC. For example, in addition to stating that the FTC can enforce the Fair Credit Reporting Act (FCRA) under the FTC Act “irrespective of whether that person is engaged in commerce or meets any other jurisdictional tests under the Federal Trade Commission Act,” Congress also designated multiple agencies to enforce the law. Thus, for example, federal banking agencies enforce the FCRA for certain banks and the National Credit Union Administration enforces the FCRA for certain credit unions. The CFPB also has enforcement authority under the FCRA. Congress could set up a similar scheme to enforce the AAA.

Some of these solutions may be impractical. Modifying McCarran-Ferguson or implicating the telecommunications or banking laws likely involves referral to multiple Congressional committees, which would significantly impair the AAA’s chances of moving forward. Nonetheless, as in the other examples described herein, these concerns have been overcome in prior instances, and could similarly be overcome here.


This article has attempted to spot certain potential competitive issues with the Algorithmic Accountability Act. It offers no conclusions as to the correct course of action, but simply sets forth some considerations as the proposed legislation moves forward. We look forward to further discussions among stakeholders on these issues.