

Panel Spotlight: One Click from Disaster: The Fair Lending Implications of Digital Targeted Marketing in a Big Data World, presented September 24, 2021, at the ABA Business Law Section Virtual Annual Meeting.

Ross Michael Speier and Emily J. Honsa Hicks


On September 24, 2021, a panel moderated by Jason Cover of Troutman Pepper LLP and consisting of Consumer Financial Protection Bureau (“CFPB”) Senior Counsel Albert Chang; Stephen Hayes of Relman Colfax PLLC; Capital One Fair Lending Assistant General Counsel Brian Larkin; and JPMorgan Chase Head of AI Research Manuela Veloso explored the fair lending, consumer protection, and other pitfalls of digital targeted marketing in light of recent regulatory activity and private litigation.

The panel defined “digital targeted marketing” as a form of marketing in which advertisements are disseminated through a variety of online platforms, including web services, paid search, banners, and social media, using sophisticated data analytics that effectively preselect a precise target audience. This can occur through “self-selecting” programs, which allow the advertiser to choose participant criteria from a platform’s pre-existing categories and attributes, or through “look-alike” programs, which use the advertiser’s existing customer data to find similar potential customers, generally via machine learning that identifies predictive attributes.
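
To make the look-alike mechanism concrete, the following is a minimal sketch, in Python with entirely invented data and no real platform API, of how a model trained on an advertiser’s existing customers might score and select similar prospects:

```python
# Hypothetical look-alike audience sketch: fit a model on an advertiser's
# existing customers, then rank new prospects by similarity. All data and
# attribute columns here are simulated for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-in data: rows are users, columns are behavioral/demographic attributes.
existing_customers = rng.normal(loc=1.0, size=(500, 10))  # labeled 1
non_customers = rng.normal(loc=0.0, size=(500, 10))       # labeled 0
X = np.vstack([existing_customers, non_customers])
y = np.array([1] * 500 + [0] * 500)

# The model learns which attributes predict "customer."
model = LogisticRegression(max_iter=1000).fit(X, y)

# New prospects are scored; the top slice becomes the look-alike audience.
prospects = rng.normal(size=(1000, 10))
scores = model.predict_proba(prospects)[:, 1]
audience = np.argsort(scores)[::-1][:200]  # the 200 most customer-like users
print(f"Selected {len(audience)} look-alike prospects")
```

The point of the sketch is that the advertiser never names an audience explicitly; the predictive attributes the model learns do the selecting, which is precisely where unintended exclusion can enter.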

Advertisers conducting digital targeted marketing may use internally sourced data, data purchased from companies that gather and store large amounts of data for analytic use, or both. Advertisers often use artificial intelligence and machine learning to sift through this data to identify patterns, connections, and likely outcomes. The outcomes can be predictive and allow accurate identification of interested and qualified consumers. However, the use of certain attributes, despite being highly predictive, may implicate fair lending laws intentionally or unintentionally by directly or indirectly excluding consumers on prohibited bases. Panelists highlighted that one advantage of machine learning in this area is its potential to eliminate human bias, and that the attendant risks are similar to those surrounding prescreened offers and traditional targeted marketing.
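
One rough way to screen for the indirect exclusion the panel described is to test whether the targeting attributes themselves can predict protected-class membership. The sketch below is purely illustrative, using simulated data and standard open-source tooling rather than any method endorsed by the panel:

```python
# Hypothetical proxy check: if targeting attributes predict protected-class
# membership well above chance, some attribute is likely acting as a proxy.
# All data here is simulated; one column is deliberately made correlated.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

protected = rng.integers(0, 2, size=1000)   # simulated class membership labels
X = rng.normal(size=(1000, 8))              # simulated targeting attributes
X[:, 0] += 1.5 * protected                  # attribute 0 correlates with class

auc = cross_val_score(LogisticRegression(max_iter=1000), X, protected,
                      cv=5, scoring="roc_auc").mean()
print(f"Protected class predictable from targeting attributes: AUC = {auc:.2f}")
# An AUC near 0.5 suggests little proxy signal; values well above 0.5
# indicate the attribute set encodes the protected class and warrants review.
```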

The panel identified certain regulatory activity and litigation involving Facebook as illustrative of the fair lending risks that may apply to consumer finance platforms. Notably, the panel indicated that this ongoing activity and litigation rest on a number of different theories of discrimination, and that while some matters have settled, standing issues have been pervasive and many actions have not reached decisions on the merits. Examples of the regulatory activity and litigation involving Facebook in this area include:

  • A Washington State Attorney General (“AG”) investigation into digital targeted marketing practices. Of note, the AG alleged that Facebook allowed advertisers to exclude particular ethnic groups from certain advertisements and provided tools allowing advertisers to exclude members of protected classes.
  • A suit brought by an alliance of consumer groups alleging discriminatory practices, which resulted in a $5 million settlement and required changes to Facebook’s “look-alike” campaigns affecting housing, employment, and credit products.
  • A Department of Housing and Urban Development Charge of Discrimination under the Fair Housing Act, specifically alleging that Facebook allowed housing advertisers to target audiences on prohibited bases and that its ad-delivery algorithms were independently discriminatory.
  • A civil suit in the Northern District of California alleging violations of the Fair Housing Act, the Equal Credit Opportunity Act, and California fair lending laws.

The panel discussed whether additional regulatory action or guidance is needed and what would be necessary to produce effective outcomes, including the March 2021 interagency Request for Information and Comment on Financial Institutions’ Use of Artificial Intelligence. In that request, the Office of the Comptroller of the Currency, the Federal Reserve, the Federal Deposit Insurance Corporation, the Consumer Financial Protection Bureau, and the National Credit Union Administration solicited information about financial institutions’ use of artificial intelligence and machine learning, the challenges of developing, adopting, and maintaining artificial intelligence, and whether agency clarification would assist financial institutions. The panel predicted that the end product of this request would be guidance similar to the interagency model risk management guidance rather than a formal rulemaking.

The panel also addressed risks associated with unfair, deceptive, or abusive acts or practices (“UDAAPs”), including access to and use of data, steering risks, and the targeting of vulnerable consumers, such as through data that may serve as intentional or unintentional proxies for race. Panelists pointed out that even if a financial institution does not request targeting on any prohibited basis, a third-party platform may use data in a discriminatory manner regardless of the institution’s intent, which can result in unwanted steering or pricing issues.

The panelists also offered practical insights on mitigating these fair lending risks. Mitigation techniques include negotiating a clear understanding of what type of data a financial institution uses in its ad targeting, as well as what data a third party uses (or deliberately excludes) in any “black box” algorithms. Additionally, the panelists emphasized the need to continually train any artificial intelligence and machine learning systems to reject outcomes that rest on discriminatory bases or have a disparate impact on a protected class, and, where such outcomes are found, to seek a less discriminatory alternative.
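
As a purely illustrative example of the kind of outcome testing the panelists described, the sketch below computes an adverse impact ratio (the “four-fifths” screen commonly cited in discrimination analysis) on invented ad-exposure counts; it is a minimal sketch, not a method attributed to any panelist or regulator:

```python
# Hypothetical disparate-impact screen on ad-delivery outcomes, using the
# conventional "four-fifths" adverse impact ratio. All counts are invented.
def adverse_impact_ratio(shown_a, total_a, shown_b, total_b):
    """Ratio of the lower group's ad-exposure rate to the higher group's."""
    rate_a = shown_a / total_a
    rate_b = shown_b / total_b
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Invented numbers: how often each group was served a credit advertisement.
ratio = adverse_impact_ratio(shown_a=450, total_a=1000, shown_b=700, total_b=1000)
print(f"Adverse impact ratio: {ratio:.2f}")
if ratio < 0.8:  # the conventional four-fifths threshold
    print("Below 0.8: outcome warrants investigation for disparate impact.")
```

A screen like this is only a first pass; a flagged ratio would prompt the deeper review and search for less discriminatory alternatives that the panel recommended.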

This article was prepared by the Business Law Section's Consumer Financial Services Committee.
