The Algorithm in Culture and Its Recent Dramatic Rise in Prominence
The dangers of data-centric technology and especially artificial intelligence (AI) have captured our imaginations for decades. In 1968, the same year the FHA was passed, the world met a sentient AI entity named HAL 9000 on a spaceship in the film 2001: A Space Odyssey. (Recall that HAL, speaking in its iconic voice of eerie calm, also decided for the sake of the mission to close the door against someone who wanted in.) Kazuo Ishiguro’s novel Klara and the Sun (2021) explored the integration of AI into humanity and society, and, as the author said in an interview for WIRED magazine, “I think there is this issue about how we could really hardwire the prejudices and biases of our age into black boxes, and won’t be able to unpack them.” On the documentary side, the acclaimed Coded Bias (2020) chronicles how Joy Buolamwini, an MIT-trained computer scientist and founder of the Algorithmic Justice League, discovered coded racial bias in facial recognition systems and then fought with others to expose that bias.
Since 2022, however, both AI’s popularity and its notoriety have surged. With the advent of ChatGPT and other new products and advances, algorithms have been writing magazine articles, researching new drugs, and predicting complicated scientific processes. In the current AI boom, algorithms have advanced further into our lives, seemingly past the point of no return.
But since 2022, lawmakers, enforcement agencies, and civil rights and consumer rights attorneys have also ramped up their scrutiny of AI’s pitfalls, especially algorithmic bias. Algorithms’ roles in decision-making in housing, employment, consumer protection, and privacy are now the subject of lawsuits, new laws, and advisory documents. Congress has conducted hearings, the Biden administration has issued an executive order, and attorneys general (AGs) from 15 states have jointly submitted a letter to the Federal Trade Commission.
The AGs’ letter focused on tenant screening programs. And just as the U.S. Department of Justice filed an amicus brief in June 2022 advising the court in Mary Louis’s case that SafeRent’s software might violate the FHA, the department also settled a case, first advanced in 2019 by the U.S. Department of Housing and Urban Development, in which Meta—then Facebook—had been accused of housing discrimination through its advertising platform. The Meta matter involved another type of double-blind scheme: allegedly, neither the housing providers nor those seeking housing knew that Facebook’s advertising algorithm had, based on information about Facebook members’ race, gender, and other protected characteristics, effectively blocked those looking for housing from seeing providers’ housing advertisements. (See Gary Rhoades, “Facebook and the Fair Housing Act,” Los Angeles Daily Journal, April 11, 2019.)
The Creation of an Algorithmic Tenant Screening Product
For those of us in the legal field who are not computer scientists, data-centric technology and algorithms can be difficult to understand. It helps to start with the central mission, which seems to be to create a product that will be profitable and alluring for housing providers. A hypothetical software team is told that residential landlords want a simple, easy-to-use product that tells them who will be good or bad tenants. The sales team is on standby, ready to swap out good or bad for the more provocative safe or unsafe.
After research and consultation with someone with tenant screening experience, the team comes up with a set of instructions—the algorithm—for the computer to predict the “safety” of any prospective tenant. The team then might expose those instructions to historical data to train the algorithm to generate more useful predictions. For example, a quick pass over civil court data might teach the algorithm that applicants from certain zip codes are more likely to end up in some level of dispute or litigation with their housing provider. Whether the algorithm receives this machine training or not, the sales team might use puffery to imply that high-tech AI is at work with a secret and infallible formula for predicting who will be an unsafe tenant. They emphasize that the program will comb government and private repositories that hold information about an applicant’s arrests, convictions, bankruptcies, credit scores, and eviction litigation and then apply that magic formula to the applicant’s data. Finally, the product is ready to be sold and downloaded by landlords, who then often discard more hands-on, holistic approaches to tenant screening in exchange for the simple score or label. After all, the work has ostensibly already been done by a secret formula applied to everything known about the tenant. However, both the formula’s criteria and the applicant’s data come with deep flaws.
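To make the hypothetical concrete, the following short Python sketch shows the kind of simple rule-based scoring such a product might use. Every field name, weight, zip code, and threshold here is invented for illustration and drawn only from the scenario above, not from SafeRent or any actual screening product; the point is how arbitrary criteria, including a zip code “learned” from court data, collapse into a single safe-or-unsafe label.

    # Purely hypothetical sketch of a rule-based tenant "safety" score.
    # Field names, weights, zip codes, and thresholds are invented for
    # illustration; they do not describe SafeRent or any real product.

    HIGH_DISPUTE_ZIPS = {"02121", "02126"}  # zip codes flagged by hypothetical court-data "training"

    def safety_score(applicant: dict) -> int:
        """Higher score means 'safer' under these invented criteria."""
        score = 100
        score -= 25 * applicant.get("eviction_filings", 0)  # counts filings, not outcomes
        score -= 20 * applicant.get("arrests", 0)            # arrests, not convictions
        if applicant.get("credit_score", 700) < 620:
            score -= 15
        if applicant.get("zip") in HIGH_DISPUTE_ZIPS:         # zip code acting as a proxy criterion
            score -= 30
        return max(score, 0)

    def label(applicant: dict, threshold: int = 60) -> str:
        """Collapse the score into the one-word label a landlord sees."""
        return "safe" if safety_score(applicant) >= threshold else "unsafe"

    applicant = {"eviction_filings": 1, "arrests": 0, "credit_score": 640, "zip": "02121"}
    print(safety_score(applicant), label(applicant))  # prints: 45 unsafe

A landlord running such a tool never sees the weights or the zip code proxy, only the one-word label, which is the dynamic described above.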
“Most of the criteria used to screen tenants are inherently arbitrary and are not based on any kind of empirical evidence or studies,” says Eric Dunn, litigation director at the National Housing Law Project and one of the plaintiffs’ attorneys in Arroyo v. SafeRent Solutions. “That’s especially true for criminal history screening—it’s largely just stereotypes and racist biases repackaged as concerns about safety and security, even though the studies have found there’s no connection between criminal history and being a poor tenant or posing any kind of safety hazard.”
Disparate Impact Analysis Applies in Algorithmic Bias Cases
A key fair housing issue raised by algorithmic bias is whether a landlord or software company can be held liable under fair housing laws for discrimination when the landlord uses software with the seemingly race-neutral mission of excluding so-called “unsafe” renters, even though the software company owns no housing and did not directly deny an applicant.
Whether one actually provides housing is irrelevant under the FHA. It prohibits a wide array of discrimination, up and down the chain of housing, from the newspaper ad to the insurance company to the manager to the owner. Also, federal fair housing law has always prohibited not just outright intentional discrimination but also any policies and decisions that have a “disparate impact” or discriminatory effect on the protected classes. Fair housing advocates breathed a sigh of relief in 2015 when the Supreme Court in Texas Department of Housing and Community Affairs v. Inclusive Communities Project, Inc., 576 U.S. 519 (2015), upheld the use of disparate impact analysis in fair housing cases.
After the Inclusive Communities decision, and despite it, the Trump administration attempted through regulation to make algorithmic discrimination claims very difficult to prove, but the Biden administration has since scuttled those regulations and issued its own executive order affirming the need to regulate algorithmic discrimination in housing and other areas. Thus, the use of a tenant screening program that has a statistically significant effect of excluding minorities should still be unlawful under the FHA, even absent any intent to discriminate.
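To give a rough sense of the statistical side of a disparate impact claim, the Python sketch below compares approval rates for two hypothetical applicant groups and applies the four-fifths (80 percent) rule of thumb, a screen borrowed from employment guidelines rather than from the FHA itself. The numbers are invented; an actual case would rest on far more rigorous statistical and legal analysis.

    # Illustrative disparate impact screen using invented numbers.
    # The four-fifths rule is a rough benchmark borrowed from employment
    # guidelines, not a definitive FHA standard.

    def approval_rate(approved: int, applied: int) -> float:
        return approved / applied

    protected_rate = approval_rate(approved=120, applied=300)      # 40% approved
    non_protected_rate = approval_rate(approved=240, applied=400)  # 60% approved

    impact_ratio = protected_rate / non_protected_rate
    print(f"Selection-rate ratio: {impact_ratio:.2f}")  # 0.67

    if impact_ratio < 0.8:
        print("Below the 80% benchmark; the disparity warrants closer statistical review.")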
Conclusion: Both Litigation and Reform Needed to Hold the Line Against Algorithmic Bias
The cases against SafeRent are still in litigation or on appeal, and the record is mixed, but Dunn is optimistic that the FHA and other fair housing laws will hold. He emphasizes that having an automated system apply an arbitrary, non-empirically validated screening policy just reproduces the discriminatory outcomes baked into that policy. “Maybe it adds a false veneer of objectivity to have a machine do it instead of a person,” he says, “but that’s the only real difference. And all that is exacerbated by the frequent data errors and misidentification problems that regularly arise when you have machines processing all this information.”
Reform efforts have included calls for any algorithmic tenant screening to include disclosure of any reliance on an algorithm, provision of a report to the tenant of what data was used, an opportunity for the tenant to correct any errors, and consideration of the value of any housing vouchers. The AGs’ report also recommended audits for “race based or digital redlining resulting from biased underwriting” in all tenant screening products.
Dunn also laments that what is being lost is the “human common sense filter” to catch the machine’s errors. And perhaps, at the end of the day or at the beginning of the next congressional hearing, that is what can be pursued in the all-important civil rights issue of tenant screening—what some computer scientists and much of science fiction have aspired to—a way to dynamically combine objective data, truth, and technology with our common sense.