
Real Property, Trust and Estate Law Journal

Fall/Winter 2024

The Trouble With Traffic Studies: Why Bad Traffic Predictions Are Making Our Cities Worse And What Courts Should Do About It

Kenneth Alan Stahl and Kristina Currans

Summary

  • Most land use decisions require predictions about the amount of traffic a new development will generate, but these predictions are often inaccurate because the predominant method used to predict traffic impacts tends to significantly over-estimate vehicle traffic. 
  • As a result, land use decisions may perpetuate automobile dependency and generate hostility to new housing development. 
  • This Article argues that the conventional, vehicle-oriented method for predicting traffic impacts does not withstand judicial scrutiny.
  • The Article concludes that courts should review predictions of future traffic with greater skepticism and agencies should explore and adopt more reliable methods for estimating the traffic impacts of new development.

Synopsis: Land use decisions today almost invariably require predictions about the amount of traffic a new development will generate, but these predictions are often inaccurate because the predominant method used to predict traffic impacts tends to significantly over-estimate vehicle traffic. As a result, land use decisions predicated on such inaccurate traffic predictions end up perpetuating automobile dependency and generating hostility to new housing development, deepening a severe housing crisis throughout the country. Planners, lawyers, developers, elected officials, and courts mostly seem content to ignore these obvious problems, but their complacency may not last much longer. This Article argues that the conventional, vehicle-oriented method for predicting traffic impacts does not withstand judicial scrutiny under any of the standards of review applicable to land use decisions. Therefore, we conclude that courts should review predictions of future traffic with greater skepticism. Indeed, the United States Supreme Court recently decided an important case, Sheetz v. County of El Dorado, California, 601 U.S. 267 (2024), that may require courts to review traffic predictions more closely. Our research should also encourage members of the public to push back against traffic studies that over-estimate traffic impacts and spur agencies to explore and adopt more reliable methods for estimating the traffic impacts of new development.

I. Introduction

Virtually all land use decisions require rough predictions of future traffic. Suppose a developer asks a city for approval to build a large apartment complex on a busy commercial thoroughfare. Before approving the project, the city will need to know how much automobile congestion the project will add to already congested roads. Or, suppose a city is considering whether to change its zoning regulations to permit religious institutions in a residential neighborhood to satisfy federal requirements that cities’ land use rules accommodate religious land uses. A “rezoning” of this sort is likely to have some impact on existing automobile traffic, but the precise impact will depend on numerous factors, such as the number of congregants expected to attend worship services, how often and when such services take place, the mode of travel congregants typically use to attend services, and so forth. In order to craft an appropriate zoning ordinance that meets the need for new religious institutions while respecting existing neighborhoods, the city must find some way of estimating traffic impacts that takes account of these various factors.

For years, cities and other planning agencies have conventionally predicted vehicle traffic using “trip generation” statistics provided by a non-profit organization called the Institute of Transportation Engineers (“ITE”). These are “off the rack” estimates of the number of trips produced by a particular land use—5.44 trips per day for each apartment, 3.81 trips for every 1,000 square feet of a shopping center, and so on. To determine how much traffic a particular project will generate, agencies will commission a traffic study, or “Traffic Impact Analysis” (TIA), which typically plugs these trip generation rates into the actual land uses proposed by the project to produce a number of estimated trips. For example, if a project proposes 100 apartments, the traffic engineers conducting the study will multiply 100 times the trip rate of 5.44 for a total of 544 trips per day.
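The arithmetic underlying this step can be sketched in a few lines of Python. The two rates are the ITE figures quoted above; the dictionary structure and function names are our own illustrative scaffolding, not part of any ITE publication.

```python
# Sketch of the conventional trip generation arithmetic described above.
# Rates are the ITE figures quoted in the text; everything else is illustrative.

ITE_RATES = {
    "apartment": 5.44,        # daily trips per dwelling unit
    "shopping_center": 3.81,  # daily trips per 1,000 square feet
}

def estimated_daily_trips(use: str, units: float) -> float:
    """Multiply the project's size by the standard rate for its land use."""
    return ITE_RATES[use] * units

# The 100-apartment example from the text:
print(estimated_daily_trips("apartment", 100))  # roughly 544 trips per day
```

As the text notes, a traffic study applies this same multiplication to each land use in a proposed project, which is why the choice of rate drives the entire analysis.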

These traffic studies, and the trip generation method underlying them, have become increasingly important parts of the land use decision-making process. For a variety of reasons, traffic congestion tends to worsen with population growth and prosperity, causing traffic to become a hot-button political issue that can raise the ire of local residents and put pressure on local officials to reject or shrink projects that are predicted to increase traffic congestion. As a result, more than simply informing land use decisions, a traffic study’s prediction of future traffic may be determinative of a project’s success.

The centrality of traffic studies and trip generation in land use decisions is troubling because, as we discuss in this Article, these studies are often unreliable and have been shown to over-estimate vehicle traffic in many urban contexts, while often treating alternative forms of travel, like biking, walking, and transit, as an afterthought. Moreover, their overpredictions of vehicle traffic can become self-fulfilling prophecies because cities often attempt to accommodate the predicted traffic by requiring road widening and increased parking that induce the very traffic they predict.

The flawed nature of the conventional ITE trip generation method creates two important practical consequences for our cities. First, it perpetuates an auto-dominant society. Adding more roads and more parking not only causes people to drive and park more, it also takes away space that could be devoted to parks, housing, bike lanes, and other amenities residents could enjoy and use safely and affordably. The sum total is more needless automobile deaths, more greenhouse gas emissions, and an impoverished urban environment. Second, where studies over-predict traffic or parking demand, they can heighten opposition by residents who fear more traffic on congested roads or difficulties finding parking, resulting in new housing being blocked or shrunk, or simply made much more expensive.

Despite the dire consequences that flow from the flawed ITE method, most of the actors in our land use planning system seem resigned to its continued use. Courts tend to defer to the ITE method on the grounds that it is the “industry standard,” without inquiring as to its underlying validity. But ITE remains the industry standard largely because the engineers and planners who use it know that courts will accept it uncritically, even as the planners and engineers themselves often bemoan the inaccuracy and unreliability of the method. Elected officials rarely have the background or expertise to understand the impact these methods can have. One might think developers have the most incentive to move away from a flawed status quo that increases opposition to projects and raises costs, but developers prefer the certainty of a process that they know how to navigate, regardless of its flaws, over a potentially better process that is new and untested. And because these methods are considered “off-the-shelf,” a relatively simple and readily available dataset that’s easy to apply, all stakeholders in the process have little incentive to change. As one local staffer put it, “The easiest thing to do is keep doing what you’ve been doing for 10 or 20 years.” The status quo is thus a vicious cycle in which innovation stagnates and criticisms of the conventional approaches to measuring traffic impacts rarely seem to be addressed in the methodologies and guidelines.

As we argue in this Article, however, the status quo may not survive much longer because courts have increasingly—and appropriately—called for a hard look at municipal traffic predictions. Though courts are generally quite deferential to municipal land use decisions, we find that they frequently apply heightened scrutiny to decisions based on alleged traffic concerns because of the likelihood that such concerns are rooted in empirically unsupported fears. Indeed, the Supreme Court recently issued an important decision in Sheetz v. County of El Dorado, California that looks likely to elevate the level of scrutiny applied to traffic predictions. For years, cities have been able to demand that developers pay to mitigate impacts of their developments without even showing any kind of relationship between the development and the asserted impact, as long as the demand is made through a broadly applicable policy rather than an individualized determination. Sheetz, which actually involved a traffic mitigation fee policy formulated using the ITE trip generation method, now requires that such “legislative” fees have at least some correlation with the impact of the development. Though the Court left it to lower courts to determine in the first instance how closely to scrutinize such legislatively enacted fees, it is unlikely that the ITE method will be able to meet any standard other than an exceptionally deferential one. Indeed, conventional, vehicle-centric traffic studies seem to be exactly the type of “junk science” that courts typically reject in other contexts.

This Article accordingly has three goals. The first is to demonstrate that traffic studies demand more scrutiny than they have customarily received. Second, we argue that the conventional methodology is unlikely to survive that scrutiny in many contexts. Third and finally, we propose ways to break the inertia that marries all stakeholders to the status quo. While courts cannot reasonably be expected to spark innovation given their institutional limitations in engineering and planning disciplines, legislatures and transportation agencies can—and have been shown to—push practitioners to innovate and give courts confidence to meaningfully assess the reliability of trip generation and traffic studies.

Part II below describes the conventional method of trip generation, shows how and why it has become so prevalent, and chronicles the major critiques of that method, including its inconsistent reliability and tendency to substantially overestimate vehicle traffic. We conclude that the conventional method is too inconsistently reliable to be so widely used and accepted. Part III then discusses several of the standards of review applicable to different types of land use decisions. While these standards vary, we find ample precedent for the conclusion that under any standard of review, the methodology of traffic studies should be subjected to a level of judicial scrutiny that the conventional, auto-oriented trip generation method will have difficulty satisfying. Finally, Part IV addresses what a world without the conventional ITE method would look like and responds to some likely objections to abandoning that method.

II. The Use and Misuse of Trip Generation
and Traffic Studies

Before getting into the nuts and bolts of trip generation and traffic studies, we should step back and provide some context as to why they have become so significant in land use decisions. Two important developments are at the root. First, traffic is such a volatile political issue at the local level that municipalities are under enormous pressure to manage the traffic impacts of new development. Numerous studies show that traffic is one of the most contentious issues in local politics today and deeply affects attitudes towards new development. For example, a recent study on development in the Sacramento area revealed that parking and traffic were far and away the most significant concerns articulated by residents regarding new housing development. Sixty-three percent of respondents identified traffic and parking as their primary concern. The next closest issues, neighborhood character and school overcrowding, received only 23% and 22%, respectively.

The political salience of traffic places a premium on the ability of local governments to predict future traffic. Such predictions can be used to sell projects to a skeptical public by reassuring them that the project will have a minimal traffic impact, or conversely, to kill a project with fears of gridlock and parking nightmares. Traffic predictions can be used to plan the city’s future development so as to prospectively avoid provoking the ire of residents over congestion, as well as to craft appropriate project-specific “mitigations” such as fees or road widening to reduce the traffic impact of new development.

The second reason why traffic predictions have become so important is because, as we detail in Part III, courts increasingly require cities to support their land use decisions with some kind of credible evidence, rather than just the subjective fears of residents. The U.S. Supreme Court itself has required, for example, that cities quantify the impacts of new development when crafting mitigations to ensure the developer is not being overcharged. By and large, when it comes to traffic impacts, municipalities have responded to the demand for quantification by commissioning traffic studies that employ the concept of trip generation described below to estimate the number of trips a new development will create. Traffic studies and trip generation, accordingly, have become major parts of how agencies make land use decisions. This development is troubling, however, because the conventional methodology often underlying traffic studies and trip generation is deeply flawed in numerous ways that tend to induce more driving, block needed housing projects, and raise housing costs. This observation raises the question, which we address in Part III, of whether land use decisions based on this conventional methodology can satisfy judicial scrutiny.

A. Trip Generation and Traffic Studies

The foundation of the conventional approach to predicting traffic is the concept of “trip generation,” which is the idea that a new land use such as an apartment building or a restaurant will “generate” a number of new vehicle trips arriving and departing. Trip generation is usually determined using standard trip generation rates published by ITE. For example, as described above, ITE’s standard trip generation rate for multi-family dwellings is 5.44 trips per day for each apartment.

Traffic analysts use these trip generation rates to create traffic studies that purport to predict the impact of land use development on nearby transportation facilities. Traffic analysts generally create these studies in four steps:

  • First, the study estimates the number of vehicle trips generated by the development (meaning daily trips to and from the development) by applying the standard trip generation rates to the development. For example, to predict the amount of traffic an apartment building will generate on a daily basis, engineers simply multiply the number of apartments in the building by the standard trip generation rate of 5.44.
  • Second, some number of trips may be deducted to account for alternative modes of transportation such as biking or transit, if data are available and local guidelines allow for it (called “mode share adjustment”);
  • Third, the remaining vehicle trips are then distributed to different nearby facilities and in different directions, usually by extrapolating from existing traffic patterns; and
  • Fourth, the baseline traffic before the development is compared to the amount of traffic predicted for the new development based on different performance measures (often seconds of vehicle delay and/or queuing vehicles).
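The four steps above can be sketched as a simple pipeline. The trip rate is ITE’s apartment figure from the text; the mode-share percentage, directional split, and baseline traffic volumes are hypothetical placeholders, not values drawn from any actual study or guideline.

```python
# Illustrative sketch of the four-step traffic study described above.
# Only the 5.44 rate comes from the text; all other numbers are hypothetical.

APARTMENT_RATE = 5.44  # ITE daily trips per dwelling unit

def step1_generate(units: int, rate: float = APARTMENT_RATE) -> float:
    """Step 1: apply the standard trip generation rate to the project."""
    return units * rate

def step2_mode_share(vehicle_trips: float, non_auto_share: float) -> float:
    """Step 2: deduct trips assumed made by walking, biking, or transit."""
    return vehicle_trips * (1 - non_auto_share)

def step3_distribute(vehicle_trips: float, splits: dict) -> dict:
    """Step 3: distribute remaining trips to nearby facilities by direction."""
    return {road: vehicle_trips * share for road, share in splits.items()}

def step4_compare(baseline: dict, added: dict) -> dict:
    """Step 4: compare baseline traffic to baseline-plus-project traffic."""
    return {road: baseline[road] + added.get(road, 0) for road in baseline}

# Hypothetical 100-unit project with an assumed 10% non-auto mode share:
trips = step2_mode_share(step1_generate(100), non_auto_share=0.10)
added = step3_distribute(trips, {"Main St": 0.6, "Oak Ave": 0.4})
after = step4_compare({"Main St": 12000, "Oak Ave": 4000}, added)
```

In practice, the output of step four is then translated into performance measures such as seconds of delay or vehicle queuing, but the demand estimate produced in step one drives everything downstream.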

In this Article, we constrain our critique primarily to step one, or the demand side of traffic studies—the methods for predicting use of the development and the corresponding ways that use is evaluated and applied in land use decisions. There exist other analytical steps in the traffic study evaluation process—such as micro-simulation analysis of traffic signal performance or timing, and traffic assignment (sending estimated demand in different directions). While there may also be reason to scrutinize those methods of analysis, in this Article, we focus primarily on the methods that associate traffic demand with land development—often called trip generation—within the context of traffic studies. This stage is where traffic studies begin and—because of the limitations in these approaches explored in later subsections of this Part—where the concern is greatest.

B. The Prominence of Traffic Studies/Trip Generation and Their Uses in Practice

As described above, traffic studies and trip generation have become prominent due to a combination of the political imperative to predict traffic and the legal imperative to quantify traffic impacts in some way. As an illustration of how pervasive traffic studies and trip generation are in land use decisions today, Table 1 shows many of the different ways that these tools are used by municipalities to make predictions about future traffic:

Table 1 Land Use Decisions: Drawing From Trip Generation Methods

Land Use Evaluation

General Description

Concurrency/Adequate Public Facilities Ordinances

If development causes estimated traffic congestion to worsen past some threshold performance measure, called “Level of Service” or “LOS,” the development cannot be approved until LOS is improved.*

General Plan Circulation Element

Local governments must prospectively plan road usage in correlation with future land uses based on estimated traffic demand.†

Growth Control Measures

Development generating more than X-trips is subject to voter approval.‡

Performance Zoning Standards

Development generating more than X-trips is prohibited or requires a conditional-use permit.§

Exactions (Impact Fees, System Development Charges)

Fair-share mitigation fee based on trips/parking generated.||

Transfer of Development Rights (TDR)

Estimated trips in a plan area are “capped” and developers within the plan area can trade trips for cash or development rights.#

Environmental Review

Projects generating more than a certain number of vehicle miles traveled or similar metrics subject to heightened environmental review.**

Subdivision Review

Project cannot be approved unless infrastructure is adequate, usually measured by trip/parking generation.††

Rezonings, other discretionary permit

Project may be denied based on impact/incompatibility with surrounding area, often based on projections of increased traffic.‡‡

Special Assessment/ Transportation Utility Fee

“Benefitted” landowners charged based on trips generated.§§

* This approach is prevalent in Florida. See Ruth L. Steiner, Florida’s Transportation Concurrency: Are the Current Tools Adequate to Meet the Need for Coordinated Land Use and Transportation Planning?, 12 U. Fla. J. L. & Pub. Pol’y 269 (2001).
† See generally, Governor’s Off. of Planning & Rsch., State of Cal., General Plan Guidelines 77-78 (2017) [hereinafter Guidelines] (describing conventional practice under California’s general plan guidelines).
‡ Newport Beach, California’s “Greenlight” initiative, for example, requires a public vote on any land use change that would permit a development generating more than 100 peak-hour vehicle trips per day, as determined by the ITE trip generation method. See Michael Manville & Taner Osman, Motivations for Growth Revolts: Discretion and Pretext as Sources of Development Conflict, 16 City & Cmty. 66, 75-76 (2017).
§ See Donald Shoup, Truth in Transportation Planning, 6 J. of Transp. and Stat. 1, 9 (2003) (quoting Beverly Hills Municipal Code section [currently section 10-3-1632(5)] providing maximum trips for uses in the Commercial-Transition zone and specifying that trip rates shall be calculated using the ITE manual).
|| See, for example, the mitigation program described in Sheetz v. County of El Dorado, 300 Cal. Rptr. 3d 308 (Cal. Ct. App. 2022), review denied (Feb. 1, 2023), vacated and remanded sub nom, Sheetz v. Cnty. of El Dorado, Cal., 601 U.S. 267 (2024).
# See City of Irvine, Cal. Zoning Ordinance § 9-36-18.
** See Governor’s Off. of Plan. & Rsch, State of Cal., Revised Proposal on Updates to the CEQA Guidelines on Evaluating Transportation Impacts in CEQA: Implementing Senate Bill 743, at 18 (2016) [hereinafter Revised Proposal] https://www.opr.ca.gov/docs/Revised_VMT_CEQA_Guidelines_Proposal_January_20_2016.pdf.
†† See, e.g., Cal. Gov’t. Code § 66474 (requiring denial of subdivision map if the site is not “physically suitable” for the type and density of development).
‡‡ See Norman Williams, Jr. & John M. Taylor, 1 American Land Planning Law § 11:1 (2024) (“[O]ne of the major criteria used in almost all land use classification—and thus in the formulation of zoning districts—is the amount of traffic generated by different types of land use.”)
§§ See infra Section III.D.

C. Theoretical and Practical Issues with Traffic Studies

As this table makes clear, traffic predictions pervade every aspect of the land use decision-making process. By and large, agencies use the conventional ITE method to make these predictions. This agency practice is deeply problematic because the ITE method is frequently inaccurate, biased toward overestimating vehicle trips, and overly theoretical. Making matters worse, the method has never been systematically tested to determine whether it is reliable, leaving us to speculate about how accurate or inaccurate its predictions may be.

To simplify the discussion, we will use the terms “conventional” traffic study and ITE method to refer to the more traditional approach to estimating vehicle traffic impacts, which often relies on older and more suburban data, focuses mostly or entirely on vehicle traffic impacts, and typically assumes alternative modes of travel are not feasible in those locations. In subsection 3, we explore innovations developed across the U.S. to adapt the traditional approaches to modern urban environments and contexts where commuters use a variety of transportation methods (often referred to as “multimodal” travel).

1. Flaws in the ITE Method

ITE predicts trip generation rates by extrapolating from data about existing trips to various land uses. If the data shows that existing apartment buildings are associated with X number of trips per day, or shopping centers are associated with Y number of trips per day for every 1,000 square feet, ITE assumes that future apartment buildings will also generate X trips per day, and so on. Though this approach is sensible generally speaking, it only works if the underlying data is reliable. Unfortunately, the data is not. For one thing, not much data exists to rely on. Many of the trip generation rates are based on extremely small sample sizes—sometimes as few as eight data points. Generalizing trends based on such small samples is perilous. In addition, the data tends to be old and represents largely suburban or exurban land use patterns with ample free parking but little access to transit or other modes of transportation. Studies focusing on urban contexts and “smart growth” transit-oriented development have consistently found the suburban nature of ITE’s data to overestimate vehicle demand in urban areas and underestimate travel by bike, walking, and transit. In short, because it assumes that all trips are made by automobile regardless of context, ITE is likely to significantly overestimate vehicle traffic in places where other modes of transportation are widely used.
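The extrapolation described above amounts to averaging observed trips per unit across surveyed sites, which is why a small, suburban-heavy sample skews the resulting rate. A minimal sketch of that averaging, with site counts invented purely for illustration:

```python
# How a trip generation rate is derived: total observed trips divided by
# total units across surveyed sites. All site counts below are invented.

def trip_rate(observations: list) -> float:
    """Weighted average rate: total observed trips / total units surveyed."""
    total_trips = sum(trips for trips, _ in observations)
    total_units = sum(units for _, units in observations)
    return total_trips / total_units

# Hypothetical sample: (daily vehicle trips observed, apartment units) per site.
suburban_sites = [(580, 100), (540, 95), (610, 110)]  # car-dependent contexts
urban_site = [(220, 100)]                             # transit-rich context

print(round(trip_rate(suburban_sites), 2))               # 5.67
print(round(trip_rate(suburban_sites + urban_site), 2))  # 4.81
```

With only a handful of mostly suburban sites in the sample, the published rate reflects car-dependent contexts, and a single urban observation materially changes the answer—exactly the small-sample fragility the text describes.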

Aside from the underlying data, the conventional trip generation method itself is flawed in three principal ways, with major consequences for the way we live. First, conventional trip generation methods assume that vehicle demand predicted must be accommodated (often called “predict and provide”). For example, if a study were to show that a new development would add additional vehicle traffic that affected the vehicle delay at a nearby non-signalized intersection, the “provide” paradigm may require the intersection be upgraded with a signal. This expense can be prohibitive for a small development. Or, if a study were to show increased traffic congestion on a nearby street, the developer may be required to pay to widen the street to accommodate more vehicles. However, research consistently shows that street widening without traffic demand management techniques, such as congestion pricing, simply induces more cars onto the street. Hence, “predict and provide” has a circular logic that causes it to conjure up the very traffic it predicts, locking us into an unbreakable cycle of automobile dependency.

Second, the conventional method often assumes that all trips are new, with limited data to support estimates around trips that are pass-by (i.e., already on the same street) or diverted from other uses. In reality, some types of new development, like retail and service, are likely shifting regional demand around. Indeed, California’s new environmental guidelines for evaluating the impacts of vehicle travel recognize that “retail projects typically re-route travel from other retail destinations.” As a result, the conventional method is very likely to overestimate the number of new trips “generated” by a development. In fact, one study reports that ITE’s data likely overestimates vehicle trip-making in the U.S. by about 55%. Again, this persistent overestimation leads agencies to demand mitigation measures that raise costs for consumers and induce more driving, while also increasing public ire about new development.

Third, and perhaps most fundamentally, in the conventional trip generation approach, an analyst assumes that land uses “generate” vehicle demand. In reality, people demand activities and goods, and from those activities travel is derived. In other words, the whole idea that a development generates trips is something of a myth. Wrongly attributing trips to a specific new development, rather than public demand for the activity that generated the development, has two significant consequences. For one thing, it means that the common practice of agencies requiring developers to pay to mitigate the traffic impacts of new development is based on a false premise that the development is actually causing the new traffic. As a result, the cost of mitigating traffic impacts is misallocated to developers, and ultimately to consumers of the new development, rather than to the real “culprit,” which is the public at large. One outcome of that misallocation, for example, is that consumers of new housing are paying more for housing to subsidize benefits for the wider general public. It also means, as we will soon see, that the practice of traffic mitigation fees is vulnerable to a constitutional challenge on the grounds that it asks the developer to pay for something that in fairness ought to be borne by the public as a whole.

The other consequence of misattributing traffic impacts to new development is that it causes public anger about traffic to be mistakenly focused on new development. Angry residents rally around traffic studies showing that a development will “cause” thousands of new cars to appear on the roads, despite the fact that it is the residents’ own activities and demands that are responsible for the new traffic. This self-fulfilling prophecy leads to housing projects being rejected or shrunk, or developers being saddled with additional costs to “mitigate” traffic impacts, which, as noted before, are passed on to consumers in the form of higher housing costs.

In sum, conventional traffic studies likely overestimate driving, needlessly heighten political opposition to new development by confusing correlation with causation, and create a self-fulfilling prophecy by inducing the very driving they predict.

2. Reliability: What we do and don’t know

These theoretical issues could be addressed if we had some sense of how accurate the ITE method is in practice. But we don’t. Very rarely are completed TIA predictions compared with actual measured travel demand impacts to validate and test the reliability of the conventional approach, especially across land use types, built environment contexts, and time periods. There have been no systematic efforts to understand how close vehicle trip and parking generation estimates are to actual observed counts after the development has been built. This lack of validation makes it difficult to identify evidence regarding the reliability of conventional TIA methods.

Although several studies have been conducted that collect data in new contexts—such as those studies aimed at new land use types not previously represented in ITE’s references or new urban contexts—very few have been aimed at evaluating the reliability of TIA predictions. One of the rare studies to attempt an evaluation of reliability compared twelve completed and approved TIAs from Oregon with actual observed travel patterns after development. The authors found that predictions for retail land uses deviated from actual post-development vehicle travel by -55% to 153%, for industrial and office land uses by -9% to 64%, and for other land uses by -39% to 239%. These results give little reason for confidence in the accuracy of ITE’s predictions.
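The comparison the Oregon study performed amounts to computing the signed percent error of each TIA prediction against counts observed after the development was built. A minimal sketch, with the predicted and observed figures invented for illustration:

```python
# Percent error of a TIA prediction against post-development counts —
# the kind of comparison the Oregon study performed. Numbers are invented.

def percent_error(predicted: float, observed: float) -> float:
    """Signed percent deviation of the prediction from the observed count."""
    return 100 * (predicted - observed) / observed

# A hypothetical retail TIA that predicted 1,200 daily trips
# where only 790 materialized after construction:
print(round(percent_error(1200, 790), 1))  # 51.9, i.e., a ~52% over-prediction
```

A positive error means the study over-predicted traffic, which, as the Oregon results suggest, is the common direction for retail land uses.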

3. Innovations

As scholars and practitioners have become more aware of the flawed nature of the conventional trip generation method and the resulting traffic studies, they have increasingly innovated new methods for more accurately predicting trips. The increasing use of these innovations in practice raises the question of whether the conventional ITE method remains the industry standard for predicting trips, as it has been for many decades. This question is important because, as discussed further in the next Part, courts typically defer to the ITE method on the grounds that ITE is the industry standard, which may no longer be the case.

For example, given the conventional method’s nearly exclusive focus on automobile travel, one primary innovation seen across practice and research realms over the past decade is increased attention to multimodal travel data—including biking, walking, and transit. A study by Chris De Gruyter found a total of 153 publications focused on multimodal trip generation data and methods, with over half published since 2010. De Gruyter also reported on a number of multimodal studies that used novel data collection methods to capture non-automobile trips far more effectively than ITE. A study by Kristina Currans identified thirteen different approaches to estimating urban and multimodal trip generation—including three approaches developed by public agencies.

Many examples also exist where agencies and/or practitioners have adopted modifications to conventional methods to better align with local planning goals and policies at a more incremental level. One review of TIA practices in North Carolina and the Washington D.C. region by Tabitha Combs and co-authors identified thirty-eight distinct ways that local TIA practices varied from the conventional ITE approach across thirty-six jurisdictions. Among the variations, agencies frequently waived the need for a traffic study where projects met certain criteria indicating a likely lower traffic impact, required applicants to account for non-automobile modes, adjusted trip generation models using local data or calibrated models to local contexts using other types of data, and so forth.

Although no systematic nationwide review has inventoried the state of practice in TIA guidelines, these innovations appear in various forms across many parts of the U.S. For example, the Texas Department of Transportation has sponsored locally collected trip generation data to provide data more closely reflecting local contexts. San Francisco has expanded its guidelines to incorporate a range of the innovations found in the Combs study, including incorporating planning expertise early in the scoping process and incorporating multimodal data and evaluations. The Washington, D.C. Department of Transportation adopted a range of innovations to address concerns that overbuilding vehicle facilities induces additional vehicle travel demand, such as waiving the TIA requirement for projects within a short distance of transit.

While we were writing this Article, ITE also published the Multimodal Transportation Impact Analysis for Site Development (MTIASD), which had been under development for over six years. The MTIASD provides a recommended practice for conducting TIA studies, updating prior vehicle-oriented guidance to include multimodal facilities and a more holistic, integrated approach to multimodal transportation impact analysis. Earlier this year, the Florida Department of Transportation (FDOT) published its Multimodal Transportation Site Impact Handbook and Applications Guide, which includes multimodal case studies covering a comprehensive plan amendment rezoning land from agricultural to planned development, a fast-food restaurant near an interchange, a downtown mixed-use project, and a subdivision on a rural high-speed road. FDOT also recently published its Quality/Level of Service Handbook, which provides guidance on multimodal levels of service. In both the ITE and Florida guidance documents, non-auto trips are estimated using the ITE Trip Generation Handbook (third edition) and Trip Generation Manual (eleventh edition). And more recently, as many agencies focus on their Vision Zero goals in response to rising pedestrian fatalities, some have pushed for guidance incorporating additional safety measures into the TIA process. The United States Department of Transportation has published a data-driven approach to incorporating safety in TIA, and ITE has released a technical brief outlining the essential components necessary to incorporate safety into TIA processes.

A newer and less well-studied form of evaluating the transportation impacts of new development has been taking shape as well. Pro-Rata Share Districts (PRSD) provide an opportunity to replace TIAs with payments based on an allocation of the costs of area-wide improvements. PRSD differ from Special Area Districts by developing a rational nexus with anticipated development in the area, identifying the cost of improvements needed to support new development, and allocating those costs according to the size of each development. PRSD may help alleviate the problems of incremental infrastructure evaluation, ensuring no “free-riders,” reducing “last-in” issues, and improving predictability. PRSD can be useful in areas anticipating rapid change or requiring substantial investment. The ITE MTIASD recommended practice provides some high-level explanation. Clifton and coauthors explore one such example in Bellingham, Washington, but the working group behind the ITE MTIASD also identified other newer examples: DelDOT Improvement Districts; Florida Multimodal Transportation Districts; and special districts in Baltimore and Montgomery County, Maryland, and Portland, Oregon.
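The PRSD allocation logic described above (improvement costs apportioned to each development according to its size) can be sketched in a few lines. The dollar figure, square footages, and development names below are hypothetical illustrations, not data from any actual PRSD:

```python
# Hypothetical sketch of a Pro-Rata Share District (PRSD) cost allocation.
# Each development's share of the district-wide improvement cost is
# proportional to its size (here, floor area in square feet).

def prsd_shares(total_cost: float, developments: dict[str, float]) -> dict[str, float]:
    """Allocate total_cost across developments in proportion to their size."""
    total_size = sum(developments.values())
    return {name: total_cost * size / total_size
            for name, size in developments.items()}

# Illustrative district: $5M of improvements, three planned developments.
shares = prsd_shares(5_000_000, {"A": 50_000, "B": 30_000, "C": 20_000})
# A bears 50% of the cost, B 30%, C 20%; no free-riders, and each
# developer can predict its payment before a project is designed.
```

Because every development in the district pays by the same formula, the approach avoids the "last-in" problem in which the final applicant is charged for improvements that earlier projects made necessary.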

The notable takeaway from practice is that there are more examples to draw from than ever before. While innovation in urban and multimodal TIA guidelines often lags, many agencies are moving away from a vehicle-oriented “predict and provide” paradigm and towards a “predict and prevent” paradigm by connecting the underlying travel data and performance metrics in TIA processes with local planning goals. With improved sensitivity to urban environments and corresponding planning goals, these agencies may be better primed to respond to a changing landscape of transportation technologies and services—including those that have already shifted expectations around curb space management, such as ride share, (dockless) bike share and shared electric scooter programs, and increases in online shopping/ordering and urban delivery services. Conventional, vehicle-oriented protocols are simply not prepared to accommodate development with multiple uses in an environment with so many new and changing transportation options.

III. Standards of Review

Considering how unreliable trip generation and traffic studies often are, it is more than a little troubling that jurisdictions rely on them so heavily. Perhaps equally surprising is how readily courts have deferred to these methods, content to describe them as the “industry standard.” As we argue in this Part, the standards of review applicable to land use decisions require more scrutiny of traffic studies than courts typically apply.

As a caveat, there is a significant amount of variation in the standards of review courts apply to land use decisions, depending on the state and the type of decision involved. Furthermore, the standards of review are applied inconsistently and with few coherent principles. Sometimes, courts purport to apply a very lenient standard of review while actually applying a quite rigorous one. At other times, they claim to be applying heightened review while actually being very deferential. The inability or unwillingness of the courts to articulate clear standards of review and commit to applying them is likely a result of the conflicting imperatives and incentives courts face in reviewing land use decisions. On one hand, courts are confronted with a long-standing and probably reasonable tradition of deference to municipal authority on land use matters and humility about the judicial role in reviewing such matters. On the other hand, courts recognize that municipal land use regulation affords local officials enormous opportunities to abuse their discretion, thus implicating the need for some meaningful amount of judicial scrutiny.

With that caveat in mind, we think it worthwhile to sketch the basic standards of review courts generally apply and attempt to discern some basic principles. We find support for the idea that courts should be applying at least somewhat more scrutiny to traffic studies than they have traditionally applied. But we also find that there are serious institutional limits to what courts can realistically accomplish through judicial review, which indicates that meaningful reform may have to come from other avenues, such as state legislatures or administrative agencies.

A. Legislative Decisions and the “Fairly Debatable” Standard

The most common and fundamental type of land use decision agencies make is a “legislative” or “quasi-legislative” decision, typically defined as a decision that is very broad and prospective in nature, reflecting a policy judgment that affects the public at large. For example, the enactment of a general plan or a comprehensive zoning ordinance setting out permitted land uses in different areas across a jurisdiction is generally considered a legislative land use decision.

Legislative land use decisions often call for predictions of future traffic and parking. As a result, local governments frequently rely on the conventional ITE trip generation method described previously when making these sorts of decisions. Below are some of the principal examples:

  • Many states require local governments to adopt “general plans” that anticipate future land uses and make accommodations for residents to conveniently travel between those land uses, usually by automobile. Local governments conventionally accomplish this by calculating “level of service” (LOS), a measurement of the degree of congestion on local roads. LOS is generally based on vehicle delay and/or vehicle speed, determined using ITE trip generation rates.
  • In crafting a zoning ordinance, cities may attempt to forecast traffic impacts from different land uses to assist in determining where those land uses should be permitted within the city.
  • Zoning ordinances sometimes prohibit uses that generate more than a certain threshold of trips, with the threshold conventionally determined by reference to ITE trip generation rates.
  • Growth control measures, often enacted by voters, limit or require voter approval for new developments that exceed a set number of trips, similarly calculated using ITE’s trip generation rates.
  • Some states have “concurrency” policies that require or encourage coordination between local land use growth and traffic LOS standards, sometimes triggering traffic studies in response to plan amendments.
  • Although a less common approach, agencies that use performance-based zoning may tie approval of development applications to various evaluation criteria, such as traffic impacts and/or trip generation.
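Several of these mechanisms reduce, mechanically, to the same operation: look up a trip rate for a land use, multiply by the size of the development, and compare the result to an ordinance threshold. The sketch below illustrates that operation; the rates and threshold are our own hypothetical figures, not taken from the ITE manual or any actual ordinance:

```python
# Hypothetical sketch of a trip-threshold test as used in zoning ordinances
# and growth-control measures. Rates and threshold are illustrative only.

ITE_STYLE_RATES = {          # daily trips per unit of development
    "single_family": 9.4,    # per dwelling unit (illustrative)
    "office": 10.8,          # per 1,000 sq ft of floor area (illustrative)
}

TRIP_THRESHOLD = 500  # daily trips above which the ordinance is triggered

def requires_review(land_use: str, units: float) -> bool:
    """True if the project's estimated trips exceed the ordinance threshold."""
    estimated_trips = ITE_STYLE_RATES[land_use] * units
    return estimated_trips > TRIP_THRESHOLD

# A 60-unit subdivision: 9.4 * 60 = 564 estimated daily trips, so the
# ordinance is triggered even though the estimate rests on a national
# average rate that may not reflect local travel behavior at all.
```

Note that everything in the decision turns on the rate table: if the rate overstates vehicle trips, as Part II argues the ITE rates often do, the threshold is crossed by projects that would not actually generate that much traffic.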

As discussed in Part II, the conventional ITE trip generation method is, at best, inconsistently accurate. The upshot is that many of the legislative land use decisions cities are making based on the ITE trip generation rates are likely to be incorrect. Specifically, they are likely to overestimate vehicle trips, with the result that cities may be rejecting housing projects they should be approving due to a misguided fear of generating too much traffic, while sometimes completely ignoring alternative modes of travel (biking, walking, and transit). So can legislative land use decisions based on the flawed conventional, vehicle-oriented traffic study approach withstand judicial scrutiny?

While it is hard to answer this question definitively given the varying standards of review employed by courts, we conclude that the answer is likely to be “no.” To elaborate, we must first say a bit more about the judicial treatment of legislative land use decisions generally. In principle, courts approach such decisions with extraordinary deference—perhaps the most deferential treatment courts provide to regulatory actions. Legislative acts are upheld as long as their wisdom is “fairly debatable,” or a “rational basis” exists to support the decision, and there is no requirement to produce findings in support of the decision. According to Adam MacLeod, this standard means that

[t]he land use decision is presumed to be both legally legitimate and factually supported by the evidence, and the reviewing court defers to both the local government’s statement of purpose and the means chosen to protect that state interest, unless some irrationality, which is inexplicable on the record, unambiguously presents itself.

In some states such as California, the standard of review is even more deferential—the decision will be upheld unless the decision is “entirely lacking” in evidentiary support.

The deferential treatment of legislative acts is sensible. A policy decision that affects the public broadly rather than affecting one or a small group of landowners disproportionately is the kind of decision we conventionally leave to the political arena, an arena “in which the demarcation between facts, reasoning, policy, and discretion is quite vague.” How courts could evaluate wide-ranging policy decisions on any principled basis that would not simply substitute their own judgment for that of the legislative body is unclear. In addition, assertive judicial intervention in legislative decisions could be seen as violating a norm of separation of powers between the judiciary and the legislative/executive arms of the state.

Nevertheless, courts have been uncomfortable taking such a passive role in reviewing legislative decisions by local governments. Courts often express concern that local governments will make decisions based on the unfounded fears and prejudices of local residents rather than evidence and planning considerations, or that they will excessively focus on parochial local interests to the detriment of the region as a whole. For that reason, courts frequently relax the presumption of legitimacy for legislative decisions in practice, while paying lip service to it in principle. In the famous case of City of Cleburne v. Cleburne Living Center, for example, the United States Supreme Court applied rational basis review to a municipal decision to disapprove a group home for individuals with mental disabilities, but nevertheless heavily scrutinized the decision in question and concluded that there was no rational basis for the decision. The Court found that the city had yielded to the unsupported prejudices and fears of the surrounding community, which was not a rational basis because “mere negative attitudes, or fear, unsubstantiated by factors which are properly cognizable in a zoning proceeding, are not permissible bases” for a land use decision. In accordance with Cleburne, courts have often silently reversed the usually deferential standard of review applicable to legislative decisions when they have had reason to suspect that municipal denial of a project was motivated predominantly by neighborhood opposition without adequate evidentiary support.

It perhaps goes without saying at this point that neighbors often oppose projects based on fears of future traffic nightmares without much evidence to support those fears. Accordingly, a number of cases have held that neighborhood opposition based on traffic concerns could not support a denial absent some empirical evidence weightier than neighbors’ fears or anecdotal observations. In a similar vein, some state courts have struck down zoning ordinances restrictively defining “families” for purposes of residence in single-family home districts, observing that stated concerns about traffic could not justify the measure. Notably, although some of these courts claimed to be applying the very deferential standard of review applicable to legislative land use judgments, they did not in fact give much deference to the articulated traffic concerns.

In general, as Sara Bronin and Dwight Merriam’s leading treatise on land use law concludes, courts approach traffic predictions by municipalities skeptically and frequently hold that the mere fact that a project will increase traffic is not a basis to deny a rezoning (typically considered a legislative decision):

Perhaps more so than on other issues, courts demonstrate a tendency to closely examine the record of zoning proceedings to ascertain whether evidence and expert testimony therein fairly supports zoning decisions based on traffic considerations. This seemingly heightened judicial scrutiny may be the result of both a sensitivity to the potential for the misuse of traffic considerations in zoning decision making, and a natural outgrowth of the case law in a number of jurisdictions that a mere increase in traffic, unaccompanied by particular traffic hazards or unmanageable traffic congestion, generally will not be sufficient to deny a permit or development application.

Granted, none of the cases cited by Bronin and Merriam directly address the validity of the methodology used by traffic experts to predict traffic impacts, but if Bronin and Merriam are right that skepticism of expert testimony supporting traffic decisions is warranted, that logic would naturally extend to skepticism about the methods used to predict traffic.

The Supreme Court itself has recently indicated that legislative traffic predictions should be scrutinized more closely. As discussed further below in Section III.C, where jurisdictions demand the payment of a fee or the dedication of land in exchange for a development permit, commonly called an “exaction,” the Court subjects such exactions to a fairly rigorous standard of review. However, for years many courts held that this rigorous standard of review does not apply to exactions that are legislative in nature because of the deference commonly accorded to such decisions. Sheetz v. County of El Dorado, California was just such a case. In Sheetz, the county of El Dorado, California imposed a blanket fee on all new land uses (using the ITE trip generation method) based on their type and location, but without taking account of the impact of the particular development. The plaintiff was accordingly charged a fee of $23,420 for a modest residential project. The California Court of Appeal upheld the fee on the grounds that this fee was a legislative approval and therefore subject to extremely deferential review. The Supreme Court reversed, holding that all exactions must be subjected to heightened judicial review, regardless of whether they are classified as legislative or something else. Though the Court did not set out the exact standard applicable to legislative exactions, and suggested they might be subject to a more generous standard of review than other types of exactions, local governments clearly cannot evade meaningful review of traffic studies on the grounds that the decision is “legislative.”

In summary, courts are evidently uncomfortable with the near abdication that the fairly debatable/rational basis standard implies, perhaps especially in situations where predictions of future traffic are involved. This conclusion suggests that, even in legislative settings, courts should apply more scrutiny to traffic studies than they have customarily applied.

B. Quasi-judicial Decisions and the Substantial Evidence Standard

In administrative law, legislative decisions are often contrasted with “quasi-judicial” or adjudicative decisions. Where legislative matters are broadly applicable, prospective decisions that implicate a wide range of legal, factual, and policy questions, quasi-judicial decisions focus narrowly and retrospectively on the use of a particular parcel of land.

The previous section observed that legislative land use decisions often call for traffic predictions. It is difficult to engage in prospective planning on a large scale without some understanding of how planning decisions will affect traffic patterns. But making small-scale decisions regarding specific projects often also requires predictions about future traffic to gauge the impacts of new development on the surrounding community. For example:

  • Discretionary permits such as conditional use permits, variances, and subdivision maps typically require traffic studies. In many states, subdivision approval is contingent upon a finding that the existing road infrastructure is adequate to handle the volume of traffic anticipated.
  • In many states, and under federal law, certain developments require detailed analysis of a project’s environmental impacts, which is usually interpreted to include traffic impacts.
  • In some states, smaller scale rezonings are treated similarly to quasi-judicial decisions and subjected to heightened scrutiny. As discussed above, traffic impacts are often major considerations in whether to grant a rezoning to a more intensive use.

ITE is the conventional method used to predict traffic impacts in the quasi-judicial context. As discussed previously, however, that method can be inaccurate, unreliable (in the few cases it has been tested), and prone to significantly overestimating vehicle traffic. The fact that cities typically review development proposals based on the ITE method is therefore problematic, because it means that jurisdictions are making decisions based on methods of unknown reliability. As a practical matter, development projects may be getting denied too frequently because of misinformation about traffic impacts. Even more perniciously, a development may be prematurely scaled back or halted in the planning phase, before it ever has a chance at approval, because the mitigations associated with the traffic study outcomes are too costly. Again, we must ask whether the judiciary should apply more scrutiny to quasi-judicial decisions rooted in the ITE methodology.

In general, because quasi-judicial decisions are site-specific and retrospective, rather than broad and prospective like legislative decisions, courts tend to apply somewhat higher scrutiny. While courts see legislative decisions as counseling in favor of more lenient judicial review due to separation of powers concerns, courts often perceive quasi-judicial decisions as requiring stronger judicial oversight because they implicate an individual landowner’s property rights. Similarly, while judicial review of legislative decisions is difficult because of the policy questions involved and the free intermingling of facts, law, and policy, judicial review of quasi-judicial decisions is easier because they are, effectively, the kinds of decisions that courts make themselves in judicial proceedings.

Therefore, while legislative decisions are generally subjected to the highly deferential “rational basis” or “fairly debatable” standard, quasi-judicial decisions are typically evaluated using a somewhat more rigorous standard called “substantial evidence” or something similar. Under this sort of standard, quasi-judicial decisions must in principle be supported by some credible evidence. When it comes to expert testimony specifically, the California Supreme Court’s important decision in Pacific Gas & Electric Co. v. Zuckerman [hereafter PG&E] makes clear that the substantial evidence standard requires a hard look at expert testimony to assess its reliability:

Where an expert bases his conclusion upon assumptions which are not supported by the record, upon matters which are not reasonably relied upon [by] other experts, or upon factors which are speculative, remote or conjectural, then his conclusion has no evidentiary value . . . . In those circumstances the expert’s opinion cannot rise to the dignity of substantial evidence . . . . When a trial court has accepted an expert’s ultimate conclusion without critical consideration of his reasoning, and it appears the conclusion was based upon improper or unwarranted matters, then the judgment must be reversed for lack of substantial evidence.

According to the leading treatise on California administrative law, PG&E stands for the proposition that “[t]he obligation to take a ‘hard look’ at the quality of the evidence in conducting a substantial evidence review is no less compelling when the evidence consists of expert witness testimony.”

Under the standard articulated by PG&E, traffic studies using the conventional ITE method are unlikely to meet the substantial evidence standard. According to PG&E, expert testimony does not satisfy the substantial evidence test unless the testimony is “reasonably relied upon [by] other experts,” and such testimony may not be based “upon factors which are speculative, remote or conjectural.” As detailed in Part II, testing has never established the accuracy of the ITE method, and several studies show that ITE does not reliably predict multimodal traffic impacts, especially in urban areas. Thus, the method is appropriately described as “speculative and conjectural.”

Undoubtedly, the standard for expert testimony in a quasi-judicial proceeding is not as demanding as the standard usually required in jury trials. The federal court system and many states evaluate the admissibility of expert testimony in jury trials using the five-part test articulated in Daubert v. Merrell Dow Pharmaceuticals, Inc. Under the Daubert test, courts inquire into five factors: (1) whether a method is testable and has been tested; (2) the method’s known or potential error rate; (3) the existence and maintenance of standards controlling its operation; (4) whether the method has been subjected to peer review; and (5) whether it has attracted widespread acceptance in the scientific community. At bottom, the premise of the Daubert test is that expert testimony presented in court should have some indicia of reliability.

Courts have often noted that quasi-judicial actions are not really judicial, and thus many of the procedural protections that apply in judicial proceedings do not apply in quasi-judicial ones. Some courts, however, have recognized that for the substantial evidence standard to be meaningful, courts must use Daubert-like evidentiary standards in quasi-judicial settings to weed out unreliable expert opinions. In Niam v. Ashcroft, the Seventh Circuit reasoned that “‘[j]unk science’ has no more place in administrative proceedings than in judicial ones” and “the spirit of Daubert . . . does apply to administrative proceedings.” The “spirit” of Daubert was implicated despite the acknowledgment that “the federal rules of evidence [on which Daubert is based] do not apply to the federal administrative agencies; so, strictly speaking, neither does Daubert.” In a similar vein, Donahue v. Barnhart found that “[Federal] Rule [of Evidence] 702 [and Daubert do] not apply to disability adjudications . . . . But the idea that experts should use reliable methods does not depend on Rule 702 alone, and it plays a role in the administrative process because every decision must be supported by substantial evidence.”

If the Daubert test were used in administrative proceedings, the conventional TIA approach and ITE’s vehicle-oriented trip generation method would surely have difficulty satisfying it, based on the discussion in Part II. The scientific validity of the ITE method has never been established and is unlikely to be established under the Daubert test. That method has not been systematically tested, nor are error rates in predictions evaluated with much frequency. A few studies have attempted to assess the predictive accuracy of ITE’s trip generation data, but those studies capture a limited range of land uses. To the extent traffic predictions can later be validated by actual traffic patterns, it is only because the predictions themselves lead to interventions that create the traffic patterns they predict. In terms of peer review, scholars with expertise in the area have peer-reviewed the method and found it to be inconsistently reliable. Finally, to the extent the trip and parking generation manuals contain standards, those standards are consistently violated. The ITE manual itself warns practitioners to be careful about the small sample sizes in many of its land use classifications and advises practitioners to correlate the ITE data with locally collected data to validate the applicability of ITE national data for local contexts, but this advice is rarely heeded in practice.

Of the five Daubert factors, the only one that the conventional ITE method seems to clearly meet is that the method is accepted within the scientific community of traffic engineers. However, its acceptance within that community has nothing to do with the scientific validity of the method. As discussed before, there is a circular logic at work: the conventional TIA method is widely used within the scientific community because courts have historically accepted its validity, and courts have accepted its validity because the method is the “industry standard” widely used in the community. For all of these reasons, traffic studies using the ITE method should not be considered “substantial evidence.”

As a caveat, we must stress that courts have not been consistent in their application of the substantial evidence standard. Despite the case law discussed above indicating that substantial evidence means some significant threshold of reliability, in practice the substantial evidence standard is generally quite forgiving. The U.S. Supreme Court recently stated that the threshold for evidentiary sufficiency under the substantial evidence standard is “not high.” Even in contexts where courts have said that heightened scrutiny applies, such as in the case of small-scale rezonings, they have softened that stance in practice. As in the legislative context, courts are aware of the limitations of judicial review and sensitive to the need for political bodies to act without impractical limitations. For example, in one notable case the Oregon Supreme Court found that legislators acting in a quasi-judicial capacity were not required to recuse themselves due to conflicts of interest because that rule would be practically impossible to implement, given that legislators are inevitably enmeshed in political dealings. In the context of traffic studies specifically, courts applying the substantial evidence standard have typically not applied much scrutiny to these studies, usually content with the conclusion that ITE’s trip and parking generation manuals are the “industry standard.”

Nevertheless, there is ample reason to believe that traffic studies employing the conventional, vehicle-oriented ITE method should not pass the substantial evidence test, at least in urban areas. The common judicial reasoning that ITE is the “industry standard” does not withstand much scrutiny. ITE is widely used in the industry not because the ITE method is credible, but because people in the industry know that courts will accept it. And courts in turn accept it because it is widely used in the industry. Neither the courts nor the industry has really evaluated the credibility of the methodology. Furthermore, considering the innovations in practice explored in Part II, it is unclear that ITE even remains the industry standard. For all these reasons, the substantial evidence standard requires courts to scrutinize traffic studies used to support quasi-judicial decisions with far more skepticism.

C. Exactions and the Nexus/Rough Proportionality Test

In a series of cases, the Supreme Court has held that where an agency demands that a developer provide some public benefit as a condition of receiving a permit, commonly called an “exaction,” the demand must have an “essential nexus” with the rationale for the permitting requirement, and the demand must be “roughly proportional” to the impact of the development. To determine rough proportionality, agencies must “quantify” the impacts of the development so the exaction can be tailored accordingly.

Given the often intense political opposition to new developments that may generate significant traffic, municipalities frequently require as a condition of approving a project that developers “mitigate” the traffic generated by the project, such as by paying a fee into a traffic mitigation fund used to build or widen roads to accommodate vehicle traffic. ITE is, naturally, the standard method used to calculate the number of trips generated.

As we have argued, of course, ITE is often unreliable at predicting traffic. So can traffic studies predicated on the ITE method satisfy the Court’s exactions test? Is ITE the type of “quantification” that the Court had in mind?

Probably not. The exactions cases are fairly demanding in terms of the quality of evidence required to support an exaction. Indeed, some commentators have argued, with good reason, that the cases effectively reverse the traditional presumption of legitimacy often accorded to land use decisions and implement a form of heightened scrutiny for exactions. This conclusion finds ample support in Dolan v. City of Tigard, the case that established the “rough proportionality” prong of the exactions test as well as the “quantification” requirement. As it turns out, Dolan itself centered on the adequacy of a traffic study used to determine an exaction—in that case, the dedication of a bicycle and pedestrian path to offset the traffic generated by the development. The Court held that the city’s traffic study insufficiently quantified its findings, and therefore the city could not establish rough proportionality between the traffic impacts of the development and the required dedication. While the Court emphasized that “no precise mathematical calculation is required” to determine rough proportionality, it also found that the traffic study was inadequate because it merely found that the bicycle and pedestrian path could alleviate some traffic, not that it would or was likely to do so. This standard is fairly rigorous, a far cry from the “substantial evidence” standard that simply requires some credible evidence. As relevant here, it is unlikely that a traffic study premised on the ITE method could satisfy the Dolan standard. The ITE method is, at best, capable of predicting whether a development could generate a certain quantum of traffic; given all its flaws, the ITE method certainly cannot predict whether a development will or is likely to generate any specific traffic threshold. Therefore, it does not appear to satisfy the Supreme Court’s rough proportionality test.

As an important caveat, it’s not entirely clear how meaningful the nexus/proportionality test really is in practice. There’s little empirical evidence on this point, but it appears that agencies have mostly treated quantification as a box-checking exercise, with little concern about what method is used to quantify project impacts. Their approach is reminiscent of Donald Shoup’s classic critique of traffic studies, in which he makes an important distinction between precision and accuracy. Shoup argues that transportation planners often use very precise numbers to provide what are essentially extremely rough (and often wildly inaccurate) estimates. For example, the ITE’s trip-generation manual says that a fast-food restaurant generates precisely 632.125 trips per day for every 1,000 square feet of floor space. According to Shoup, the precision gives the impression of accuracy, but this impression is misleading because the data shows no relationship at all between floor space and the number of trips generated. Precision becomes a substitute for accuracy that deters scrutiny of the trip-generation methodology. Likewise, under the nexus/proportionality test, “quantification” has become an end in itself, and whether the quantification is the result of a reliable process seems largely irrelevant.
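Shoup’s point about spurious precision can be made concrete with a back-of-the-envelope sketch. The 632.125 rate is the figure quoted above; the 3,000-square-foot restaurant is a hypothetical illustration, not an example from the manual:

```python
# ITE-style trip estimate: a national average rate multiplied by floor area.
ITE_FAST_FOOD_RATE = 632.125  # trips/day per 1,000 sq ft (the rate Shoup cites)

def ite_trip_estimate(floor_area_sqft: float) -> float:
    """Mechanically apply the national average rate to a site's floor area."""
    return ITE_FAST_FOOD_RATE * (floor_area_sqft / 1000)

# A hypothetical 3,000 sq ft restaurant "generates" exactly 1,896.375 trips
# per day: three decimal places of precision, derived from data that shows
# no real relationship between floor area and trips.
estimate = ite_trip_estimate(3000)
```

The arithmetic is trivial by design: the three-decimal output inherits all of its apparent rigor from the input rate, which is itself a rough national average.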

Anecdotally, the exactions doctrine appears to have done little to change the bargaining that occurs between developers and municipalities. A comprehensive study by Tim Mulvaney reviewing the state of the exactions jurisprudence since the Supreme Court’s 2013 decision in Koontz v. St. Johns River Water Management District reported that the case has had a relatively modest impact and been interpreted quite narrowly by the lower courts. On reflection, it makes sense that the doctrine has not really affected the relationship between developers and municipalities. Developers are often repeat players in local politics and are reluctant to spoil their relationships with municipal officials by suing them. Certainty is far more important for developers than vindicating their constitutional rights, so they can live with illegal exactions if the ground rules are consistently applied. For that reason, cities are confident they won’t get sued and act with impunity.

Nevertheless, and as discussed above in Section III.A., exactions will likely face more judicial scrutiny in the future. Though traditionally many courts have declined to apply the Nollan/Dolan test to broadly applicable legislative exactions, in the case of Sheetz v. County of El Dorado, California, the Supreme Court unanimously agreed that the Nollan/Dolan test does apply to legislative exactions. While the Court suggested that the test may apply somewhat more liberally in the legislative context, it did not elaborate on what that could mean. Sheetz thus sends a strong signal to the lower courts that the Supreme Court expects exactions to be closely scrutinized regardless of the context. And it is hardly a coincidence that Sheetz, like Dolan before it, revolves around a traffic mitigation measure calculated using dubious traffic predictions. Agencies cannot reasonably expect the courts to ignore this problem forever.

D. Special Benefit/Competent Evidence

A final context in which traffic studies and the accompanying ITE method are often used is in calculating certain types of municipal fees such as the special assessment or the Transportation Utility Fee (TUF). Special assessments are a venerable form of municipal finance that have been used since before the Civil War to finance roads and other infrastructure. Instead of paying for municipal services or infrastructure from general tax funds, municipalities charge landowners for the improvement based on how much each landowner benefits. The TUF, a more recent innovation, charges landowners an ongoing fee for the use of roads in the same way cities charge for water or sewer usage. Both the special assessment and the TUF often use the ITE trip generation method to calculate the fees users are to be charged. If a particular land use is estimated to generate a certain number of trips, the owner of that site can then be charged the appropriate fee based on that estimate.
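The fee mechanics described above can be sketched as follows. Note that the trip rates and the per-trip charge in this sketch are hypothetical, chosen only to illustrate the structure of a trip-based TUF, not drawn from any actual ordinance or from the ITE manual:

```python
# Sketch of a Transportation Utility Fee derived from an ITE-style trip
# estimate. All numbers are illustrative assumptions, not real rates.
TRIP_RATES = {            # trips/day per 1,000 sq ft (hypothetical values)
    "office": 9.74,
    "retail": 37.75,
}
FEE_PER_TRIP = 0.50       # dollars per estimated daily trip (hypothetical)

def monthly_tuf(land_use: str, floor_area_sqft: float) -> float:
    """Charge a landowner in proportion to their estimated road usage."""
    daily_trips = TRIP_RATES[land_use] * (floor_area_sqft / 1000)
    return round(daily_trips * FEE_PER_TRIP * 30, 2)  # 30-day billing month
```

Because the fee is strictly proportional to the trip estimate, any bias in the underlying trip-generation rate flows directly into the dollar amount each landowner is charged.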

As a practical matter, special assessments and TUFs are popular because they are conceptualized as a form of user fee rather than a general tax, and thereby allow agencies to circumvent often strict limitations on raising taxes. For that reason, special assessments and TUFs are permitted only where the assessed landowner receives some special benefit above and beyond the benefit conferred on the general public.

The use of trip generation as a basis for special assessments and TUFs is designed to increase the likelihood that courts will perceive them as user fees rather than general taxes. The idea is that each landowner benefits in proportion to how much they will use the roads, and therefore each landowner can be charged an appropriate special assessment based on their anticipated volume of road usage. Of course, this logic only works if the method used to predict the volume of road usage is actually accurate. As we have seen, however, ITE is not reliably accurate. For that reason, ITE is insufficient to establish the “benefit” landowners receive from municipal improvements, and cannot be used to sustain special assessments or TUFs.

To be sure, the courts have made clear that special assessments do not have to be exact and are necessarily “a matter of forecast and estimate.” Moreover, special assessments are usually presumed to result in a special benefit unless a challenger can present some credible evidence that the assessment does not do so. At that point, the burden shifts to the city to prove by “competent evidence” or some similar standard that the assessment does indeed confer a special benefit.

Nevertheless, as we have detailed, a wealth of credible evidence exists that the ITE method is not consistently accurate, and it would be very difficult for a city to present competent evidence proving otherwise. Indeed, at least one court has explicitly determined that the ITE is inadequate to support a special assessment, pointing out the flawed methodology underlying it. The Washington Supreme Court in Bellevue Plaza v. City of Bellevue held that a special assessment determined using the ITE trip generation rates was invalid because the ITE manual “warns that extreme care must be taken in use of the data therein. The publication states that local data should be collected when using the national data, but no local data was taken or presented to the City Council.”

As we have discussed previously, although the ITE manual itself says that its trip generation rates should be used cautiously, there is little evidence that practitioners actually do use them cautiously. Analysts of traffic studies rarely articulate what steps they have taken to ensure the accuracy of the trip generation rate under local conditions—in many cases, they mechanically apply the national rates to the proposed project without much adjustment. Bellevue Plaza makes clear that such traffic studies do not satisfy the judicial standard for special assessments.

IV. Addressing Objections

Based on the preceding discussion, we reach two conclusions: (1) existing precedent requires courts to scrutinize the methodology used to make traffic predictions rather than blithely deferring to ITE or any other method on the grounds that the method is the “industry standard,” and (2) the conventional ITE method is, unless modified to account for local variations, unlikely to withstand such scrutiny under any of the standards of review we have discussed.

We anticipate that a few objections may be raised at this point regarding some of the practical implications of our conclusions. We address each in turn.

A. Generalist Courts Cannot Reasonably be Expected to Evaluate Expert Testimony with This Level of Granularity

The first objection is that in expecting courts to scrutinize the methodology of traffic studies, we are asking too much of them. Courts are not experts in how to evaluate traffic studies, and they generally play a passive role in reviewing the small percentage of land use disputes that come before them. Rather, concerns about the validity of traffic studies should be addressed to the planners and engineers who develop and implement these studies.

We have two principal responses to this objection. First, all we are asking courts to do is to apply the standards of review they themselves have articulated. Courts have insisted that expert testimony supporting land use decisions should be reliable, and courts have expressed particular skepticism about the validity of stated traffic concerns. They have done so for good reason. The courts are aware that the land use power is subject to abuse, that traffic concerns are often based on empirically unsupported fears about neighborhood change, and that such fears lead to projects being needlessly shrunk or rejected.

Undoubtedly, we are asking courts to do something they, for the most part, have been unwilling to do—scrutinize the method underlying traffic studies and trip generation. But this unwillingness is largely because to date, courts have been given little reason to doubt the validity of the “industry standard” method. This Article should give them reason to doubt it. As we have discussed, the conventional trip generation method is industry standard not because it has been proven to be accurate, but because the “industry” knows courts will accept it—which in a circular fashion then leads courts to continue accepting it without any real indicia of reliability. Furthermore, as we have discussed, practitioners have become so dissatisfied with the conventional method’s unreliability that they have developed an array of workarounds to the method, raising the question of whether the method even is the industry standard any longer.

This point leads to our second response. While ideally reforms to the conventional method would come from the planners and engineers with expertise in the area (some of whom have begun doing so), the reality is that these experts act in the shadow of the law, meaning that planners and engineers adhere to practices that they believe will pass judicial scrutiny. In the context of traffic studies and trip generation, practitioners are reluctant to innovate because they know the ITE method is accepted by courts as the industry standard, whereas how courts will treat more innovative methods is unclear. For that reason, courts may have to take the first steps to break this vicious cycle.

In fact, it may not even be necessary for courts to act. If practitioners have reason to believe that the conventional method is on uncertain legal footing and could be vulnerable to legal challenges, that may be enough to spur them to change, even if such legal challenges never actually materialize. In other words, our conclusions in this very Article may induce practitioners to innovate.

Moreover, legislatures and agencies can do several things to give better guidance to the courts regarding the validity of traffic studies, which will also hopefully push planners to innovate beyond conventional traffic impact analysis (TIA) practices. Some efforts are already underway. For example, the Texas Department of Transportation sponsors locally collected trip generation data, the Washington, D.C. Department of Transportation waives the requirement of preparing a TIA for projects within a short distance of transit, and the State of California’s new environmental guidelines no longer recognize traffic congestion as a significant environmental impact for purposes of environmental review. The guidelines explicitly acknowledge that such impacts are often overstated because many vehicle trips are not actually “generated” by new development. Further efforts to improve the TIA process should be encouraged. If regulators get the message that the conventional ITE method may not stand up in court, they will have a strong incentive to invest in efforts to predict traffic impacts more accurately.

B. Applying Heightened Scrutiny to the Conventional Method would Undermine the Ability of Agencies to Finance Needed Services and Infrastructure

Local governments today operate in an environment of dramatically reduced tax revenue, requiring them to rely increasingly on non-tax forms of revenue, such as exactions or user fees, to finance roads, utilities, schools, and other municipal services. As discussed in Sections III.C and D, however, what legally distinguishes these fees from taxes under judicial precedent and voter-imposed rules in some states is a quantified connection between the assessed charge and a specific impact of the activity being assessed or a specific benefit the assessed person or property will receive. This quantification often requires predictions of future impacts or benefits—how many new kids will be added to the schools, how many low-wage jobs will new market-rate housing create, how much property values will increase because of the installation of street lights, and so on.

The problem is that there really isn’t much of a reliable way of predicting any of these sorts of things. We have already exhaustively detailed the inadequacies of traffic studies, but housing “nexus” studies, school impact studies, and so on are often just as unscientific and unreliable. Special assessments are typically based on extraordinarily sketchy benchmarks of benefit, such as the “front-foot method,” which assumes that landowners abutting a road to be improved benefit in direct proportion to the linear footage of their land abutting the street front, an assumption that is completely nonsensical. It has long been known that predicting specific impacts and benefits from government activity is extraordinarily difficult. More than two centuries ago, the economist Adam Smith advocated for a progressive income tax precisely because he thought it impossible to determine how much specific government actions “benefit” particular individuals, and he figured it was easier to just assume that rich people benefit from the existence of government more than poor people do.

Nevertheless, local governments have found general taxation to be extraordinarily unpopular, and so they have been forced to resort to non-tax revenue streams, which demand some rough calculation of specific benefits or impacts. For that reason, they engage in something of a “noble lie” that these various methods actually predict impacts and benefits because otherwise they would be unable to keep the lights on. Our argument that courts should more rigorously scrutinize traffic studies could, if accepted, completely blow up this noble lie and drive local governments to the poorhouse.

Our first response to this objection is familiar. We are not the ones demanding more of traffic studies and trip generation; the courts themselves have done so. Indeed, the Supreme Court’s Dolan decision is far more demanding than we think is necessary, holding that a traffic study could not support an exaction because it merely showed that the project could create more traffic, not that it would. It is unreasonable to expect any study to predict impacts with that high a degree of certainty. It should be sufficient for a method to produce some consistent evidence of reliability—and even that may be too high a standard for the conventional ITE method.

Nevertheless, the Supreme Court has not indicated that it will be backing away from its exactions jurisprudence any time soon. To the contrary, as discussed previously, the Court has chosen to constrain municipal discretion over exactions even further in Sheetz v. County of El Dorado, California, holding that the Nollan/Dolan test applies to all exactions regardless of whether they are characterized as legislative or something else.

Second, and more fundamentally, there is a reason why local governments are required to prove some sort of connection between an activity and the fee assessed. Local governments are beholden to local residents, especially local homeowners, and therefore have a very strong incentive to shift the burden of operating government from politically powerful residents to developers—which burden is ultimately passed on to consumers in the form of higher prices. The thrust of the precedent requiring some kind of connection between the development and its impact is to ensure that developers are only being asked to pay the actual costs for which they are responsible, not benefits to the general public that the municipality wants to dump on consumers. If cities can use specious traffic and nexus studies to make developers pay for development “impacts” that are not actually impacts of the development, then they can successfully shift the cost of public services and infrastructure to consumers in circumvention of the law.

Admittedly, an alternative view exists of the role of exactions—albeit a view that the Supreme Court has rejected. On this view, the purpose of exactions is not to mitigate the impacts of a development, but to enable the public to capture the publicly-created value of the development. This theory of exactions is rooted in the work of the economist Henry George, who argued that increases in land value are not attributable to the efforts of landowners, but are publicly created. For example, if a piece of land in an urban downtown becomes desirable for a restaurant or a shop, that occurs because public investment in the downtown area has made it a lively place for people to be, not because the landowner did anything special to increase its value. Therefore, when the landowner seeks to create a new commercial development on the parcel, the public fairly may ask the landowner to make a contribution to improve public roads, or schools, or affordable housing, even if no nexus exists between those things and the impacts of the development, because the developer stands to profit from the value the public has created, and therefore the public is entitled to capture the publicly-created value.

Setting aside the fact that the Supreme Court has completely rejected this view of exactions in favor of the nexus/rough proportionality test, the Henry George/value capture theory of exactions still suffers from a fundamental problem. Agencies can only demand exactions, and thus “capture value,” when a developer requests permission to build something. Those who leave their property unimproved pay nothing (in contravention of George’s own prescription that an efficient tax should tax idleness and not punish productivity). For example, when homeowners’ equity increases by 7% year over year, as it has in the United States over the past year, that equity gain is not at all attributable to the homeowners’ own efforts, but rather to public investments in schools and infrastructure. Nevertheless, that publicly-created value is rarely “captured” by the public because homeowners generally pay very little in property taxes. Indeed, the very reason exactions have become so popular is to reduce the burden of general taxation on homeowners by pushing that burden onto developers. Hence, the Henry George theory does not solve the basic problem that developers, and hence consumers of new housing, are being asked to pay to finance public services and infrastructure that, in fairness, the public as a whole should be required to pay.

None of what has been said changes the fact that heightened scrutiny—indeed, any scrutiny—of traffic studies will indeed make it harder for cities to raise revenue without increasing taxes. But there is no such thing as a free lunch, and it may finally be time for homeowners who have free-ridden on the contributions of developers and consumers for years to face the difficult tradeoffs involved in financing local government.

C. Applying Heightened Scrutiny to the Conventional Method Would Have the Unintended Effect of Making Project Approvals More Difficult

Arguably, if cities are unable to use the ITE method, it will lead to cities denying projects more frequently. Assuming that courts were to follow our argument to its logical conclusion and invalidate the ITE method, cities would likely have a hard time finding any other method to replace it that would survive heightened judicial scrutiny. To date, no method for predicting traffic impacts has been proven to be reliable. Therefore, cities might be unable to predict traffic impacts at all. If that were to happen, cities could respond by simply denying projects rather than dealing with the uncertainty of whether the project would have a significant traffic impact. Furthermore, if cities are unable to assess exactions based on traffic impacts for the reasons discussed in the previous section, cities would be even more inclined to reject projects on the basis that they would not be able to mitigate traffic impacts.

These concerns may be overstated, however. As discussed in Section III. A, courts have frequently held that cities cannot deny projects based on fears of traffic impacts absent some empirical evidence to support such fears, and courts have typically looked skeptically at project denials rooted in traffic concerns to ensure they are adequately supported. The courts are thus amply equipped to monitor whether cities are denying projects due to ambiguously founded fears about traffic. Undoubtedly, if the ITE method were invalidated and cities responded by denying projects more frequently, courts would have to take a more active role in scrutinizing local land use decisions than they are accustomed to, but in our view that potential burden on the courts is outweighed by the need to address the pernicious effects of the conventional ITE method on overestimating traffic demand and raising housing costs.

A related practical concern about unwinding the conventional ITE method is that, in many states, cities are required to report the environmental impacts of projects, which generally includes traffic impacts. If no reliable way exists to predict traffic impacts, then projects could be rejected, or challenged in court, on the grounds that traffic impacts have not been accurately reported. This concern may also be overstated, however. Normally, a city only has to do a full environmental review, called an “Environmental Impact Report,” if a “fair argument” exists that the project has a significant environmental impact. If there is no reliable method for measuring traffic impacts, then neighbors will not be able to meet the fair argument standard.

This answer does raise some thorny additional questions about how to manage the transition from a world with the ITE method to one without it. Suppose a city found, using the ITE method, that a project would not have a significant traffic impact, and it approved the project on that basis, but then the ITE method was overturned judicially? Would the approval have to be overturned as well? We think not. As already discussed, the conventional ITE method is likely to significantly over-predict traffic in most contexts. Therefore, absent empirical data correlating traffic counts with predictions, courts should normally assume that trips will be substantially lower than what ITE predicts. For that reason, if a project is approved based on a finding, using the conventional ITE method, that the project will not have a significant traffic impact, or that adopted mitigation measures will reduce the impact to an acceptable level, the approval should be upheld because it is unlikely the actual traffic will be any worse than what ITE predicts. This process is admittedly a workaround, but as discussed previously, this type of workaround is standard practice among practitioners who understand ITE’s limitations.

In any event, if courts take the position we advocate for and it becomes more difficult for agencies to rely on dubious traffic predictions—which, again, we think is a good thing—that will hopefully spur legislatures and administrative agencies to develop new methods and workarounds for ITE, such as those addressed in the previous section.

A possible model of legislative reform is California’s Housing Accountability Act, which prohibits local governments from disapproving eligible housing projects on health and safety grounds unless the local government can prove that the project violates a written, objective, quantified health and safety standard, and that the project cannot be modified to satisfy the standard. A city may not reject a project simply because it thinks the project will generate “too much” traffic. Even if the city could quantify the project’s traffic impacts, it would have to demonstrate that those impacts cause the project to violate an objective standard that is necessary for health and safety and cannot reasonably be mitigated. A legislative standard such as this one makes it very difficult for cities to deny projects based on ambiguous traffic fears.

D. Given their Attachment to the Status Quo, It’s Unlikely that Any Developer Will Sue to Invalidate the ITE Method

Courts in the United States do not make decisions unless they are presented with a live controversy. For these purposes, that means the only way the ITE method will be invalidated by the courts is if someone sues to have it invalidated. That someone is most likely to be a developer, as developers are the ones who are usually financially injured by studies that over-predict traffic, whether through projects being delayed, denied, or shrunk, or through massive “mitigation” payments being demanded. Also, developers are the most likely to have the resources and motivation to bring such a lawsuit, especially to the point of getting a published appellate decision that would create precedent.

However, it might be thought unlikely that developers would bring suit to challenge the conventional ITE method. Developers are very attached to the status quo. As Combs’s study reports, efforts by planning staffers to move away from ITE have actually met staunch resistance from developers, who reasonably prefer an arbitrary and burdensome process they know to one they don’t know. Furthermore, developers tend to have relationships with cities that they don’t want to burn by litigating against them. Partially for that reason, the exactions jurisprudence discussed above has not been particularly effective.

There’s reason to be optimistic, however. Developers are reluctant to bring litigation when the odds of success are unsure, and to date there has been little indication that the courts would second-guess the ITE method. But this Article has given courts ample reason to second-guess that method, so perhaps it will spur developers to take their chances. They have good reason to do so: developers can save substantial development costs if they move away from a method that over-estimates traffic. Furthermore, if the fear that ITE may be invalidated pushes administrative agencies to develop new methods of estimating traffic, developers will have the tools to push agencies to use those new methods, at the risk of potential litigation if they do not.

What’s more, developers are becoming bolder in confronting municipalities, and it only takes one with the nerve to challenge ITE to set a precedent for the entire industry. Over the past generation, bigger development companies had a monopoly of sorts on the development process because of their superior ability to navigate byzantine approval processes. But in the course of the last decade, as many states have begun reforming restrictive land use regulations to facilitate housing development, smaller, more nimble developers have emerged to challenge that monopoly. These newer developers have shown a willingness to be somewhat more adversarial with cities. Hence, it may be that this new breed of developers will be more willing to challenge the flawed ITE method.

Once again, California’s Housing Accountability Act provides something of a model here. Recognizing that developers in a “repeat player” situation with municipalities are often unwilling to burn relationships by filing suit to enforce their rights, the HAA permits any housing organization to file suit to challenge the disapproval of an eligible housing project, and to recover attorney fees if they prevail. Legislatures could likewise consider permitting third party housing advocacy groups to challenge excessive fees or exactions imposed on an approved project, thereby undermining the repeat player dynamic that deters developers from doing so.

V. Conclusion

It has long been an open secret among planners and traffic engineers that the conventional method of predicting traffic impacts is inconsistently accurate and biased toward over-estimating traffic demand, leading to a host of undesirable consequences for our cities. While reform is germinating, inertia has prevented any large-scale movement away from the conventional method. Meanwhile, the judicial system has rarely been presented with any reason to second-guess the conventional method, content that the method is the “industry standard.” As this Article has shown, traffic studies and trip generation demand more scrutiny than the courts have typically applied, and the conventional method is unlikely to survive such scrutiny. Agencies charged with transportation planning should make it a priority to invest in new methods that more accurately predict traffic impacts.
