

Public Contract Law Journal Vol. 51, No. 1

Data Scarcity in Bid Protests: Problems and Proposed Solutions

Will Dawson

Summary

  • Provides an overview of bid protests and describes problems created by a lack of protest data
  • Discusses bid protest misperceptions and the RAND Report analyzing the bid protest system
  • Describes research findings relating to bid protests
  • Proposes that the GAO report specific data annually and analyzes potential arguments against the proposal


Abstract

The bid protest process is a vital part of a well-functioning government procurement program. While the United States’ federal bid protest system has supporters and detractors, the government does not release enough information to publicly resolve foundational questions about how well the process is working. The few studies which have been carried out have yielded some interesting insights, but they are not sufficiently detailed to inform meaningful change in the long term. The author has written a program that enabled the computer-aided analysis of over 7,000 bid protests at the GAO. This Note seeks to articulate the data uncovered by the research and proposes a data collection program that would greatly expand the information currently available. The Note presents a proposed data reporting sheet, designed by the author, which would generate the data necessary, year over year, to effectively monitor the bid protest system’s health with a minimal amount of added work for government personnel through a program that would automate the majority of the necessary collection. The Note concludes with a discussion of the costs and benefits of generating so much data for the government, attorneys, and contractors.

“It was once observed that every man is different, but all husbands are alike. The same could likely be said of bid protests.”

—W. H. Taft (almost)

I. Introduction

The bid protest process provides a low-cost monitoring mechanism for federal government procurement, allows the private sector to confront perceived issues with the government’s procurement process, and helps assure the general public that their tax dollars are well spent.

Like any complex system, the bid protest process is imperfect, but, to meaningfully improve it, the government must release more data each year to allow for intelligent improvement through targeted policy reform. While bid protests may be brought in multiple fora, the Government Accountability Office (GAO) is the most popular and data-friendly protest forum; therefore, it is the primary focus of this Note. However, the recommendations made herein are applicable to all three federal bid protest forums (the GAO, the Court of Federal Claims (COFC), and agencies).

Currently, arguments both for and against bid protests rely heavily on anecdotes and personal perceptions as opposed to hard data. Federal agencies release no information about the protests that they receive. Conversely, the GAO is relatively transparent with raw information (such as releasing merits decisions), but discloses only nine datapoints in its Annual Reports to Congress. That data provides some information, but fails to provide insight into many important facets of the bid protest process, thereby rendering analysis of the system’s efficiency either constrained and heavily caveated, or broad but inconclusive.

Without more comprehensive data, trends cannot be analyzed; if trends cannot be analyzed, problems cannot be identified, and the effect of implemented policies cannot be evaluated. This Note will articulate how critical it is for the public to have more data available for analysis, by focusing on three interrelated subjects: (1) the fallacies of trying to identify problems, craft policies, or measure their impact, without more data; (2) the author’s efforts to extrapolate data from currently available materials; and (3) a proposed data-reporting process that would aggregate and analyze data from the government, via a computer program, to facilitate the publication of more detailed raw datasets. As currently envisioned, this dataset would provide over twenty datapoints on each protest filed at GAO and extrapolate additional metadata, which would exponentially improve the analytical potential for legal practitioners, policymakers, and academics.

II. An Overview of Bid Protests

This section will present a broad explanation of the bid protest process to better frame the problems that this Note seeks to address. Any bid protest begins with a solicitation: when the federal government determines that the private sector can provide goods or services more efficiently than the government, the government solicits bids from private companies that compete with each other to provide those goods and services. If a contractor believes that malfeasance, error, or otherwise prohibited behavior occurred in the course of a procurement, the contractor can file a bid protest. Following a successful bid protest, an agency may undertake a range of remedies, from correcting the issue and moving forward with the contract otherwise unmodified to restarting the competition from scratch. Bid protests may be filed at three different protest fora: the agency that issued the contract, the GAO, or the COFC. The GAO is, by a large margin, the most popular protest forum. Agencies release no protest data, and the data released by the GAO and the COFC is not uniform enough to meaningfully compare the statistics that they individually make available. Given the three different protest fora, meaningfully detecting and addressing problems with the overall bid protest process requires at least comparable, if not uniform, datasets from each forum.

Interested parties perceive bid protests as a double-edged sword. Some feel that agency-level protests are unfairly biased and that the GAO entertains “weak” bid protests, wasting taxpayer time and money. Others feel that the bid protest system functions quite well and is important because it allows disappointed offerors to serve as “private attorneys general” for the procurement process at no charge to the government. Currently, not enough hard data exists to conclusively resolve this debate, leading all involved parties to rely on their personal impressions, rather than empirical data, when evaluating the bid protest system.

III. Problems Created by the Lack of Protest Data

This section will address the many problems created by the small amount of protest data made publicly available by the GAO. The lack of reliable data on bid protests impairs procurement personnel, practitioners, academics, and legislators alike, who definitionally must carry out their work without the robust and concrete data necessary to improve the protest system over time.

A. The GAO’s Annual Reports to Congress

The GAO publishes Annual Reports to Congress summarizing the past year’s bid protests, though the nine datapoints that they contain are not particularly informative for assessing the bid protest system’s health because they are insufficient for in-depth analysis. For example, the Effectiveness Rate (the percentage of protests resolved in the protester’s favor) is a consolidation of raw data, rather than a disclosure of unique details. Dismissals are likewise reported as a consolidated figure without any categorization by underlying reasoning. If a protester reads an agency report, realizes the folly of its protest, and withdraws it, for example, the GAO records the action as a “dismissal,” and that figure is commingled with all of the protests that the GAO itself dismisses. The GAO’s Annual Reports to Congress report these two outcomes as a single figure; however, they are two causally distinct protest outcomes, and separate reforms are needed to meaningfully address each. These are only some examples of the inconsistencies and gaps in the currently published government data.

While the GAO lists the top five grounds for sustained protests each year, and provides a sample specific protest for each ground, the GAO fails to elaborate on how these categories are broadly defined and fails to provide context for the stated figures. It is unclear what percentage of overall meritorious decisions those five issues represent. What protest grounds are so rare as to be a non-issue? More data is needed to answer these basic questions.

B. Complexity of the GAO Report

The GAO’s Annual Report is more complex than it first appears and contains traps for the unwary. While the number of Agency Voluntary Corrective Actions (AVCAs) is not listed, the Annual Report states that the Effectiveness Rate combines the Sustain Rate and the AVCA Rate. It is tempting to subtract the Sustain Rate from the Effectiveness Rate to arrive at the AVCA Rate, but that approach would yield a grossly incorrect figure because the two percentages are derived from significantly different denominators. To accurately calculate the AVCA Rate, the Number of Sustains must be divided by the number of Cases Closed, and that quotient subtracted from the Effectiveness Rate. Using this method of calculation demonstrates that, in 2019, eighty-nine percent of effective bid protests were resolved through AVCAs (though, as discussed above, AVCAs, as currently tabulated, capture voluntary withdrawals as well). This is a single example of the potential pitfalls that current bid protest reporting creates, and it reveals the need for more raw data, because processed data can be misleading.
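To make the denominator trap concrete, the arithmetic can be sketched in a few lines of Python. The figures below are illustrative round numbers, not the GAO’s actual FY 2019 statistics:

```python
# Illustrative (hypothetical) figures loosely modeled on a GAO Annual Report.
cases_closed = 2200      # all protests closed during the fiscal year
merits_decisions = 600   # protests decided on the merits
sustains = 80            # sustained merits decisions
effectiveness_rate = 0.44  # reported directly by the GAO

# The Sustain Rate's denominator is merits decisions, not cases closed.
sustain_rate = sustains / merits_decisions

# Naive (wrong): subtracting rates built on mismatched denominators.
naive_avca_rate = effectiveness_rate - sustain_rate

# Correct: re-express sustains over the Effectiveness Rate's denominator
# (cases closed) before subtracting.
avca_rate = effectiveness_rate - sustains / cases_closed

# Share of effective protests resolved through AVCAs (incl. withdrawals).
avca_share = avca_rate / effectiveness_rate
```

With these hypothetical inputs the naive subtraction understates the AVCA Rate by roughly ten percentage points, which is the pitfall described above.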

C. The GAO Effectiveness Rate

The GAO’s Annual Reports to Congress reveal that the GAO’s Effectiveness Rate remains relatively stable year over year. This consistency suggests that there are likely recurring problems that are not being addressed in the bid protest system. Data that could isolate and articulate what those recurrent problems are would likely have a large, and beneficial, impact on reducing the number of bid protests year over year, by enabling targeted trainings and policies for government procurement personnel. Further, detailed data could also help identify those agencies and departments who saw few protests and help promulgate their approaches to procurement as naturally emergent best practices.

As Daniel Ramish’s 2018 Note “Midlife Crisis: An Assessment of New and Proposed Changes to the Government Accountability Office Bid Protest Function” lays out, policy choices inspired by perception instead of data, as exemplified by Section 827 of the 2018 National Defense Authorization Act (NDAA) (Section 827), are not effective. Section 827 was intended to deter frivolous bid protests by charging losing protesters the cost of the protest, in response to an unfounded perception that the procurement process was severely encumbered by frivolous protests. Ramish also found that the modern bid protest system has made government procurement a far more competitive marketplace than the U.S. economy overall, suggesting that the system is functioning well in at least some significant ways. However, Ramish argues that the “loser pays” provision of Section 827 tried, but failed, to effectively deter frivolous protests. Ramish also notes that Section 827 fails to explain how the government would calculate the costs incurred in responding to a protest. This is a problem because the fine is assessed based on the costs the government incurs while resolving the protest, but those costs are not defined in detail. Further, merely defining, let alone specifically accounting for, the “full cost” of anything in an organization as complex as the federal government is an extremely nebulous and challenging objective. The passage of Section 827, and its revocation two years later in Section 886 of the 2021 NDAA (Section 886), demonstrate the instability created by policies predicated on little to no substantive data analysis.

D. Impact of Minimal Protest Data on Relevant Stakeholders

This lack of clear and robust data about bid protests has detrimental impacts on almost all stakeholders in the government procurement process. Reducing the number of valid protests has a different set of policy goals (i.e., reducing the incidence rate of government or contractor malfeasance) than reducing the number of invalid protests (i.e., reducing the incidence rates of protests brought for improper reasons). The bid protest process cannot be improved unless the available data differentiates between, and independently analyzes, those two outcomes.

Lack of data harms procurement personnel because a significant amount of time and taxpayer money is wasted trying to avoid protests, despite the fact that no party is conducting systematic analysis of what causes protests to be brought in the first place. This lack of data harms academics and legislators alike because it hampers their ability to identify actual issues with the bid protest process. Consequently, their policy recommendations or legislative efforts definitionally must be imperfectly informed and potentially misguided because they fundamentally lack an empirical foundation. Lack of data harms practitioners because they do not have hard data to help accurately inform their clients as to when their protests will be sustained.

When analyzing the success of a bid protest, the critical question is not whether a protest was sustained. The more important question is, once the protest is sustained, was the protesting contractor ultimately awarded the contract? It does a contractor minimal good to have a protest sustained and the solicitation or contract remedied without recompetition because the original awardee retains the contract and the protester receives no tangible reward.

IV. Asking the Wrong Questions: Misperceptions About the Bid Protest System

A lack of data creates an inexact information environment, making it difficult to articulate problems with the bid protest system and even more difficult to articulate solutions to those problems. This section reviews some examples of ill-conceived questions (either unasked or misguided) that arise due to the lack of hard data in the bid protest system.

A. Question 1: Are There Too Many Bid Protests?

One chronically misunderstood question is whether there are too many bid protests. Any policy that simply drives down the volume of protests filed, without addressing the reasons behind those filings, is fundamentally contrary to the bid protest system’s objectives. The bid protest process is meant to provide a regulatory mechanism at private expense. Fewer bid protests may indicate a healthy procurement system if the reduction is caused by protesters having fewer claims. If the reduction is caused by policies that deter the filing of legitimate protests, however, then those policies would not be fulfilling the system’s objectives. In isolation, the number of bid protests is largely meaningless. To render this statistic more meaningful, robust supporting data, which is presently unavailable, is needed to create a useful taxonomy of bid protests. Only once that foundation is developed can useful analysis be performed. A dataset like the one proposed in this Note could provide the necessary information to empower the government to more accurately apply resources and trainings to cure the deficiencies that are articulated. A program similar to the one proposed here would expedite and standardize this process, minimizing the financial and temporal costs for the government.

B. Question 2: Is the Protest System Too Expensive?

Like the patent system and the False Claims Act’s qui tam provision, bid protests use private self-interest to serve the public good at private expense. The protest process incentivizes private companies to monitor each procurement in which they are involved and bring attention to any issues that arise. The correct question is not “how expensive is the bid protest system?” but instead “how expensive is the bid protest system, compared to the alternative regulatory mechanism which would need to replace it?” The answer can be approximated by looking at other government entities that serve as market regulators. While an entire analysis comparing the costs and benefits of the bid protest system to other federal regulatory systems could (and should) be written, that is not the purpose of this Note. As such, only a brief comparison will be provided here.

The Securities and Exchange Commission (SEC) regulates a $97 trillion securities market with an annual budget of $1.75 billion, effectively spending roughly 0.0018% of the market’s value (a ratio of about 0.000018) to regulate it. In FY 2019, $586.2 billion was spent on government procurements. The comparable cost to regulate the government procurement ‘market’ (applying the SEC’s ratio), therefore, would be approximately $10.58 million. The GAO and the SEC are sufficiently dissimilar that a direct comparison between them would not be meaningful; however, if a Procurement Integrity Commission were hypothetically to replace the bid protest system, it would require a budget large enough to monitor the millions of contracts that the federal government issues each year. More data must be analyzed to make such a comparison, but it seems likely that a government body replacing the current private reporting process would be demonstrably more expensive than the standalone cost of running the adjudicatory bodies currently in place.
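The scaling arithmetic behind this back-of-the-envelope comparison can be verified directly from the figures quoted in the text:

```python
# Figures from the text; all amounts in dollars.
sec_budget = 1.75e9          # SEC annual budget
securities_market = 97e12    # value of the regulated securities market
procurement_spend = 586.2e9  # FY 2019 federal procurement spending

# Fraction of the market's value the SEC spends to regulate it.
regulatory_ratio = sec_budget / securities_market  # ~0.000018

# Applying that ratio to the procurement 'market'.
comparable_cost = regulatory_ratio * procurement_spend  # ~$10.58 million
```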

C. Question 3: How Are Bid Protests Counted?

When compiling statistics, bid protests can be counted in many different ways. This variation leads to very different and contradictory statistics, all of which are technically “true.” As such, the government should release data at the individual protest level to avoid confusion. A 2017 Congressional Research Service report found that the GAO’s Annual Report to Congress was “an accurate reflection of the [GAO’s] work load . . . [although the report] over-represents the number of procurements protested.” Khoury, Walsh, and Ward’s A Data-Driven Look at the GAO Protest System found three distinct ways to count protests, each resulting in a markedly different final figure. The GAO indexes protests through their “B-Number” system. For each solicitation or contract protested, the GAO assigns a six-digit code (e.g., B-456789); if a solicitation or contract receives multiple protests, decimal points are added to that code, representing each distinct protest (e.g., B-456789.1, B-456789.2, etc.). The GAO constructs its reports based on the individual protests received, not the number of solicitations protested. If one structurally deficient solicitation received ten protests, the GAO’s Annual Report documents ten protests, even though all ten protests could have been avoided by remedying a few solicitation defects, and all ten protests are only delaying a single procurement. A more detailed GAO Annual Report would help to delineate between the number of protests filed and the number of solicitations and contracts delayed and otherwise impacted.
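As a sketch of how the B-Number convention allows the same docket to be counted two different ways, consider the following fragment; the B-Numbers themselves are invented for illustration:

```python
from collections import defaultdict

# Hypothetical docket: each entry is one protest, indexed by GAO B-Number.
# A decimal suffix denotes an additional protest of the same solicitation.
protests = ["B-456789.1", "B-456789.2", "B-456789.3", "B-411111", "B-422222.1"]

by_solicitation = defaultdict(list)
for b_number in protests:
    base = b_number.split(".")[0]  # strip the per-protest decimal suffix
    by_solicitation[base].append(b_number)

protest_count = len(protests)              # how the GAO counts: 5 protests
solicitation_count = len(by_solicitation)  # procurements affected: 3
```

Both counts are “true,” but they answer different questions — which is exactly why per-protest raw data is more useful than a single aggregated figure.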

The GAO should report five distinct sub-categories of protest outcomes to properly measure efficacy: (1) Sustained Merits Decisions (SMDs)—protests that were, definitionally, rightly brought before the GAO; (2) Denied Merits Decisions (DMDs)—protests that the GAO felt required adjudication instead of outright dismissal; (3) Agency Voluntary Corrective Actions (AVCAs)—protests that agencies remedied before receiving a recommendation from the GAO; (4) Contractor Voluntary Withdrawals (KVWs)—protests that were dismissed because the contractor voluntarily withdrew its protest after examining the Agency Report; and (5) Deficient Protest Dismissals (DPDs)—protests that were dismissed because they contained procedural errors or were otherwise deemed facially invalid. The GAO currently reports only one of these categories, SMDs, directly, and two others, DMDs and AVCAs, can be extrapolated from the current GAO Annual Reports to Congress. Differentiating the causes for protest dismissal is critical to understanding what those latter two values represent. Publishing the DMD and AVCA figures directly would make the data more accessible to practitioners and contractors. This transparency is especially important because those analyzing bid protest data must differentiate between the various types of protests—as discussed above in Section B of Part III—to avoid calculation errors.

The actual number of protests that “should not” have reached the GAO is the number of AVCAs, KVWs, and DPDs combined. While protests in all three of these categories should not, in theory, be before the GAO, each category should be targeted for reduction by different means. While the GAO does not release total dismissal figures, redirecting AVCAs (which should hypothetically be resolved through agency-level protests) and KVWs (which could be reduced through more comprehensive debriefings) would likely reduce the number of bid protests before the GAO by thirty-four to fifty-three percent. DPDs are the only sub-type of protest that should be considered for punitive deterrence, though there is little evidence to suggest that these are regularly filed in bad faith. The SMDs and DMDs are both properly before the GAO, since they are, definitionally, close enough calls to merit GAO analysis. SMDs and DMDs should be reduced through better Contracting Officer training, which would mitigate instances of protest-worthy behavior rather than penalizing protesters. Any comprehensive reporting system must clearly delineate between these outcomes.
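A reporting pipeline that distinguishes these five outcomes could be sketched as follows; the category codes come from the taxonomy above, while the sample outcome log is hypothetical:

```python
from collections import Counter
from enum import Enum

class Outcome(Enum):
    SMD = "sustained merits decision"
    DMD = "denied merits decision"
    AVCA = "agency voluntary corrective action"
    KVW = "contractor voluntary withdrawal"
    DPD = "deficient protest dismissal"

# Hypothetical outcome log for a year of closed protests.
outcomes = [Outcome.SMD, Outcome.DMD, Outcome.AVCA, Outcome.AVCA,
            Outcome.KVW, Outcome.DPD, Outcome.DMD, Outcome.AVCA]

tally = Counter(outcomes)

# Protests that arguably should not have reached the GAO at all.
misdirected = tally[Outcome.AVCA] + tally[Outcome.KVW] + tally[Outcome.DPD]

# Protests properly before the GAO (close enough calls to merit adjudication).
properly_filed = tally[Outcome.SMD] + tally[Outcome.DMD]
```

Reporting these tallies separately, rather than as one consolidated dismissal figure, is precisely what would let each category be targeted by its own reform.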

V. The RAND Report: What Were Its Strengths and Weaknesses?

This section describes the commission and impact of the RAND Report requested in Section 885 of the 2017 National Defense Authorization Act (2017 NDAA), which sought to definitively answer long-standing questions about the DoD’s bid protest process. The RAND Report was the first comprehensive government-sponsored analysis of the modern bid protest process. The Report dispelled several common criticisms about bid protests and revealed that, from FY 2009 to FY 2016, less than 0.3% of bids were protested each year, and an average of 13.9 protests were filed per billion dollars spent on DoD procurements, demonstrating that bid protests do not represent an unconscionable burden on the procurement system. However, RAND was unable to access sufficient information to fulfill nine of the fourteen congressionally mandated objectives or to secure data on agency protests. RAND also recommended that more data on bid protests be collected and released by all government stakeholders.

A. The Limitations of the RAND Report

The value of the RAND Report was limited in four main ways. First, even with a congressional mandate, RAND could not access enough information on GAO, COFC, and agency protests to completely answer the majority of Congress’s questions. Second, agencies did not make data on their internal bid protest adjudication processes available to RAND. The RAND Report noted the lack of data from all sources, but never claimed any of the forums would be actively averse to publicizing the information, giving the impression that there was no active resistance to the idea of releasing more data. Third, the RAND Report, being a singular report, represents only a fixed moment in time. It becomes a less accurate representation of the bid protest process with each passing year, whereas the annual reports proposed in this Note would be current each year that they are published. Fourth, the RAND Report presented the conclusions of data scientists, not the underlying data they generated. While the RAND Report provided incredibly helpful insight, data is far more valuable than conclusions because data allows independent researchers to answer a wider range of questions over time.

B. Section 827 of the 2018 NDAA

After the RAND Report was commissioned, but before it was published, Congress included Section 827 in the 2018 NDAA, which implemented punitive measures meant to deter frivolous bid protests. When it was later published, the RAND Report revealed that, contrary to Congress’s expectations, the issues that Section 827 addressed—namely frivolous protests encumbering a significant proportion of overall government contracts—were not significantly impairing the procurement process; moreover, Section 827 failed to address the issues that the RAND Report found most significant. In a tacit admission of just how misguided Section 827’s policies were, Section 886 of the 2021 NDAA completely revoked Section 827’s requirements, just two years after Section 827’s passing. In Section 886’s Conference Report, Section 827 was described as “unlikely to result in improvements to the bid protest process, given the small number of bid protests captured by the pilot criteria and lack of cost data.” Troublingly, there is no evidence that, without the RAND Report, the inherent problems with Section 827 would have been realized, or that Section 827 would have been overturned.

C. The Need for Thorough Annual Reporting

While the RAND Report was extremely helpful and set a high-water mark in bid protest analysis, it also revealed a need for more thorough annual reporting. Thorough annual reports would make much of the information that the RAND Report “discovered” common knowledge, permit even more information to be uncovered through independent research, and, most importantly, prevent Congress from crafting misguided and ill-informed legislation, such as Section 827, in the future. The question that remains is: what data should be collected and how would such data be generated in an efficient manner?

VI. Previous Research Efforts

Prior research efforts to generate information about bid protests confirm Professor Ralph Nash’s conclusion that “[the bid protest process] works pretty well.” This section will review research efforts to collect data outside the scope of the GAO’s Annual Reports. It will focus first on non-governmental research efforts by practitioners and academics, then on government research efforts. Both sections reflect the need for more data as a core prerequisite to meaningful improvements.

A. Previous Non-governmental Research Efforts

This section evaluates two data-intensive reviews of the bid protest process. Maser, Subbotin, and Thompson’s The GAO’s Bid-Protest Mechanism: Effectiveness and Fairness in Defense Acquisition? thoroughly compared data from FedBizOpps.gov and FedMine.us. The authors found that the bid protest system was effective overall but that the data needed to make their research findings more robust was not available. Khoury, Walsh, and Ward’s A Data-Driven Look at the GAO Protest System, discussed above, also provided interesting insights into the potential utility and limitations of the current data released in the GAO’s Congressional Reports. Both efforts focused on generating data from disparate government databases, which were not designed to have their data compared or integrated with each other. Because the databases were not meant to be cross-referenced, it took a great deal of time and effort to uncover a fraction of what could be made readily available by the proposed datasheet.

Additionally, Dan Gordon published two studies of GAO protests examining whether sustained protests resulted in an actual contract award afterwards. The studies examined a relatively small number of protests, presumably because these were also manual, labor-intensive reviews. Gordon made many important observations about deficiencies in the current reporting process. The datasheet proposed by this Note specifically targets the deficiencies that Gordon identified in his report. Gordon concluded that companies file protests when there is a genuine belief in their protest’s validity and that willfully frivolous claims are exceedingly rare.

In 2018, after the passage of the 2018 NDAA but before the publishing of the RAND Report, Ralph Nash wrote an article describing Section 818, which pushes for enhanced debriefings, as being “smart,” and Section 827, which implements a punitive “loser pays” rule for some protests, as being “dumb.” The RAND study later affirmed those assertions, albeit in more diplomatic terms.

In Federal Bid Protests: Is the Tail Wagging the Dog?, Professors Hawkins, Yoder, and Gravier made many valuable discoveries and recommendations by surveying Contracting Officers. They found that Contracting Officers did not feel confident in their ability to determine what did and did not cause a protest to be filed, and that this lack of confidence led to greater use of Lowest Price Technically Acceptable (LPTA) contracts, even where that contract type was otherwise undesirable, in order to mitigate the risk of receiving a protest.

Recently, Professor Christopher Yukins recommended that the government improve the agency protest process to provide a more viable and attractive forum for protesters, explaining that agency protests are “by far the leading means of handling vendor complaints internationally.” While Professor Yukins’s work did not deal with novel statistics, his proposal is a good example of a modification to the bid protest process that could be implemented and thoroughly analyzed with the proposed data.

B. Previous Government Research Efforts

The government has made many efforts in recent years to gain more insight into the bid protest process. The RAND Report included several observations and suggestions to improve the bid protest process, but cautioned against sweeping reforms or reduction of the current system’s operational scope. Section 809 of the 2018 NDAA called for the creation of an advisory panel (the 809 Panel) to provide a report on improving acquisition regulations. The 809 Panel Report contained a subsection discussing potential improvements to the bid protest process. The Panel made some interesting findings and recommended both material changes to the debriefing process and improved access to protest forums. The 809 Panel relied heavily on the RAND Report’s findings in its conclusions, which reaffirmed the utility of detailed data, but failed to discuss the drawback that, in the future, the RAND Report would need to be updated or fully repeated to remain useful in later analyses.

Additionally, Section 822 of the 2019 NDAA requested that more data on bid protests be collected and reported to Congress and mandated the creation of a protest data repository that would be available to government personnel. However, Section 822 covered only protests of DoD contracts filed at both the GAO and COFC, and Congress did not request that the underlying data be made publicly available. The current results from the Section 822 reports may have contributed to the inclusion of Section 886 in the 2021 NDAA, repealing Section 827. The Committee Report for Section 886 expressed a great deal of support for an attempt to uncover the data that the RAND Report was unable to find. The Committee Report, however, does not detail how such data would be collected. The author’s solution, proposed below, would efficiently collect a dataset for public release, provide a wide array of information directly, and unify identifiers (such as a contract’s number and status, as well as a contractor’s DUNS number) to empower researchers to cross-link items across datasets.

While laudable in principle, the public research efforts discussed above are less valuable than releasing raw data. If the government focused on releasing more detailed data, market incentives would drive private analysis in order to demonstrate efficacy of representation for attorneys, support for industry-backed policies, and conceptual mastery for academics. But only a finite amount of government research funding can be allocated to investigating the bid protest system. As discussed previously in the context of the RAND Report, the government will ultimately investigate only the topics in which it is most interested. If the government has a monopoly on data, its limited bandwidth to furnish reports will unnecessarily constrict the amount of information released for public consumption. Collecting and releasing raw data is fundamentally faster and cheaper than collecting raw data and then analyzing and publishing reports on it. Releasing raw data, as proposed in this Note, would allow the broader bid protest community to publish research on any facet of the available data they see fit and, more importantly, would enable the publication of research at no cost to the government. This is especially attractive for stakeholders who are motivated to expend resources to publish novel research on sub-topics and thereby establish themselves as unique authorities on specific matters.

VII. Data Generated by the Author and the Potential Future Utility of Such Data

The author has programmatically analyzed over 7,000 of the GAO’s Merits Decisions from bid protests decided between 1990 and 2019 to extract underlying data about those bid protests. The formulaic nature of GAO decisions allowed for reliable data extraction: the detected error rate in the author’s data was below 0.1%. The resulting compilation of data on the bid protests analyzed, provided in Appendices I, II, and III, is an honest presentation of the author’s research. The data cannot be considered definitive, however, until the government releases comparable data based on official records.
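To illustrate the kind of computer-aided review described above, the following is a minimal sketch of pattern-based extraction in Python. It is not the author’s actual program (which appears in Appendix VI: Programming); the patterns, field names, and sample text are illustrative assumptions only.

```python
import re

# Illustrative sketch only: because GAO merits decisions follow a
# formulaic layout, simple patterns can recover structured fields.
# The patterns, field names, and sample text are assumptions; the
# author's actual program appears in Appendix VI.
B_NUMBER = re.compile(r"B-\d{6}(?:\.\d+)?")
DECISION_DATE = re.compile(r"Date:\s+(\w+ \d{1,2}, \d{4})")
OUTCOMES = ("sustained", "denied", "dismissed")

def extract_fields(decision_text: str) -> dict:
    """Pull a few datapoints out of one decision's text."""
    lowered = decision_text.lower()
    outcome = next(
        (o for o in OUTCOMES if f"protest is {o}" in lowered), "unknown"
    )
    b_match = B_NUMBER.search(decision_text)
    date_match = DECISION_DATE.search(decision_text)
    return {
        "b_number": b_match.group(0) if b_match else None,
        "decision_date": date_match.group(1) if date_match else None,
        "outcome": outcome,
    }

sample = """Matter of: Example Corp.  File: B-123456.2
Date: March 3, 2015
...
The protest is denied."""

fields = extract_fields(sample)
```

Because the GAO’s decisions are formulaic, even this crude approach can achieve the low error rates the author reports; the real difficulty lies in the roughly three-quarters of decisions that are never fully published.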

The objective nature of large-scale data analysis may go against a lawyer’s instincts. Legal research traditionally values specificity, whereas large-scale data analysis focuses on macroscopic trends, in the context of which examining specific instances can be misleading. While programmatically reviewing thousands of documents necessarily entails some degree of error, before dismissing the resulting data, a few questions must be answered: First, is the information otherwise accessible? Second, could attorneys plausibly carry out a manual review of the same information? And, finally, would attorneys make more errors from exhaustion and repetition during a manual review than a computer would make from coding and processing errors? While the presented data is imperfect, it provides insights that are not otherwise available. Consequently, readers should appreciate the author’s data for what it is, while being mindful of its drawbacks.

The fundamental problem with private efforts to generate data in the current information environment is that, because less than twenty-five percent of the GAO’s decisions are fully published each year on average, any research based only on those published decisions must be heavily caveated and is inherently incomplete. Private data collection efforts are time-consuming, require some degree of skill to execute, and still interface with only a quarter of the protests actually being filed. It is not worth most professional researchers’, much less practitioners’ and policymakers’, time to carry out such labor-intensive collection when the final product can only reflect twenty-five percent of extant protest decisions. While the author’s prior efforts generate useful data, it is unlikely that professionals, with greater demands on their time, would find duplicating that process worthwhile, because only general conclusions can be drawn from the data. Conversely, the dataset proposed below offers more definitive data on the entire bid protest process and presents the data in a format that facilitates interaction by researchers. The proposed datasheet would expedite and facilitate research while providing more reliable results. Additionally, the information would facilitate the identification of contractors and contracts across databases through unique identifiers, exponentially increasing the data’s utility.

VIII. What Data Should the GAO Publish Year over Year? A Proposed Solution

The GAO’s Annual Report to Congress contains just nine datapoints each year, whereas in 2019 alone, USASpending.gov publicly released information on more than 6.4 million federal contracts, including over 280 cells of data for each contract (roughly 1.8 billion datapoints in total). It is hoped that a “Goldilocks zone” for an improved annual report on bid protests can be found somewhere between those two extremes of 9 and 1.8 billion datapoints.

A. What Information Should Be Published by the GAO?

An improved annual dataset should minimize compilation workload for the government while maximizing the amount of information published in the final report. The author recommends that the government provide an annual report consisting of two documents: an annual summary sheet with annual totals, similar to the current Annual Report to Congress but with more datapoints reflected, and a far more detailed spreadsheet providing metrics for individual protests. The spreadsheet on individual protests should contain the following information: (1) Protester’s name; (2) Protester’s DUNS Number; (3) Protester’s Representation (Pro Se or Legal Counsel); (4) Short Citation (B-Number, Docket Number, or Agency Case Number); (5) Solicitation or Contract Number; (6) Solicitation Issue Date; (7) Protest Filing Date; (8) Protest Decision Date; (9) Ultimate Contract Remediation Date; (10) Originating Agency; (11) Protest Outcome (numerical representation); (12) Modification of Bids upon Recompetition (numerical representation); (13) Ultimate Awardee (numerical representation); (14) Whether the Protester Is a Small Business (and what type of small business they might be); and (15) Procurement Value.
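For illustration only, the fifteen proposed columns could be laid out as a simple spreadsheet schema. The column names below are shorthand for the datapoints listed above, abbreviated here; they are not an official format, and the example row is wholly fictional.

```python
import csv
import io

# Illustrative sketch only: the fifteen proposed columns rendered as
# a CSV header. Names are shorthand for the datapoints in the text;
# coded fields (3, 11-13) would hold numbers per a published legend.
COLUMNS = [
    "protester_name",            # 1
    "duns_number",               # 2
    "representation",            # 3: 0 = pro se, 1 = counsel
    "short_citation",            # 4
    "solicitation_or_k_number",  # 5
    "solicitation_issue_date",   # 6
    "protest_filing_date",       # 7
    "protest_decision_date",     # 8
    "remediation_date",          # 9
    "originating_agency",        # 10
    "protest_outcome",           # 11 (coded)
    "bid_modification",          # 12 (coded)
    "ultimate_awardee",          # 13 (coded)
    "small_business_status",     # 14
    "procurement_value",         # 15
]

buffer = io.StringIO()
writer = csv.DictWriter(buffer, fieldnames=COLUMNS)
writer.writeheader()
# A wholly fictional example row; omitted fields are left blank.
writer.writerow({"protester_name": "Example Corp.", "duns_number": "123456789",
                 "representation": 1, "short_citation": "B-123456.2"})
```

A flat, column-stable format of this kind is what allows both spreadsheet formulas and external researchers’ programs to consume the data without per-protest interpretation.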

B. How Would the Information Be Collected Efficiently?

This dataset is meant to provide meaningful data to policymakers and practitioners without unreasonably burdening government employees with additional work. To that end, of the fifteen fields required, ten could be filled by copying and pasting from other sources (1, 2, 4–9, and 14–15). The rest could likely be collected programmatically as well, but, in the event they could not, three could be filled by typing a single digit (3, 12, and 13), and only two, Originating Agency and Protest Outcome (10 and 11), would potentially need to be manually entered using full words. The data entry process would take approximately six to ten minutes per protest if all the data were manually logged. The author has written a simple program, included in Appendix VI: Programming, to extract information from the GAO’s Merits Decisions and generally available data. Such a program could be modified to extract at least eleven of the datapoints (1–9, 11, and 15) and auto-populate a spreadsheet if more government records were available during compilation. It is likely that extant government materials could automatically provide the information for the remaining four datapoints (10 and 12–14). If the government implemented such programming aids, the actual time-cost to log the statistics on a protest would likely be shorter than one minute. Formulas applied to the spreadsheet could automatically extrapolate an additional eight datapoints for each protest, for a total of twenty-three datapoints per protest. At the end of the year, a series of formulas would be applied to the completed dataset of individual protests, automatically generating eighteen pieces of macro-data that could be added to the nine figures currently reported annually to Congress without requiring additional data collection.
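As one example of how the extrapolated datapoints could be computed mechanically rather than entered by hand, elapsed-time figures derive directly from the date columns. A sketch, with hypothetical field names and fictional example values:

```python
from datetime import date

# Hypothetical illustration: one possible extrapolated datapoint --
# days between protest filing and decision -- derived mechanically
# from two of the collected date columns.
def days_between(start: date, end: date) -> int:
    """Elapsed calendar days between two dates."""
    return (end - start).days

# Fictional example values, not drawn from any real protest.
filing_date = date(2019, 3, 1)
decision_date = date(2019, 5, 30)
pendency_days = days_between(filing_date, decision_date)
```

The same pattern covers any pairwise interval among the four date columns (solicitation issue, filing, decision, and remediation), which is why the eight derivative datapoints add essentially no data-entry cost.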

The datapoints selected for entry were chosen because they are definitive, applicable across all GAO protests, and easy to enter. Because the author’s objective is for these reports to provide as much information as possible, the primary focus in selecting datapoints was capturing characteristics that are definitive, consistently available, and yet also likely to vary across contracts. The extrapolated datapoints for each protest are meant to make trend analysis easier and could be easily modified to provide a variety of information. Including the additional data in the Annual Reports to Congress would help track trends year-over-year instead of protest by protest.

C. What Is the Purpose of the Specific Data Collected?

The purpose of collecting the proposed data falls into three main categories. The first category encompasses Protester Information (1, 2, 3, and 14): data that helps identify the protester, enabling analysts to track them across this dataset and others, like those at USASpending.gov, exponentially multiplying the information that could be aggregated on each protester. The second category encompasses Solicitation Characteristics (6, 9, 10, 12, 13, and 15): underlying traits that can ultimately be used to identify indicators that a solicitation with a certain collection of traits is likely to receive a protest. The third and final category relates to Protest Characteristics (4, 5, 7, 8, and 11), which are important to researchers seeking to understand likely outcomes for various fact patterns and to anticipate the chances of success for a protest that they may wish to file.

The specific purpose of each datapoint is discussed in Appendices III and IV, along with the formulas to extrapolate the eight derivative datapoints and the eighteen pieces of macro-data added to the Annual Report to Congress. There is no single question that the author intends for these datapoints to answer. The goal is to create a robust dataset that can provide new research opportunities and be integrated with other datasets. The proposed dataset focuses on data that could either directly identify a protest in various databases (1, 2, 4, 5, and 10) or would articulate the protest’s impact on the procurement schedule (6–9, 11, 13, and 14). While this section has primarily focused on the GAO, agencies could include their protest data in the agency’s annual reports, opening the door for creation of a single publicly available database of bid protest information.

IX. What Problems Would This Proposed Solution Solve?

A government-published, annual list of all bid protests would provide an authoritative source of information for the government and the private sector to identify problems, analyze how policies could resolve those problems, and determine how implemented policies perform over time. Until more specific data is released, making specific policy recommendations with regard to the bid protest process would be premature. Because the GAO employs conscientiously consistent reasoning across the protest decisions it issues, past years’ statistics would likely be reasonably representative of future outcomes.

A. Benefit of Proposed Solution to Practitioners

For practitioners, the type of data proposed will likely never provide determinative answers. It could, however, be extremely enlightening. What information is valuable to each practitioner will inherently be a matter of what question they are attempting to answer. The proposed data could provide a constellation of small weights to be grafted onto the decision-making process of filing a bid protest. It could indicate the likelihood that an asserted issue will be present in an Agency Report and could help predict whether protest grounds would be sustained or denied in a final decision by the GAO.

B. Benefit to Policy Makers

For policy makers, instead of trying to guess at effective remedies for unsubstantiated concerns, as Congress did in § 827, more data could illuminate exactly where and why protests were occurring. More data would reveal what the quantity of bid protests actually represents, without resort to counterproductive policies, like § 827, that only guess at the real issues in the bid protest process. For researchers and academics, the benefits of more data are as apparent as they are numerous, allowing more questions to be answered, more problems to be discovered, and solutions to be better articulated than is possible with currently available resources.

C. Benefit to Government Agencies and Contracting Officers

Were agencies to provide more thorough post-award debriefings, the proposed dataset could become much more impactful. Combining the proposed dataset with the information made available through improved debriefings would not only better articulate a protester’s chances of having their protest sustained but could further enable them to calculate their chance of securing the final contract award as well. Before filing a protest, a contractor could accurately weigh their anticipated profit against the protest’s cost and make a well-informed decision as to whether or not the protest is worth pursuing.

The author’s data suggests that the vast majority of bid protests are sustained because of the government’s behavior during the procurement process. Agencies definitionally have control over, if not responsibility for, the causes of all protests that are remedied by an AVCA and therefore could resolve the underlying issues through agency protests. Initially, agency-level protests were meant to be inexpensive and to provide expeditious protest resolution. Currently, however, a perception exists that agency protests are biased against the protester, which depresses the intended use of the agency-protest option. A comparison of thorough agency and GAO protest data would demonstrate whether protesters fare equally well at the GAO and at agencies. For agencies whose protest decisions were more frequently overturned, improved data could articulate what needs to change to bring the results into equilibrium. Data indicating that agencies are treating protesters fairly would likely encourage the use of agency protests, which would, in turn, reduce the burden on the GAO of deciding a high number of protests.

The author suspects that agency protests would be shown to function well, given that, as discussed in Part III: Problems Created by the Lack of Protest Data, the majority of positive outcomes for protesters result from agencies taking voluntary corrective action. This suggests that agencies are comfortable admitting fault where there is a clear error with the procurement. If the public data can demonstrate to protesters that agencies are conducting protests fairly, the efficiency of an agency protest will hopefully be seen as an attractive and viable alternative for those protesters who feel that their protest grounds are clearly meritorious. If a given agency or department were found to be frequently overturned at the GAO or COFC, it could improve its internal protest processes by imitating agencies whose decisions were overturned less frequently.

X. Potential Criticisms and Downsides to the Proposal

A. The Drawbacks to Additional Transparency

The government and contractors have disparate goals regarding bid protests: the government wants to be informed about malfeasance, fraud, or errors, whereas contractors are concerned with their chances of winning a contract. There is a legitimate question as to whether more transparency would undermine the ultimate purpose of the bid protest system by making these two objectives more easily severable. With more information, protesters challenging either their exclusion from the competitive range or the bid process after awardee selection may stop bringing protests that, when sustained, result in solicitation or contract modifications rather than recompetition. This result would undermine the policy objectives of the bid protest system if it led to definite procurement areas that contractors were not “monitoring” by bringing protest actions. Conversely, the proposed data could better articulate where such gaps were, if they existed. If researchers identified under-reported areas, alternative remediation mechanisms could be implemented. It cannot be known whether the costs incurred to specifically address those issues would offset the cost efficiencies created by this data; currently, however, no system is in place to measure such deficiencies at all.

B. Will More Data Actually Solve the Problems in the Bid Protest System?

There is also uncertainty regarding whether GAO-generated data would create a sufficiently robust dataset to identify problems in the procurement process with actionable specificity. The concern is reasonable, but this Note does not propose a one-shot solution. Rather, the intent is to create a scaffolding from which deficiencies in both the procurement process and the data that describes that process could be identified. The application of additional data would incrementally improve the protest system, year after year, more reliably than the current method of shooting from the hip to fend off anecdotal problems.

For attorneys, there is some risk that enterprising and motivated contractors may use the proposed data to deduce their chances of success without seeking counsel. That said, this data can never be determinative and is meant to be a positive supplement, not a replacement, for the advice of experienced bid-protest counsel. It is unlikely that contractors, who lack the expertise possessed by experienced attorneys, can make equivalently well-informed decisions about whether to bring bid protests.

Whenever data analysis is involved in problem solving, there are traps for the unwary. Data can always be manipulated, mishandled, or misinterpreted to provide information or guidance that appears clear but is, in reality, incorrect. Additional analysis, data, or context can reveal such manipulation. While intentional and unintentional data manipulation is a persistent problem, it can be mitigated if the government releases more raw data. More publicly accessible data would allow a variety of groups to analyze the data and discover trends in the bid protest process. More data would maximize transparency and would force analysts to show their work and articulate how they arrived at their conclusions. Increased transparency would also allow “correct” interpretations or, at the very least, necessary caveats, to be appended to any conclusions as they gain traction.

C. Additional Work for Government Personnel

Finally, an obvious problem with this proposal is that it creates more work for government personnel. That said, the author has attempted to target data that is uniform enough to be programmatically collected and is likely to already exist in databases, enabling automatic collection and collation with minimal human processing. Additionally, for DoD protests, the data recommended by this Note will likely overlap with data already being collected for Section 822(c) reporting, further decreasing the burden on personnel. Collection will necessarily create some additional work, creating a legitimate question of whether the collection is worth the additional effort.

One study found that, on average, the government spends 7.7% of the cost of contract formation on bid protest mitigation efforts. If this figure is accurate, the government is spending 7.7% of the contract formation cost to prevent something that happens 0.3% of the time. While this is not inherently wrong (so-called black-swan events exist in all facets of life and business), these figures likely represent a suboptimal cost-to-impact ratio in the bid protest process. If the proposed data could inform more targeted training for government personnel, the amount of protestable behavior would likely be reduced. If would-be protesters had a better sense of when they did, and did not, have a legitimate chance of ultimately receiving a contract, their inclination to file would likely become better aligned with meritorious protest grounds. Therefore, it seems likely that the additional effort spent to generate this data would be offset by its ability to make the procurement process more efficient, significantly reducing the overall number of bid protests filed. That said, to make a final determination, more data is required.

XI. Conclusion

Like any complex system, the bid protest space is imperfect and should, where possible, be improved. The current amount of data released by the GAO is insufficient to meaningfully identify specific problems, which means that they necessarily cannot be solved. While the author’s current data sheds some light on this quandary, a definitive government report would be much better. The RAND Report was helpful, but iterative data is needed to track the longitudinal health of the bid protest system. Blanket deterrence is too blunt an instrument to address problems in the bid protest system. The number of bid protests filed should accurately reflect the number of problems in the government’s procurement process. Until policymakers and procurement personnel have access to accurate data that clearly identifies the current bid protest system’s inefficiencies, successful policies will only be passed by luck. Releasing a more detailed dataset can improve Contracting Officers’ training and contractors’ understanding of when filing protests is in their best interests. The dataset proposed by the author can be compiled with a computer program detailed in Appendix VI: Programming, which would mitigate the vast majority of the labor required by the government. There is no reason to continue having policy discussions reliant on conjecture and personal opinion when a clear solution exists that would provide hard data at a marginal additional cost.

“A statistician will drown crossing a river that is three feet deep on average.”

- Anon

Thanks

This project was only possible because of the thoughtful feedback, encouragement, guidance, and discussions that the author had with a number of people.

At the George Washington University Law School: my eternal gratitude to Professor Cronin and Professor Papson for a lot of hours, and more patience, helping to take the rough edges off this project in both its conception and expression; thanks to Spenser Dettwyler for lots of helpful feedback as well; thanks to Professor Schooner for mentioning the need for more data in government procurement, and bid protests in particular, thereby launching this research; thanks to Professor Yukins for early guidance and enthusiasm; and thanks to Professors Ries and Lyon for early discussions.

Outside of the George Washington University: thanks to Jonathan Aronie, Townsend Bourne, Umer Chaudhry, Jeffrey Chiow, Tim Hawkins, Paul Khoury, Kevin Mullen, Daniel Ramish, Patrick Staresina, and Gary Ward. Thanks to Sarah Drabkin and Dylan Mooney for the assignment that gave rise to this larger project.

While Dan Gordon was not contacted, I cannot help but thank him for laying the groundwork for writing about these issues over the years.

Appendix I: The Overall Incidence Rates of Sustained Protest Grounds 2000–2020

For technical reasons, the density (and therefore quality) of the bid protests analyzed declines from 2000 back to 1990. Because that data is less reliable and cannot be easily disaggregated within percentages, it is omitted from this Appendix. It is included in Appendix II: The Annual Incidence Rates of Sustained Bid Protest Grounds 1990–2019, however, where the reader can observe the data from the 1990s in isolation with the caveat that it is less reliable. There are twenty-six causes in this Appendix, instead of the twenty-seven in Appendix II, because the catchall category “other” is omitted here. The incidence rate of each protest ground appears in square brackets at the end of each description. These rates are rounded to the nearest integer and therefore do not sum to exactly 100%.

Sustained Protest Grounds with an Incidence Rate of 2% or More

  1. Evaluation criteria were unreasonable, or reasonable but unreasonably applied (includes unfair / disparate treatment during evaluations and failure to properly implement cost modifications (e.g., HUBZone / SB discounts)). [20%]
  2. Contract was awarded or rejected for reasons not stated in the solicitation. [16%]
  3. The agency’s conclusion regarding a proposal (award or reject) was unsupported by the record (can be lack of documentation in evaluations or claims in proposals) or an agency official ignored the advice of advisory boards. [14%]
  4. Improper costs awarded (which is only relevant as a subsequent protest after having a sustained protest remanded to the agency, to determine the specific cost award, which is then brought back before GAO to resolve disputes about the amount to be awarded). [6%]
  5. There was a lack of meaningful discussions with some or all of the bidders, or the agency failed to disclose pertinent information (separated from the failure to inform category, in that there was, or should have been, a discussion of some sort, whereas failure to inform deals with one-way communication issues). [5%]
  6. The agency issued the wrong solicitation type (usually failed to issue a Small Business Set-Aside) or, facing a need to modify the solicitation, modified it incorrectly. [4%]
  7. The agency failed to remedy a clear issue after receiving sufficient notice. [3%]
  8. The agency’s evaluation was not thorough, ignored factors, or was incomplete. [3%]
  9. The solicitation was outside the scope of the underlying IDIQ, FSS, or MAS contract, in which the agency attempted to purchase something not covered in the contract. [3%]
  10. The solicitation was ambiguous, not properly disclosed, or otherwise stillborn upon issue. This also covers the agency clarifying terms of the solicitation in a way that confused parameters, and other problems with how the solicitation was issued. [3%]
  11. The terms of the solicitation were unreasonably restrictive. [2%]
  12. There were improper communications for the benefit of only one bidder, or the agency allowed the awardee to modify its bid but did not allow other bidders to modify theirs. [2%]
  13. There was a conflict of interest present in the solicitation or award process. [2%]
  14. The agency committed a statutory, regulatory, or FAR violation in the course of the contract solicitation or competition. [2%]
  15. The bidder was prejudicially favored or harmed in the course of competition (e.g., the awardee was allowed to modify parts of a bid, or the agency displayed favoritism or granted unequal access to information). [2%]

Sustained Protest Grounds With an Incidence Rate Below 2%

  • The agency improperly determined that a submission or protest was untimely and rejected it or the agency accepted an untimely bid as timely.
  • The agency failed to communicate with the relevant agency (usually failed to contact the SBA regarding Certificates of Competency).
  • The agency failed to inform bidder of relevant information, failed to respond to a valid request, criticism, or otherwise failed to act in furtherance of competition.
  • There was an improper modification of an offer by the contractor, who inserted new terms into the contract while finalizing the award or accepting the contract (or otherwise bungled post-award paperwork).
  • The agency modified terms of a bid (usually price adjustment, though sometimes to help awardee) or failed to modify a bid where required.
  • There was an improper modification of the solicitation by the agency (which may include a failure to modify but is generally a modification of which no bidders, or only the awardee, was informed).
  • The Contracting Officer engaged in blatantly erroneous and unacceptable behavior.
  • The contractor made misrepresentations in their bid, or their bid contained otherwise untrue information (whether or not it was willful).
  • The agency’s failure to preplan led to an otherwise unjustifiable outcome (usually the improper issuance of a sole source contract).
  • Solicitation terms either were not satisfactory to the end user of what was being procured or did not otherwise satisfy the stated objectives of the acquisition.
  • The agency unreasonably prohibited revisions to proposals.

Appendix II: The Annual Incidence Rates of Sustained Bid Protest Grounds 1990–2019

The following charts represent the author’s findings about sustained bid protest grounds year over year, from 1990–2019. For technical reasons, as noted supra, the density (and therefore quality) of the bid protests analyzed declines from 2000 back to 1990. Despite the decrease in reliability, the data is still included in this Appendix for the sake of completeness. Some protests were sustained on multiple grounds, so these charts represent about 1,500 sustained causes from approximately 1,000 sustained protests. For legibility’s sake, they are broken up into seven charts, clustered around their incidence rates. The grounds in each chart are listed in the chart’s legend. There are twenty-seven causes here, instead of the twenty-six in Appendix I: The Overall Incidence Rates of Sustained Protest Grounds 2000–2020, because a catchall category “other” was included on this graph.

[See PDF Graphs.]

Appendix III: Miscellaneous Data from the Author’s Research

The following statistics are gathered from the author’s research and are accurate reflections of the data generated by the author from available government records. To the extent possible, this information has been cross-referenced with statistics published by the government and appears to be accurate. For example, the GAO’s average sustained protest rate from 2001–2019 was 17.938%, while the author’s data generated a sustain rate of 17.878%, a difference of only 0.06 percentage points. That said, the author makes no representations that the information presented is unassailably correct. It is new data from an implicitly imperfect dataset because of the issues with how the government presents its data (discussed in Part III: Problems Created by the Lack of Protest Data and Part IV: Asking the Wrong Questions: Misperceptions About the Bid Protest System). The figures stated below are all rounded to the nearest integer, as decimal-point precision would be an over-confident presentation.

A-1: Percentage of GAO Protests Sustained when an intervenor joined in a protest: 19%.

A-2: Percentage of GAO Protests Sustained when an intervenor did not join in a protest: 14%.

Note: While figures A-1 and A-2 may initially appear counterintuitive, where a protest’s outcome seems clear, intervenors may decline to spend money on counsel. It was also pointed out to the author that intervenors usually only join post-award protests, which further skews the data.

B: Percentage of GAO Protests alleging an Organizational Conflict of Interest that were sustained: 24%.

C: The average sustain rate was 18%.

D: The Sustain Rate for protests involving Variable Quantity Contracts (e.g., FSS, IDIQ, or TDO): 22%.

E-1: Percentage of pro se protests filed at the GAO that were dismissed: 16%.

E-2: Percentage of protests filed at the GAO with representation that were dismissed: 13%.

E-3: Relative difference between the two dismissal rates: approximately 23% (i.e., pro se protests were dismissed about 23% more often than represented protests).
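A quick arithmetic check of figures E-1 through E-3, on the reading that the 23% figure reflects a relative rather than percentage-point difference:

```python
# The 3-percentage-point gap between the dismissal rates in E-1 (16%)
# and E-2 (13%) corresponds to a relative difference of about 23%,
# which matches the figure reported in E-3.
pro_se_dismissal_rate = 0.16
represented_dismissal_rate = 0.13

absolute_difference = pro_se_dismissal_rate - represented_dismissal_rate
relative_difference = absolute_difference / represented_dismissal_rate
```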

F: Twelve protest grounds were responsible for 84% of sustained protests, suggesting that targeted Contracting Officer training could reduce chronic issues.

Appendix IV: Sample of Proposed Data Sheet

This Appendix contains descriptions of the proposed data to be collected. The next five pages contain each column’s name and an explanation of what it represents, the variables that will be in its cells, and the source / method of entry. Most can be extrapolated directly from other documents using a search function or Excel tether. These will be referred to as “auto-population” below. Where the spreadsheet terms are unclear, a more complete name follows the name on the spreadsheet.

Data to Be Entered or Retrieved:

This is the data that would have to be inserted into the spreadsheet from external sources.

1. Company Name – Can be auto-populated from a number of locations.

  • Purpose: The Company Name is the most readily recognizable means of identification for humans, but it is also the most prone to variation through typos, omissions, or formatting differences (e.g., is The F. Reichelt Corp., LLC the same as Reichelt Corp., Riechelt, The Reichelt Corp., L.L.C., or The Franz Reichelt Corporation? Maybe so, but there are slight variations in each which would not be matched by Excel, a critical consideration for data collection).
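The name-variation problem described above can be partly mitigated by normalizing names before matching, though normalization is no substitute for a unique identifier such as a DUNS number. A minimal, hypothetical sketch:

```python
import re

# Crude normalization: lowercase, strip punctuation, and drop common
# corporate suffixes and articles. This collapses some of the variants
# noted in the text, but it cannot confirm that "Reichelt Corp." and
# "The Franz Reichelt Corporation" are the same entity -- hence the
# need for a serial identifier like the DUNS number.
SUFFIXES = {"llc", "inc", "corp", "corporation", "co", "the"}

def normalize(name: str) -> str:
    cleaned = re.sub(r"[.,']", "", name.lower())
    return " ".join(w for w in cleaned.split() if w not in SUFFIXES)
```

Under this sketch, "The Reichelt Corp., L.L.C." and "Reichelt Corp." both normalize to "reichelt", while an exact-match search in Excel would treat them as different companies.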

2. DUNS No. – Can be auto-populated from a number of locations.

  • Purpose: This is the government’s method of applying a serial identification number to each discrete contractor, which is more reliable and less error-prone for a computer to interface with, though it is less accessible for a human reader. For example, the number of protests filed under a single DUNS number is relatively meaningless to a reader unaware of how frequently the company the number represents bids on contracts.

3. Prot Rep – Did the Protester appear pro se or have legal representation? This value can likely be auto-populated, but, since it is binary, it would alternatively be easy to enter the value manually.

  • Purpose: This will indicate whether the protester was represented by an attorney, which is particularly important for agency protests, which were partially designed to be sufficiently informal that representation was not necessary.
  • 0 = Pro Se
  • 1 = Represented by an Attorney

4. Short Citation (B-Number, Docket Number, or Agency Case Number) – This can be auto-populated from a number of locations.

  • Purpose: The value of listing the Citation is two-fold: not only would it provide a citation reference to the protest that was filed, it would also allow researchers to identify where the protest was filed (the GAO, COFC, or the agency) using deduction (e.g., if the citation does not start with “B-” it is not a GAO protest; if it does not contain the citation conventions of the COFC’s reporters, it is not a COFC case; and if it is neither, it is an agency protest).
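The deduction described above can be sketched in Python. The COFC docket pattern below is an assumption for illustration, since the exact citation conventions being keyed on are not specified here:

```python
# Sketch of forum deduction from a short citation's format.
# The COFC docket pattern is an assumed shape for illustration only.
import re

def infer_forum(citation: str) -> str:
    """Classify a short citation as GAO, COFC, or Agency by its format."""
    if citation.startswith("B-"):
        return "GAO"
    if re.match(r"^\d{2}-\d+", citation):  # assumed COFC docket shape
        return "COFC"
    return "Agency"
```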

5. Solic / K No. – Solicitation or Contract Number – This value contains the Solicitation or Contract number, or says CANCELLED if the contract was cancelled. This can be auto-populated; whether cancelled contracts could also be auto-populated would depend on government records and programmer competency.

  • Purpose: This would be useful for tracking a given solicitation across datasets.

6. Solicitation Issue Date – Date the Solicitation was published on the Government-wide Point of Entry (SAM.gov) – This can be auto-populated from a number of sources.

  • Purpose: This would provide a datapoint to measure how long a solicitation had been issued before it was protested, which could be cross-referenced with many variables to derive significant information.

7. Protest Filing Date – This can be auto-populated from a number of locations.

  • Purpose: The Protest Filing Date is an important benchmark that could be compared against other relevant dates to create a variety of statistics, including Days Between Solicitation/Award and Protest Filing, Days Between Protest Filing and Resolution, and many others.

8. Protest Resolution Date (Decision Date) – This can be auto-populated from a number of locations.

  • Purpose: This could be combined with the filing date to determine how long it took for the protest to resolve and then compared to the Contract Determination Date to identify which part of the protest process overall had the most impact on delaying the procurement.

9. Ult K Awd Date - Ultimate Contract Remediation or Award Date – This can be auto-populated from a number of locations.

  • Note: This would either be the date the remedied contract was awarded, the date the solicitation was re-issued, or would be marked “CANCELLED” to indicate that the solicitation had been cancelled entirely.
  • Purpose: This could be combined with the protest resolution date to determine how long the recompetition or corrective period lasted, or with the protest filing date to determine the entire period of time the protest consumed.

10. Originating Agency – This can be auto-populated from a number of locations.

  • Purpose: This would allow researchers to quantify how many protests were received by each agency. This is important because if the incidence rates of protests change, this would inform researchers whether it was a uniform change across agencies, suggesting a government-wide shift, or if it is attributable to a single agency, suggesting a more specific issue. Additionally, agencies that saw below-average protest numbers could be studied to better understand their best practices and help other agencies emulate their success.

11. Protest Outcome – This may be able to be auto-populated depending on the government’s databases; alternatively, if this must be manually entered, it would only require typing 3–4 characters.

  • “SMD” – Sustained Merits Decision.
  • “DMD” – Denied Merits Decision.
  • “AVCA” – Agency took Voluntary Corrective Action.
  • “KVW” – Contractor Voluntary Withdrawal Before Corrective Action.
  • “DPD” – Deficient Protest Dismissal.
  • Purpose: The value of breaking down the outcomes in more detail was discussed in Part III: Asking the Wrong Questions: Misperceptions About the Bid Protest System, under Question 3. These five categories are critical to ensuring that those examining the data can meaningfully understand the outcome of a given protest.
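As a sketch of how these five codes could be enforced at data entry, a simple validation map (a hypothetical helper, not part of the proposed sheet) would reject typos before they reach the spreadsheet:

```python
# Hypothetical validation helper for the five outcome codes above.
OUTCOME_CODES = {
    "SMD": "Sustained Merits Decision",
    "DMD": "Denied Merits Decision",
    "AVCA": "Agency took Voluntary Corrective Action",
    "KVW": "Contractor Voluntary Withdrawal Before Corrective Action",
    "DPD": "Deficient Protest Dismissal",
}

def validate_outcome(code: str) -> str:
    """Return the canonical code, raising on anything outside the five values."""
    code = code.strip().upper()
    if code not in OUTCOME_CODES:
        raise ValueError(f"Unknown protest outcome code: {code!r}")
    return code
```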

12. RcmpMod – Was the Recompetition Modified? If the protest was sustained, were the bids allowed to be modified? – This would be manually entered or possibly auto-populated, depending on data available. There would be three different values:

  • 0 = Not recompeted; modification of the original solicitation or contract solved the issue.
  • 1 = Recompeted but no bid modification was allowed.
  • 2 = Recompeted and bid modification was allowed.
  • Purpose: This is critical to understanding the “true” outcome of a bid protest. While having a bid protest sustained is a major victory, it is meaningless on its own to a contractor. Because the contract may or may not be recompeted, this datapoint is critical to understanding whether there are protest grounds wherein there is not even an opportunity to win, because the defect which led to the protest ground was simply remedied without recompetition. It is also important to factor in whether recompetition occurred with or without modifications, because there is likely to be a major disparity in outcomes between those two types of recompetition, and the marginal time-cost of differentiating should be minimal.

13. UltAwdRec – Ultimate Award Record – Was the protesting contractor awarded the final contract? This may be auto-populated or efficiently manually entered, with three possible values:

  • 0 = The original awardee won the final contract.
  • 1 = The protester won the final contract.
  • 2 = A third party won the final contract.
  • Purpose: This is also critical to understanding the “true” outcome of a bid protest. If the contractor has their protest sustained but is not then awarded the contract, the protest held no value for them. Understanding which protest grounds ultimately resulted in the contract being awarded to the protester is a critical part of making a more informed decision about whether to move forward with a bid protest.

14. Was K An SB/DG? – Was the contract on offer for a Small Business or Disadvantaged Group? – This would be manually entered, and the range of entries would be determined by how specific the GAO or Agency wanted to be in their reporting.

  • Purpose: The specific value of this datapoint is not determined, but it is likely an influential factor in a number of different parts of the protest process, especially whether the protester was appearing pro se and the forum of choice.
  • 1 = Yes, the contract was a Small Business or Disadvantaged Group Set Aside.
  • 0 = No, the contract was not Small Business or Disadvantaged Group Set Aside.

15. Prcrmnt Val. – The dollar value of the procurement – This could likely be auto-populated from government data, although it is not clear how Cost Reimbursement Contracts would be represented. The author recommends that those values be filled in at contract closure; this would obviously delay the reporting, but the data would be extremely valuable for historical analysis.

  • Purpose: This would be critical in helping to understand the relevant dollar value at which different trends emerge in bid protests, especially choice of forum and volume of protests received.

Data Which Could Be Extrapolated

This is data that can be automatically extrapolated from the above data after it was entered into a spreadsheet and would not require additional work to generate.

Please note that the necessary Excel formulas are in sub-bullets where needed (see sheets in Appendix IV: Sample of Proposed Data Sheet for column references). When reading the formulas, [s] means the start of a data column, [e] means the end of a data column, and [x] means only the current row’s version of the cell.

1. Prot p/ Sol. – Protests per solicitation or contract – This counts how many times the same number appears in the “Solic / K No.” Column.

  • Purpose: This would track which contracts received the highest and lowest numbers of protests, illuminating the factors that contributed to the decision to file a bid protest.
  • Excel Formula: “=COUNTIF($D$[s]:$D$[e], D[x])”

2. Iss2Prot – The days between the solicitation being issued and the filing of the protest.

  • Purpose: This would automatically calculate how long the solicitation existed before the protest occurred, which would inform how quickly the problem was identified by a contractor.
  • Excel Formula: “=G[x] - F[x]”

3. Prot2Dec – Days from when the Protest was filed to the date on which the Protest was decided.

  • Purpose: This automatically calculates how long the Protest adjudication process took.
  • Excel Formula: “=H[x] - G[x]”

4. Prot2UltK – Days from the filing of the protest to the ultimate contract award.

  • Purpose: This would automatically calculate the length of time during which the protest actually impacted the procurement process.
  • Excel Formula: “=I[x] - G[x]”

5. FY - Fiscal Year – Inclusion would save duplicative work during data entry.

  • Purpose: This would help to sort the data in a way that was aligned with the government’s reporting practices, while avoiding duplicative data entry work.
  • Excel Formula: “=IF(MONTH(J[x]) < 10, YEAR(J[x]), (YEAR(J[x]))+1)”
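For readers implementing the automation outside of Excel, the fiscal-year logic can be expressed in Python as a direct translation of the formula above, assuming the U.S. federal fiscal year beginning October 1:

```python
# Direct translation of the Excel FY formula: dates in October-December
# belong to the next federal fiscal year.
from datetime import date

def fiscal_year(d: date) -> int:
    """Return the U.S. federal fiscal year containing the given date."""
    return d.year if d.month < 10 else d.year + 1
```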

6. Pre-Awd / Post-Awd – Whether the protest occurred pre-award or post-award – This could be automated by keying on the different suffixes in RFIs and RFPs versus the suffixes in contracts; however, more examination would be necessary to articulate exactly how the keying would be established.

  • Purpose: This would help to further group the protests into more distinct categories and identify where specific issues were appearing.
  • Excel Formula: T.B.D.

7. Prot by Ktr – How many protests a given contractor has filed.

  • Purpose: This would help to find and compare recurrent protesters and identify what motivates them. Are certain solicitation or contract types more likely to be protested? Are there industries where such gamesmanship is seen as part of doing business? Are there certain Contracting Offices within an Agency that publish chronically deficient solicitations?
  • Excel Formula: “=COUNTIF($B$[s]:$B$[e],B[x])”

Appendix V: Example of Modified Congressional Report Data

The following information could be added to the macro data reported in the Annual Report to Congress. All of it could be automatically generated except for the summary charts, which would only take a few hours to compile at the end of each year.

The end of this section includes a potential template for the annual report, with a text block for the metadata and two pages with examples of what the proposed protest-by-protest data sheets would look like. Because this is only meant to be a representative example, the actual datapoints are randomized. The two sample pages of the datasheet would be one continuous row of columns; because that would not be legible in this Note, the columns are separated by whether the data is entered or extrapolated. The dates used are randomly generated and so may not be reasonable approximations. The Ultimate Contract Award dates may need to be left blank in reality and backfilled from other sources, but that would be manageable to automate with the identification information provided.

1. Total Protests Filed

  • Purpose: This counts how many individual protests were filed overall, which is not the same as the number of merits decisions resolved. It is simply a tally of the full list; note that Excel’s COUNT function must be applied to a column containing numbers, not text.
  • Excel Formula: “=COUNT(C[s]:C[e])”

2. Total Solicitations Protested

  • Purpose: This would track how many distinct solicitations or contracts were protested each year, which is critical in identifying and understanding what an increase or decrease in protests year-over-year represents: specifically, whether the number of solicitations or contracts being protested is increasing, suggesting a systemic issue; whether individual solicitations or contracts are receiving more protests, suggesting specific incompetent contracting officers; or whether specific contractors are filing higher numbers of protests, suggesting bad actors.
  • Excel Formula: “=((SUM(IF(E[s]:E[e]<>"",1/COUNTIF(E[s]:E[e],E[s]:E[e]),0)))-1)”
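The distinct-count logic above, including the exclusion of the CANCELLED placeholder, reduces to a set comprehension in Python (a sketch operating on a plain list of the column's values):

```python
# Sketch: count distinct solicitation numbers in a column of values,
# ignoring CANCELLED placeholders and blank cells.
def total_solicitations_protested(solic_column):
    """Count distinct, non-blank solicitation numbers, excluding CANCELLED."""
    return len({v for v in solic_column if v and v != "CANCELLED"})
```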

3. Average Number of Protests Filed by a Contractor

  • Purpose: This would help identify contractors who lodged a high number of complaints. Alternatively, this could be used to create lists of groups of contracts that had high rates of protest, which may lead to the identification of genres of work that were particularly protest-prone across the government.
  • Excel Formula: “=AVERAGE(Q[s]:Q[e])”

4. Most Protests Filed by One Contractor

  • Purpose: This would help identify protesters who file chronically, but its true value is in understanding means, medians, and modes. This value could also represent the ten, twenty, or even fifty most “active” protesters. This would not name a given contractor, only present the numerical value. The objective would be to get this number as low as possible, demonstrating that Contractors were “behaving themselves” and not filing repeatedly in bad faith.
  • Excel Formula: “=MAX(Q[s]:Q[e])”

5. Average Number of Protests per Solicitation

  • Purpose: Providing the average number of protests per solicitation would provide an important baseline to understand what a remarkable number of protests for a given solicitation is. Ostensibly, if solicitations are receiving a high number of protests, it suggests an issue with the underlying solicitation or a normative business practice of aggressively protesting a specific contract type, and not a bad faith action by a single disgruntled contractor.
  • Excel Formula: “=(COUNTIF(E[s]:E[e],"<>CANCELLED"))/((SUM(IF(E[s]:E[e]<>"",1/COUNTIF(E[s]:E[e],E[s]:E[e]),0)))-1)”

6. Most Protests for a Solicitation or Contract

  • Purpose: This would help identify which solicitations attracted the most issues, as a way to find the agencies with the most recurring problems. As with Most Protests Filed by One Contractor, this could be quickly modified to count the top ten, twenty, or fifty most protested contracts. Conversely, this also shows the value of releasing the underlying data, so that different researchers could quickly find the values that were most important to them.
  • Excel Formula: “=LARGE(E[s]:E[e],((MAX($E$[s]:$E$[e]))+1))”
  • Note: This is a more complex formula because it has to cull the highest value, which will be for the number of cancelled contracts.

7. Frequency of Protest (Chart - by Month)

  • Purpose: This would help identify trends in protest filings more closely, to improve budgeting and response rates.
  • Excel Formula: No formula; Excel’s Insert Chart function would be relied on.

8. Overall Agency Voluntary Corrective Action (AVCA) Percentage

  • Purpose: This would provide a percentage of the overall protests filed each year that resulted in an AVCA and therefore indicate how many bid protests were sufficiently meritorious that the Agency did not want to contest them. While this number can be extrapolated from the current Annual Reports to Congress, it is an unnecessary step with many pitfalls. (See supra, Part IV: Asking the Wrong Questions: Misperceptions About the Bid Protest System.)
  • Excel Formula: “=(COUNTIF($K$[s]:$K$[e],"AVCA"))/[Number of protests overall]”

9. AVCA by Agency

  • Purpose: Operating on the assumption that AVCAs represent an admission by agencies that they were in the wrong, this list would allow agencies to compare themselves against other agencies.
  • Excel Formula: “=COUNTIFS($K$[s]:$K$[e],"AVCA",J[s]:J[e],"Sample Agency")”

10. KVW – Contractor Voluntarily Withdrew their protest.

  • Purpose: This would be useful to quantify how often the protester was convinced by access to the Agency Record that they had no valid grounds for protest. This would provide a datapoint to indicate what percentage of protests may be prevented by increasing a would-be-protester’s access to information.
  • Excel Formula: “=COUNTIF($K$[s]:$K$[e],"KVW")”

11. Sustained & No Recompetition

  • Purpose: This would quantify how often a protester won their protest, but the remediation of the issue did not involve recompetition, meaning that the protest did not create a chance of eventual award to the contractor. This would be valuable in helping to inform Contractors that a given protest, if sustained, was nonetheless unlikely to result in them being awarded the contract.
  • Excel Formula: “=COUNTIF(L[s]:L[e],"0")”

12. Sustained & Won Recompetition (Protester Won New Contract Award)

  • Purpose: This would provide the Contractor with a counterpoint to “Sustained & No Recompetition” to understand what protest grounds were likely to result in a recompetition in which the protester won the eventual contract.
  • Excel Formula: “=COUNTIF(M[s]:M[e],"1")”

13. Sustained & Lost Recompetition (Protester Lost New Contract Award)

  • Purpose: This datapoint would show which protest grounds the protester succeeded on but then lost the subsequent recompetition of. It would be meant to be studied in concert with the two other Sustained and Recompetition variables, to render more detailed insight through specific studies.
  • Excel Formula: “=(COUNTIFS(L[s]:L[e],"1",M[s]:M[e],"0"))+(COUNTIFS(L[s]:L[e],"2",M[s]:M[e],"0"))”

14. Average Days from Filing to Resolution

  • Purpose: This would provide a useful metric year over year, to track the impact the protest process was having on procurements overall and whether the average number of days was going up or down over time.
  • Excel Formula: “=AVERAGE(T[s]:T[e])”

15. Four Graphs generated:

  • Sustain Rate by Agency (Graph) – The percentage of protests sustained for each agency (may also be broken down by protest ground for each agency).
  • Sustained Grounds by Type (Graph) – How these will be defined will be subjective and take extra work, as such they may be best left to individual researchers to define.
  • Sustained Protests by Agency (Graph) – This will demonstrate how agencies are faring against one another.
  • Protests per Contract Value (Graph) – Distribution plot of how many protests occurred at different contract value ranges.

DO NOT USE ANYTHING IN THE REPORT OR SPREADSHEETS BELOW FOR RESEARCH. THE FIGURES THEY CONTAIN ARE ONLY EXAMPLES.

[See PDF Tables]

Appendix VI: Programming

Below is a sample of a basic program the author wrote to extract information from the GAO’s Bid Protest Merits Decisions, which outputs reports similar to the one proposed. The author has completed a far more advanced version capable of outputting the data detailed earlier in this Note and in the spreadsheets. It would rely on access to government data to be fully functional, but all of the processing and outputs have been finalized. As such, it only needs to be “pointed” to the right information, a relatively straightforward task. It is not included here because it would add an additional twenty-six pages of code. The sample below is simply a proof of concept to give those interested a sense of the strategy used to extract and isolate the information.

Many of the proposed pieces of data, especially the numerically represented datapoints, can be deductively auto-populated. This is to say that the code is written to check for key characteristics of the text that identify it as the desired piece of information. Additional value can be gleaned by combining disparate information across government databases post-processing, which could make this information far more useful. For example, the information obtained here could be cross-referenced with the data from USAspending.gov to create novel metrics that would not be available from either dataset in isolation.
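To illustrate the kind of key-characteristic checking described above, the following sketch (not the author's actual program) uses regular expressions shaped like a GAO B-number and a decision date to pull fields from raw decision text; the sample patterns and labels are assumptions for demonstration:

```python
# Illustrative sketch of deductive auto-population: regex patterns keyed
# to the characteristic shape of each datapoint extract fields from a
# decision's raw text. Patterns are assumptions, not the author's code.
import re

def extract_fields(decision_text: str) -> dict:
    """Pull a B-number and a decision date out of raw GAO decision text."""
    fields = {}
    b_no = re.search(r"B-\d{6}(?:\.\d+)?", decision_text)
    if b_no:
        fields["b_number"] = b_no.group()
    dec_date = re.search(
        r"(?:Decided:|Date:)\s*([A-Z][a-z]+ \d{1,2}, \d{4})", decision_text
    )
    if dec_date:
        fields["decision_date"] = dec_date.group(1)
    return fields
```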

The code below is written in the Python 3 programming language and was executed on a Linux OS. The text has been compressed to save space but contains all the commands needed to execute the code. Note that all comments and command-line prints have been removed; this version is purely for the technical record, not for ease of use. If you are interested in having the actual, actionable code file (which is much longer but more user friendly), please feel free to contact the author.

[See PDF code]

    Author