
Public Contract Law Journal, Vol. 53, No. 1

Speeding Up Services Procurements: Strategies and Tools to Award Quickly, Survive Protest, and Execute Efficiently

David Bodner and Per Midboe


  • The U.S. Government uses solicitations to meet its massive demand for contracted services. 
  • This article offers twenty-seven specific best practices for Level of Effort (LOE) service contracting and sample solicitation language to increase the government’s speed to award, reduce its protest risk, and capture the benefits of competition.
  • Competitive contracting for LOE services presents agencies with numerous strategic decisions that require a balance of interests and compromises.

The government relies on support services contractors to accomplish a myriad of critical government programs—ranging from major defense weapon systems to program management for the Social Security Administration. In fiscal year 2022, the government contracted for $435 billion worth of support services. The government uses solicitations to select from this vibrant, diverse, and competitive marketplace of contractors. The terms of the solicitation are immensely important to the speed of contractor selection, the defensibility of the selection, and the business value of the resulting contract. Most of the decisions that the government makes in setting up the solicitation fall into three broad categories: (1) what contract type to choose; (2) how best to describe the government’s contractor workforce needs; and (3) how best to evaluate proposals, including decisions on what proposal information to ask offerors to provide. In each of these broad categories, government source selection teams face a number of decisions about how best to balance the thoroughness of their review against the competing goals of increasing their speed to award and reducing any unnecessary work for both offerors and evaluators. This article explores these strategic decisions within Level of Effort (LOE) support services acquisitions to provide best practices and sample solicitation language designed to increase the government’s speed to award, reduce its protest risk, and capture the benefits of competition.

I. Introduction to the Federal Services Marketplace

The government market for support services is very large and highly competitive. As an example, in 2021, the Navy’s primary support services vehicle—the SeaPort-NxG multiple award Indefinite-Delivery-Indefinite-Quantity (IDIQ) contract—boasted 2470 unique contractors and anticipated awarding $5 billion of services work per year. In fiscal year 2022, the U.S. government obligated $694 billion in contracts, and sixty-two percent of its obligations were for services. The Department of Defense (DoD) spent $205 billion on services, and civilian agencies spent $230 billion. Moreover, the government’s need for such services spans huge sectors of the economy, from complex defense system engineering, to program management support, to administrative office support, and beyond. Within this bustling, competitive marketplace, agencies want to be able to identify the right vendor quickly and get a good price for the types of support they need. In pursuing that goal, government source selection teams face a broad range of strategic choices that can greatly influence the speed at which they contract as well as the quality and cost of the services in performance.

In many cases, government source selection teams structure deals for these services as competitively awarded Level of Effort (LOE) contracts, which allow the government to procure services an hour at a time. Beyond that commonality, however, there are a wide variety of contract types and evaluation strategies that government procurement teams employ in competing LOE services contract awards, each of which touches on a host of services-specific issues. The government procurement team’s strategic approach to addressing these choices and issues will greatly influence its speed of contracting, the defensibility of its awards, and the business value of the resulting contract.

This article explores these strategic decision-points and provides twenty-seven specific best practices for LOE service contracting and sample solicitation language to increase the government’s speed to award, reduce its protest risk, and capture the benefits of competition.

II. Strategic Decisions in Competitive LOE Services Contracting

Most of the critical strategic decisions that a government source selection team will make fall into three broad categories: (A) what contract type to choose; (B) how best to describe the government’s contractor workforce needs; and (C) how best to evaluate proposals, including decisions on what proposal information to ask offerors to produce. Within each of these broad categories, source selection teams face a number of specific decisions about how best to balance the thoroughness of their review against the competing goals of increasing their speed to award and reducing any unnecessary work for both offerors and evaluators. Furthermore, all of these choices impact the government’s ability to defend its evaluation record. As such, government source selection teams should consider each of these decisions carefully and understand how each element interrelates with the other elements of their procurement.

Importantly, this article will not delve deeply into the distinctions between Federal Acquisition Regulation (FAR) Part 15 procurements and FAR Part 16.5 “fair opportunity” task order competitions conducted under multiple award IDIQs. The strategic decisions that this article addresses apply to both avenues for acquiring LOE support services. As such, a detailed discussion of the differences between FAR Part 15 and FAR Part 16.5 is out of the scope of this article. Furthermore, this article focuses on LOE support services acquisitions that are not primarily performance based. Although the government purchases a wide variety of services using performance-based work statements, the recommendations in this article primarily apply to LOE knowledge-economy jobs, which are harder to measure through performance-based contracting tools. Finally, this article does not directly address the limits on contracting for personal services or inherently governmental functions. While important, these are generally hard limits, not strategic decisions about how to structure the solicitation, and thus fall outside of the scope of this article.

Instead, this article explores what issues a government source selection team should consider when (A) selecting a contract type, (B) describing the government’s contractor workforce needs, and (C) structuring an evaluation scheme.

A. Choice of Contract Type

One of the very first strategic decisions a government source selection team will make is to determine what contract type is most appropriate for the required work. This decision will have wide-ranging impacts on what types of contractor behavior the government incentivizes, how the government allocates performance risk with the contractor, and what actions the government source selection team must take to make an award under that contract type. As such, this decision may well be the single greatest determinant of how successful the government will be at controlling cost and/or adapting to unforeseen situations in performance and how quickly the government can move through proposal evaluation to make award.

1. Performance Incentives of Various Contract Types

The FAR divides the spectrum of various contract types into two broad categories of risk allocation—fixed-price type contracts and cost-reimbursement type contracts. Specialized contract types within each category provide a wide range of risk allocation options for government source selection teams. The following figure provides an overview of the contract types:

Figure 1. Overview of Contract Types with Corresponding Risk

a. Fixed Price Contracts

In fixed-price type contracts, the contractor bears all (or most) of the cost risk associated with performance, unless the contract includes some form of defined price adjustment. As such, at one end of the risk allocation spectrum, the FAR contemplates a Firm-Fixed Price (FFP) contract, which leaves essentially all of the cost risk with the contractor. Under an FFP contract, the contractor bears the risk that performing the work will cost more than the firm-fixed price agreed to. Therefore, even if it does cost the contractor more to perform, the government’s price remains the same. On the other hand, the contractor keeps the difference between its cost of performance and the government’s FFP if it performs below the contract price. Therefore, in an FFP contract, the contractor is incentivized to meet its obligations under the contract for the lowest cost. Moreover, since the contractor bears the majority of the risk, it has the primary responsibility for determining the approach that it will use to meet the requirement; as long as the contractor meets the contract obligations, the government has few avenues to direct the contractor’s performance.

If this risk allocation is too heavily weighted towards the contractor, however, the government can shift some of the cost risk back to itself using other types of fixed-price contract types. Some of the common fixed-price alternatives for competitive LOE services contracting are Fixed Price Incentive Fee (FPIF) and Fixed Price Award Fee (FPAF). In each case, the government accepts that the otherwise fixed price that it negotiated for the work will change somewhat in performance. With FPIF, the government is typically accepting that the fixed price may increase up to a ceiling if the contractor’s cost of performance increases. The government accepts sharing in some portion of cost increases in exchange for an opportunity to share in potential cost savings if the contractor can perform below the negotiated fixed price. Similarly, in an FPAF contract, the government is accepting that it may pay an increased fee for higher quality service in performance.

When agencies use the fixed-price contract types to acquire services on an hourly basis, they generally modify them into fixed-rate contract types. The FAR expressly acknowledges Firm-Fixed Price Level of Effort (FFP LOE) and Time and Materials (T&M) fixed-rate contract types, which function very similarly in performance. Under these two contract types, government source selection teams provide a maximum number of hours and a defined set of labor categories for which offerors then propose fixed rates. To determine the contract price, the contractor simply multiplies the number of hours it provides for each labor category by the applicable fixed rate for that labor category and sums these results across all labor categories. Since these contract types do not contractually lock in the mix of labor categories (i.e., the labor mix) that the government will actually use in performance, and often do not contractually lock in the total hours for any specific labor category either, these contract types give the government more flexibility to adjust to changed conditions in performance as compared to a simple FFP, FPIF, or FPAF contract. Since this article addresses LOE services, it will focus on fixed-rate contracts as the most applicable fixed-price type contracts for LOE service contracting.
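The fixed-rate pricing mechanics described above reduce to a simple sum. The following sketch illustrates the calculation; the labor categories, hours, and rates are hypothetical illustrations, not figures from any actual solicitation.

```python
# Hypothetical fixed-rate (T&M / FFP LOE) pricing: the government defines
# labor categories and hours; offerors propose a fixed rate per category.
hours = {"Senior Engineer": 1920, "Engineer": 3840, "Analyst": 1920}
rates = {"Senior Engineer": 150.00, "Engineer": 110.00, "Analyst": 85.00}  # $/hour

# Contract price = sum over labor categories of (hours x fixed rate).
price = sum(hours[cat] * rates[cat] for cat in hours)
print(f"Total evaluated price: ${price:,.2f}")  # $873,600.00
```

Because the rates are fixed, any later change in hours or labor mix can be priced mechanically, which is the flexibility advantage discussed below.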

b. Cost-Reimbursement Contracts

Cost-reimbursement type contracts are the other broad category of contract types that the FAR defines. In a cost-reimbursement contract, the government accepts a much larger amount of the cost risk in performance but gets more control over how the contractor will meet its needs. Unlike a fixed-price type of contract, in a cost-reimbursement contract the government is responsible for the actual costs of the contractor up to an established ceiling amount; moreover, the government is only entitled to receive the contractor’s “best efforts” to complete the effort, rather than a defined performance outcome. As with the fixed-price side of the spectrum, there are several variants of cost-reimbursement contracting. The most relevant for this article are Cost-Plus-Fixed-Fee (CPFF), Cost-Only, Cost-Plus-Award-Fee (CPAF), and Cost-Plus-Incentive-Fee (CPIF).

i. Cost-Plus-Fixed-Fee and Cost-Only Contracts

CPFF and Cost-Only contracts exist at the extreme other end of the contractor risk spectrum from FFP. Under these contract types, the government accepts all of the risk of essentially any cost increase in performance; the only difference between CPFF and Cost-Only is that the contractor receives a fee for its performance in CPFF. For CPFF contracts, the government negotiates a fixed fee prior to award. This fee amount, in terms of total dollars, will not change regardless of the costs the contractor incurs in performing the work. In turn, the government agrees to pay the full cost of performance up to a stated ceiling limit, at which point the contractor may cease providing the service. CPFF contracts typically give contractors little incentive to control cost and no incentive for exceptional performance.

ii. Cost-Plus-Award-Fee and Cost-Plus-Incentive Fee Contracts

Where performance incentives are important, government source selection teams can consider CPAF and CPIF contracts. These contract types still leave the government with much of the cost risk and meaningful control over the method of performance, but each uses fee increases or reductions to incentivize contractor performance in different ways.

In CPAF contracts, the government typically links the contractor’s fee amount to contractor performance; in this way, CPAF contracts encourage contractors to spend more in performance to ensure that the quality of the performance is high enough to capture the maximum award fee. This option may be an acceptable trade for the government where high-quality performance is a critical consideration, but high-quality performance can be costly.

In CPIF contracts, the government typically links the contractor’s fee to its cost performance, essentially creating a limited cost-sharing structure. Government teams often structure CPIF deals around five highly interrelated and critical elements: target fee; target cost of performance; maximum fee; minimum fee; and “share line,” which is an expression of the cost-sharing arrangement for underruns and overruns. For instance, the parties could agree to a CPIF deal with a target cost of performance of $100 million and a target fee of $10 million, which is the fee the contractor would receive if it performed exactly at target cost. Further, the deal could specify that the maximum fee was $15 million and the minimum fee would be $5 million, with (for simplicity) a 50%/50% share line for both overruns and underruns. Under these terms, if the contractor actually incurred $105 million in performance cost, its actual fee would be $7.5 million, since it would share 50% of the $5 million cost overrun it experienced above its target cost. Conversely, if it only incurred $90 million in performing the requirement, the contractor would earn its maximum fee of $15 million, since it would share in 50% of the underrun below its target cost. Although this fifty-fifty cost sharing relationship exists around the target cost, it is limited by the maximum and minimum fee amounts. For instance, once a contractor’s overrun causes it to hit the minimum fee, there is no further cost sharing; instead, the government is responsible for the full amount of overrun costs beyond that minimum fee point, which is sometimes called the “point of total assumption.” As such, a CPIF contract incentivizes the offeror to provide a low-cost solution that meets the requirements, but this incentive is limited to a narrower range, since the government receives all of the underrun benefits below maximum fee performance and all of the overrun costs above minimum fee performance.
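The CPIF share-line arithmetic above can be sketched in a few lines of Python. The default parameters mirror the hypothetical $100 million deal in the text; the function itself is an illustration of the mechanics, not a contractual formula.

```python
def cpif_fee(actual_cost, target_cost=100.0, target_fee=10.0,
             max_fee=15.0, min_fee=5.0, gov_share=0.5):
    """Contractor fee (all values in $M) under a CPIF share line:
    the contractor absorbs (1 - gov_share) of any overrun and keeps
    (1 - gov_share) of any underrun, clamped between min and max fee."""
    contractor_share = 1.0 - gov_share
    fee = target_fee + contractor_share * (target_cost - actual_cost)
    return max(min_fee, min(max_fee, fee))

print(cpif_fee(105.0))  # $5M overrun, 50/50 share -> fee of 7.5
print(cpif_fee(90.0))   # $10M underrun -> fee hits the 15.0 maximum
print(cpif_fee(120.0))  # deep overrun past the minimum fee -> 5.0
```

The clamping in the last line is what produces the “point of total assumption”: once the fee floor is reached, further overruns fall entirely on the government.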

c. Other Aspects of Contract Type
i. LOE contract versus completion contracts

Beyond the cost risk allocation, choosing to procure services on an LOE basis (either fixed-rate or cost-reimbursement)—as opposed to a completion basis—adds another level of complexity to the performance risk allocation between the parties. While completion contracts require the contractor to perform until the task is complete, procuring services on an LOE basis permits the contractor to demand payment for merely providing the required number of hours of “best efforts,” irrespective of whether that effort actually achieves any end goal or provides any value to the government. This “best efforts” aspect generally necessitates greater government oversight of the contractor performance to ensure that its work is continuing to benefit the government.

ii. Best contract type for LOE

Putting this all together, the intersecting issues between the various contract types and LOE contracting typically result in government source selection teams focusing on the following five contract types: T&M, FFP LOE, CPIF, CPAF, and CPFF (including Cost-Only). The main reason for the focus on these specific contract types is that each presents substantial flexibility for the government in terms of changing the number of hours or the labor mix relatively easily during performance. The main difference between them is that the fixed-rate (T&M and FFP LOE) contract types lock in hourly rates in competition, when the downward price pressure is highest on the contractors, while the cost-reimbursement types do not contractually lock in hourly rates (although a CPIF contract forces the contractor to share some of the cost of underestimating its rates in performance).

Contractually locking in hourly rates has advantages and disadvantages. On the one hand, fixed rates allow the government to capture the effects of the downward price pressure caused by competition and apply them for the life of the contract. In performance, fixed rates also permit the government to adapt to unforeseen changes quickly and with a clear understanding of the precise price impacts of the change because the fixed rates will not vary. Essentially, the government only needs to determine the total number of hours and labor mix that it requires to address the changed conditions, and then it can easily calculate a fixed price for that change.

On the other hand, fixed rates typically incentivize the contractors to pad the proposed fixed rates somewhat to account for a variety of risks associated with the deal. While competitive pressure often counterbalances this, the padding incentive can lead to a somewhat higher cost for the government, compared with cost-reimbursement type contracts. Moreover, fixed rates also prevent the government from capturing any benefit when there are decreases in the cost of providing the services; such decreases, however, are generally rare, and, if significant, the government could opt to recompete the work to receive updated fixed-rate pricing. Finally, fixed rates also incentivize the contractor to provide the least expensive personnel that meet all of the given labor category’s minimum qualifications, which can result in a race to the bottom of the category. Agencies can minimize this particular race-to-the-bottom risk, however, by defining a greater number of more narrowly spaced labor categories in the solicitation. This option limits the range of salaries that apply to any single labor category, which reduces the incentive to provide the absolute lowest-cost personnel.

Overall, in terms of performance incentives, the particular programmatic goals, risk tolerance, and funding will all play into the government’s selection of contract type, as will the typical practices of the industrial base supporting that program. FFP and CPFF are good starting points for making comparisons between these incentives.

Recommendation: Although a broad range of potential contract types may apply to specific situations, for LOE efforts, government source selection teams should largely focus on fixed-rate or cost-reimbursement contract types. FFP LOE/T&M and CPFF are good starting points for comparing options because their mechanics are easy to understand and administer and provide meaningfully different cost risk allocations. In making a final decision, however, government teams should also consider how this choice will impact their evaluation schemes.

2. Evaluation Considerations for Various Contract Types

Although performance incentives are an important consideration in selecting a contract type, choosing a cost-reimbursement type contract will significantly complicate a procurement’s proposal evaluation phase, since FAR 15.404-1(d) requires the government to conduct a cost realism analysis for all cost-reimbursement contracts. Compared to a simple price reasonableness analysis for a fixed-price type effort, conducting a cost realism analysis substantially increases the volume of information the solicitation must request from offerors, the complexity of evaluating the much larger record, and the potential areas a protester could challenge in the eventual evaluation record. As such, government source selection teams should carefully consider what performance advantages they hope to capture with a cost-reimbursement contract because, compared to a fixed-price effort, there are essentially no advantages to a cost-reimbursement contract in the evaluation phase.

For fixed-price type contracts, the FAR only requires agencies to perform a price reasonableness analysis to ensure the agency is not paying too high a price. Generally, price reasonableness evaluations are quick and easy because “[n]ormally, competition establishes price reasonableness.” Where the government expects adequate price competition, the solicitation only needs to ask offerors to provide topline prices for each contract line item or labor-category rate, without asking for any additional, lower-level cost data from the offerors. Although FAR 15.404-1(b)(2) provides a variety of techniques, the only required analysis is a simple top-level comparison of the prices between offerors without any further scrutiny or adjustment of the proposed prices. This analysis does not require a lot of information or time from the offerors, and, in turn, the agency can quickly determine whether a proposed price is fair and reasonable.

Additionally, a price reasonableness evaluation can be very difficult to challenge in a protest. If an agency solicitation clearly sets forth how it will evaluate the total price, compares the total prices received either to each other or to historical prices, and documents this analysis, the Government Accountability Office (GAO) generally will find the agency’s analysis reasonable. Moreover, even when the proposed awardee’s price is significantly higher than the other offerors’ prices and the government cost estimate, GAO permits the agency to consider the price relative to the particular approach taken by the offeror. Thus, price reasonableness analyses present much lower protest risk to the government. In fact, one of the very few strategies that can gain traction with GAO is for the protester to assert that the agency failed to perform a price realism analysis, which is a distinct concept from a price reasonableness analysis. Since price realism analyses are not required for award of fixed-price type contracts (or any other contract type for that matter), an agency can generally avoid this protest issue by expressly stating in the solicitation that it will not conduct a price realism analysis. Overall, a simple and straightforward price reasonableness analysis provides few avenues for a protester to challenge the government’s evaluation of proposed prices.

Compared to fixed-price contracts, the required evaluation landscape is very different for cost-type contracts because the government must conduct a cost realism analysis of an offeror’s proposed costs before it makes an award. As background, in a cost realism analysis, the agency evaluates all (or nearly all) of each offeror’s proposed cost elements against available substantiating data to determine whether each of the proposed cost elements is realistic for performance. Without a cost realism analysis, an offeror could propose unrealistically low cost elements to secure an award, and then, in performance, the agency would have to pay the contractor’s significantly higher incurred costs under the cost-reimbursement contract. Where an offeror proposes any cost element at a value lower than the available substantiating data or fails to provide substantiating data to support a proposed cost element, the government’s cost realism analysis must adjust that element upward or identify cost risk associated with that flaw.

Cost realism analyses are extremely detailed and can implicate hundreds of individual cost elements across both the prime contractor and its subcontractors within every proposal. As such, the government source selection team must carefully draft the solicitation to require all of the proposed and substantiating data that it requires from the offerors to complete this complex analysis. Collecting the data necessary to substantiate proposed cost elements can take offerors months, and, even then, it can be incomplete or inconsistent with other parts of the proposal. Moreover, the government must document every aspect of this highly detailed analysis in reports that can balloon to hundreds of pages. It can take the agencies months or sometimes even years to evaluate all of the data and correctly document its findings. As such, a cost realism analysis vastly increases the amount of information that offerors must provide, which, in turn, vastly increases the amount of agency time and effort it takes to sift through that data, evaluate it, and document that analysis.

Furthermore, conducting a cost realism analysis substantially increases the risk of protest. Protesters regularly challenge the adjustments the government made to their own proposed costs, the magnitude of the adjustments the government made to the awardee’s costs, and alleged missing adjustments to the awardee’s proposed cost. In fact, having to conduct a cost realism analysis increases the risk of protest loss to such a degree that GAO regularly includes “flawed cost realism analysis” amongst its top four reasons for the government losing a protest; this figure does not include the large number of corrective actions that also result from flawed cost realism analyses. Furthermore, litigating cost realism issues can be highly complex, since it potentially involves guiding the arbiter through those hundreds of cost elements, which are scattered across dozens of disparate spreadsheets, to show that the government reasonably evaluated the offerors’ submissions and properly calculated their total evaluated costs. This can quickly lead to confusion for even the most skilled advocates. Thus, overall, fixed-price and fixed-rate contract types are far superior to cost-reimbursement contracts from an evaluation perspective: they are faster to award, require substantially less evaluation work, and present substantially lower protest risk.

Recommendation: From an evaluation perspective, government source selection teams should favor FFP LOE or T&M contracts over cost-reimbursement type contracts. If business considerations lead the government to selecting a cost-reimbursement contract type, government source selection teams should carefully consider the cost-realism evaluation techniques and best practices discussed in this article to minimize the complication and work associated with conducting a defensible cost realism analysis.

B. Describing the Government’s Contractor Workforce Needs

One of the defining features of LOE services contracting is that it involves the acquisition of people’s skills for a specific duration of time. Although these skills could range from engineering services to truck driving services, the fact remains that LOE services contracting deals with people and their skills, as opposed to things and their features. Additionally, “[i]t is a fundamental principle of government procurement that a contracting agency[’s solicitation] must provide a common basis for competition” that allows for an “apples-to-apples” comparison of the offerors.

To provide all prospective offerors with this critical “common basis for competition” within the people/skill-centric world of LOE services contracting, government agencies typically define their LOE services needs using two related, but distinct concepts: the total number of hours and labor mix. Additionally, many agencies also opt to include a third concept, key personnel, to further refine their staffing requirements with respect to a subset of contractor personnel with highly specialized skills. In each case, the government’s choices about how to incorporate these three concepts into their solicitations will have wide-ranging impacts on the speed and defensibility of the source selection decision, as well as a meaningful impact on the business value of the resulting award.

1. Total Hours: An Essential Element of Any LOE Services Competition

Although the FAR does not independently define the term “level of effort,” there is little doubt that, in applying this term, agencies consider the total number of hours on each contract line item number (CLIN) to be a material requirement of an LOE service contract. GAO has likewise acknowledged the importance of evaluating LOE service contracts using a similar labor hour baseline, finding that the Army could not reasonably compare offerors in an LOE competition without a common labor hour baseline. Furthermore, offerors can exploit ambiguities in the total required level of effort to artificially reduce their proposed cost/price (by offering fewer hours) or to artificially inflate their performance value under technical or non-price evaluation factors in the competition (by claiming to get more work done). In either case, these bidding strategies limit or preclude the government’s ability to evaluate the offerors on an apples-to-apples basis. As such, within the LOE environment, it is critical to provide all offerors a common understanding of the total number of hours required to perform the effort.

Specifying a total number of hours per contract period is not typically complex; it is often as simple as stating a specific number of hours for each CLIN. For example, see Figure 2.

Figure 2. Department of Navy RFP (2019)

Agencies do, however, sometimes apply variations on the theme of simply listing a total number of hours. Typically, these variations fall into two major categories: using units other than hours or providing a range of hours. In both cases, the acid test for whether the solicitation’s description of the total level of effort is acceptable is whether it provides all offerors a “common basis for competition.”

a. Defining Level of Effort in Units Other Than Hours

In some situations, procuring agencies choose to specify their total required level of effort in units other than hours. For instance, agencies often describe their LOE requirements in terms of “Full Time Equivalents” or “FTEs.” Essentially, where an agency specifies its total level of effort in FTEs, it is specifying how many people it wants to show up at the jobsite for the year. Of course, this is a measure of the total effort that the government wants, but it introduces one additional complexity into the agency’s procurement—the solicitation must now define how many hours per year the government expects a person to work to be an FTE. Unfortunately, the contractor community has no consistent definition of how many hours per year constitute “full time.” While many firms consider a year to be 1,920 hours, others use 2,080 or 1,880 as the basis for their full year. These differences can materially change the total number of hours the contractors estimate (i.e., 2,080 is approximately 10.6% more hours than 1,880), which can call into question whether the solicitation provided a common basis for competition. Therefore, if the agency’s solicitation specifies its required total level of effort in FTEs, it is critical that it then also provides a definition of how many hours it includes in an FTE to provide all offerors a common basis for competition.
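The materiality of the FTE definition is easy to see arithmetically. The sketch below uses a hypothetical ten-FTE requirement to show how the common "full time" definitions diverge:

```python
# Converting an FTE-based level of effort into hours under the three
# common "full time" definitions; the 10-FTE requirement is hypothetical.
ftes = 10

for hours_per_fte in (1880, 1920, 2080):
    print(f"{ftes} FTEs at {hours_per_fte} hr/yr = {ftes * hours_per_fte:,} hours")

# Relative spread between the highest and lowest common definitions.
spread = (2080 - 1880) / 1880
print(f"2,080 is {spread:.1%} more hours than 1,880")  # 10.6%
```

Even a modest requirement diverges by 2,000 hours per year depending on the definition, which is why the solicitation must state the conversion explicitly.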

Furthermore, agencies can also complicate their description of their total required level of effort by specifying the required total effort by team. For instance, an agency may require “1 Agile Development Team’s effort for 26 sprints.” Although the term “hours” does not appear in this callout, this is also a measure of total effort. This alone, however, is an incomplete description of the required effort because it does not provide critical information to determine the required total level of effort; specifically, it omits the number of people on the team, the hours each team member is required to perform per sprint, and the duration of each sprint. Without this information, one offeror could present a three-person team with full-time personnel for a four-week sprint, while another offeror could provide a twelve-person team with six full-time and six half-time personnel on a two-week sprint. In evaluating each team, the first offeror would propose a total level of effort of 443 hours, while the second would propose 665 hours—approximately fifty percent more than the first. Without team size and sprint duration data, the offerors lack a common basis for competition, and the government cannot conduct an apples-to-apples comparison of them. Therefore, as with FTEs, it is critical that the agency provide sufficient data for offerors to clearly understand the total number of required hours in an LOE service contract, even if the agency chooses to specify those hours using some other units.
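
The sprint arithmetic above can be reproduced with a short sketch. One caveat: the 1,920-hour work year (about 36.9 hours per week) is an assumption not stated in the text, but under it the article's 443- and 665-hour totals correspond to three and nine full-time equivalents, respectively.

```python
# Sketch of the sprint-team arithmetic. HOURS_PER_WEEK assumes a 1,920-hour
# work year spread over 52 weeks; this basis is an assumption, not taken
# from the solicitation examples in the text.

HOURS_PER_WEEK = 1920 / 52  # assumed full-time basis, ~36.9 hours/week

def sprint_hours(ftes: float, sprint_weeks: int, sprints: int = 1) -> int:
    """Total effort (rounded hours) for a team across one or more sprints."""
    return round(ftes * HOURS_PER_WEEK * sprint_weeks * sprints)

team_a = sprint_hours(ftes=3, sprint_weeks=4)  # three full-time people, four-week sprint
team_b = sprint_hours(ftes=9, sprint_weeks=2)  # nine full-time equivalents, two-week sprint
print(team_a, team_b, f"{team_b / team_a - 1:.0%}")  # 443 665 50%
```

Because the solicitation's callout fixes none of these three inputs (FTEs, weekly hours, sprint length), two fully compliant proposals can differ by half again as much total effort.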

Recommendation: Agencies should avoid unnecessarily complex descriptions of the total number of hours under the contract. If possible, the agency should specify the total required level of effort in hours, instead of complicating the solicitation with other units, which require more data points and invite mathematical errors for both contractors and the agency. Where agencies use other units, they should ensure that they provide clear conversion factors in the solicitation to translate clearly and unambiguously those other units to hours.

b. Defining Level of Effort with a Range of Hours

Some agencies’ LOE services clauses contemplate some variation in the maximum number of hours required under the contract—such as providing a range of hours instead of a fixed value. Although these approaches typically focus on defining what in-scope post-award hours increases are not subject to fee adjustment, they can complicate the question of what the maximum number of hours under the contract is for proposal evaluation purposes.

Where agencies use a variable hour clause, it is important that the solicitation clearly lay out an evaluation scheme that removes any ambiguity regarding the total number of hours that the offerors should propose. Typically, resolving this ambiguity is fairly straightforward. For instance, the agency could include the following statement in Section L of its solicitation: “Offerors shall propose the hours listed for each contract line item in Section B without deviation.” Moreover, the agency would also have to ensure that this statement aligns with the evaluation scheme in Section M, where it may impact both cost and non-cost/price evaluation factors. Alternatively, the agency could require all offerors to bid to some other percentage of Section B hours. The critical question is simply whether the solicitation is clear in providing a common basis for competition by specifying which set of hours the agency will use for its evaluation. That said, Section B generally takes precedence under the Order of Precedence clause, so it is likely the best candidate to present as the government’s total hour requirement for evaluation.

Overall, specifying a total required level of effort in an LOE service contract is necessary to provide a common basis for competition for all offerors and to create a common yardstick against which to evaluate all offerors on an apples-to-apples basis. It is not, however, sufficient. As the following section explains, the solicitation must go beyond simply describing how many hours it needs and must also describe the types of people/skills that it requires for those hours.

Recommendation: Agencies should use caution when using a range of hours. Ideally, the agency should fix the total number of hours by contract line item in Section B, instruct all offerors to use these Section B hours in developing their proposals, and trace all evaluation schemes back to these Section B hours.

2. Labor Mix: A Powerful Tool for Providing a “Common Basis for Competition”

In addition to providing a total number of hours, GAO’s “common basis” standard also demands that the solicitation contain either 1) “a sufficiently detailed description of the work . . . to allow offerors to intelligently propose” or 2) a labor mix to give them a common target to shoot at. Without such guidance, one company could propose 10,000 hours of performance by Ph.D.-degreed nuclear physicists, while another could propose 10,000 hours of performance by high school seniors. Regardless of which mix was more appropriate to perform the solicitation’s Statement of Work (SOW), there would be substantial differences between the skills and capabilities of these two labor forces, as well as the cost of each.

Although agencies could provide this guidance with a “sufficiently detailed description of the work,” many agencies choose to rely on broad SOWs in their LOE service contracting that are intentionally flexible in performance. Nevertheless, these broad SOWs are inherently open to various interpretations of how offerors should propose to staff the effort. While the flexibility of a broad SOW is often a substantial benefit to the agency in performance, the lack of a detailed description of the work to be performed limits the SOW’s ability to define the agency’s requirements in a way that meets GAO’s “common basis” standard. On the other hand, providing a government labor mix (mandatory or recommended) circumvents the difficult questions of how offerors should staff a broad SOW by giving all offerors a common starting point for bidding; it also creates a common yardstick the government can measure each of the offerors against.

In fact, providing a government labor mix in a solicitation provides the agency several meaningful benefits. First, it allows the agency to sidestep the hard work of narrowly tailoring the SOW, which can be quite time-consuming for complex services requirements. This decreases the agency workload and associated schedule delays during the requirements development phase of the procurement. Second, where the solicitation only provides the government labor mix “for evaluation purposes only,” this strategy preserves nearly all of the post-award flexibilities the government is seeking when it drafts a broad SOW. Third, where the solicitation contemplates the agency conducting a cost realism analysis, including a government labor mix in the solicitation greatly simplifies the government’s cost realism evaluation of an offeror’s proposed hours and labor mix cost elements. Finally, providing a government labor mix is a well-tested and reliable method for meeting GAO’s common basis standard in LOE service contracting, so the agency avoids the very fact-dependent and unpredictable litigation risk associated with relying on a narrowly tailored statement of work instead. As such, providing a labor mix, as compared to relying on a narrowly tailored SOW, decreases proposal preparation time, decreases agency evaluation time, and reduces the overall risk of protest loss for failing to meet GAO’s “common basis” standard.

a. Example of a Government-Defined Labor Mix with Labor Category Definitions

So, what does a government labor mix look like and what best practices should agencies follow when incorporating one into their solicitations? Basically, a government labor mix for each CLIN has two primary constituent elements: 1) a set of government-defined labor categories, and 2) a distribution of the required labor hours between those labor categories. The following is an example of a government labor mix section in a solicitation to show the concept; the following subsections explore these concepts in more detail:

Section L.X: Government Labor Mix

The Offeror’s proposed staffing shall comply with the Section B hours and the below mandatory labor mix. The Government will treat offers that fail to propose the required Section B hours as nonresponsive. If an Offeror’s proposal deviates from the mandatory labor mix, the Government will adjust the Offeror’s proposed labor mix to the solicitation’s mandatory labor mix, provided the deviation is minor and immaterial (e.g., rounding differences in the proposal). If the deviation is deemed material, the Government will treat the deviating proposal as nonresponsive as well. Moreover, the Government will not make any labor mix adjustments that would result in a downward cost adjustment.

[see PDF p. 22]

Section L.X.1: Government Labor Category Definitions

1) Engineering Personnel (Senior, Mid-Level, and Junior)

Senior Engineers must have:

A) a high school degree, or a GED, and more than twenty (20) years of relevant experience, OR

B) a bachelor’s degree in a relevant field and fifteen (15) years of relevant experience, OR

C) a master’s degree in a relevant field and ten (10) years of relevant experience.

Mid-Level Engineers must have:

A) a high school degree, or a GED, and ten (10) years of relevant experience, OR

B) a bachelor’s degree in a relevant field and five (5) years of relevant experience, OR

C) a master’s degree in a relevant field.

Junior Engineers must have:

A) a high school degree, or a GED, and three (3) years of relevant experience, OR

B) a bachelor’s degree in a relevant field, OR

C) a master’s degree in a relevant field.

2) Administrative Personnel (Senior, Mid-level, and Junior)

Senior Administrative Personnel must have at least eight years of relevant administrative experience.

Mid-level Administrative Personnel must have at least three years of relevant administrative experience.

Junior Administrative Personnel must have an Associate’s degree, or higher, or at least one year of relevant administrative experience.

The Offeror shall provide a mapping of any labor categories it, or one of its subcontractors, proposes in the Staffing Plan to the labor categories defined above. This mapping must include a description, similar in detail to the Government labor categories, of the requirements/qualifications associated with each labor category contained in the Offeror’s Staffing Plan, including labor categories proposed by subcontractors.
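
The example Engineering definitions above reduce to a mechanical qualification check, which is part of what makes hard-sided definitions easy to evaluate. The sketch below is illustrative only: the data structure, function, and degree-rank encoding are assumptions, and the Senior Engineers' "more than twenty (20) years" alternative is simplified to a twenty-year minimum.

```python
# Illustrative check against the example Engineering labor category
# definitions. Degree ranks: 0 = high school/GED, 1 = bachelor's,
# 2 = master's. Each tuple is one (minimum degree, minimum years)
# alternative; meeting any one alternative qualifies the individual.
# Note: the "more than twenty (20) years" alternative for Senior
# Engineers is simplified here to a >= 20 comparison.

ENGINEER_ALTERNATIVES = {
    "Senior": [(0, 20), (1, 15), (2, 10)],
    "Mid-Level": [(0, 10), (1, 5), (2, 0)],
    "Junior": [(0, 3), (1, 0), (2, 0)],
}

def qualifies(category: str, degree_rank: int, years: float) -> bool:
    """True if the person meets at least one alternative for the category."""
    return any(degree_rank >= min_degree and years >= min_years
               for min_degree, min_years in ENGINEER_ALTERNATIVES[category])

# A master's degree holder with ten years qualifies as Senior (alternative C)...
print(qualifies("Senior", degree_rank=2, years=10))   # True
# ...while a bachelor's holder with fourteen years does not (needs fifteen under B).
print(qualifies("Senior", degree_rank=1, years=14))   # False
```

Because every alternative is a pair of discrete minimums, the answer is always an unambiguous yes or no, which is precisely the property the "common basis" standard rewards.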

b. Government-Defined Labor Categories

In developing a government labor mix, agencies often begin by defining the types of people that they require in terms of skills, capabilities, education, and years of experience or seniority. In other words, they begin by defining a set of government-provided labor categories.

Given that the entire purpose of providing a government labor mix is to give all offerors a common basis for competition, it is imperative that the agency include its labor category definitions in the solicitation to ensure that all of the parties are thinking about the same types of people when the government discusses, for instance, “Senior Administrative Personnel.” Without an explicit definition, offerors are left to interpret what constitutes “Senior” in this context, which could lead to very different assumptions among offerors. Compounding this risk of misinterpretation is the fact that each individual contractor has its own unique set of internal labor category definitions to classify its employees. These contractor-specific labor categories may bear no resemblance to what the agency believes it needs, and there is virtually no clear standardization of these terms between offerors or within industries. Therefore, the agency must provide a clear set of labor category definitions in its solicitation to provide offerors a common basis for competition and to allow for an apples-to-apples comparison of offerors.

As the preceding example shows, these government-defined labor categories can be simple (such as minimum years of experience) or more complex (allowing different amounts of experience for differing levels of education). Despite this range of potential complexity, several best practices apply to drafting any government labor category definition to achieve the goal of presenting a clear and unambiguous lexicon of personnel. Additionally, several strategic decisions that agencies will make in defining their labor categories will affect how easy it is to develop the government labor mix and, ultimately, how easy it will be to evaluate the offerors’ staffing.

i. Overarching strategy for defining labor categories

In general, agencies should strongly prefer limiting themselves to a small number of easy-to-understand and broadly defined labor categories. This approach will reduce the complexity of developing a labor mix for the agency, the complexity of proposal preparation for offerors, and the complexity of proposal evaluation for the agency evaluators. Therefore, when defining the universe of government labor categories for a procurement, agencies should weigh three key strategic considerations. Specifically, agencies should 1) keep the number of labor categories as small as practicable; 2) simplify labor category definitions whenever possible; and 3) broaden labor category definitions whenever possible.

Keep the Number of Labor Categories as Small as Practicable: In an ideal case, an agency’s slate of government labor categories should be as small as possible to minimize the work for all parties. Nevertheless, it must also perform its twin jobs of giving all offerors a common understanding of the requirements and giving the agency the ability to conduct an apples-to-apples evaluation of the offerors.

In many cases, where the technical scope of the contract is fairly well defined, providing simple labor categories that generally align with meaningful breaks in the expected direct labor rates for the relevant industry is perfectly sufficient. For instance, in a business/financial management support contract, agencies should likely only define Senior, Mid-level, and Junior labor categories, since there is generally a steady increase in salary based on increases in seniority. There is no magic formula to this particular three-part division; instead, these divisions are a judgment call about how likely it is for an offeror to grossly misunderstand the government’s needs. That said, erring slightly on the side of too few labor categories is often the better bet. Having too many labor categories guarantees more complexity in verifying that all proposed personnel are properly classified, while having too few labor categories only slightly increases the risk that an offeror will misunderstand the requirement.

As the complexity of the effort grows, however, it will be important to differentiate between groups that get paid very differently. For instance, in an omnibus contract for IT engineering services, cybersecurity services, and administrative support, a simple Senior/Mid-level/Junior set of government labor categories presents a real risk of different companies interpreting the requirements differently. In fact, after taking into account competitive pricing pressure, it is likely that an offeror would propose a staffing approach that fills the Junior labor category with personnel from jobs with high salaries (such as IT engineering and cybersecurity), while filling the Senior category with personnel from jobs with lower salaries (such as administrative services). This bidding strategy would satisfy a basic three-category labor mix but would result in some offerors staffing the effort in the opposite manner than the agency likely intended (i.e., with senior engineers/cybersecurity personnel and junior administrative support). As such, agencies should consider delineating jobs that have materially different direct labor rates. In this example, the government may need to define five labor categories: Senior Technical, Mid-Level Technical, Junior Technical, Mid-level Administrative, and Junior Administrative. Such a set of government-defined labor categories would greatly decrease the likelihood that different offerors would interpret the government labor mix differently or opportunistically.

Agencies may also find it necessary to break out geographically specific labor categories to clearly describe their expected labor forces. Again, offerors may choose to staff Senior personnel as remote or in low cost-of-living locations, while providing Junior personnel for on-site support in higher cost-of-living areas to minimize their proposed cost. As with the job type example, this geographically diverse bidding strategy would meet a basic three-category Senior, Mid-Level, Junior labor mix, but could result in some offerors staffing the effort in the opposite manner than the agency likely intended (i.e., with senior personnel providing client-facing support and junior personnel providing offsite/remote support). Therefore, government source selection teams should consider the likely labor cost breaks for the required population of contractor personnel in drafting the government-defined labor categories.

Yet agencies should not go overboard with this subdivision approach. Agencies sometimes have dozens of overlapping labor categories, despite the fact that they all fall within the same basic skillsets and have the same general pay range. For example, in a solicitation for design services, an agency may have labor categories for senior mechanical engineers, senior electrical engineers, senior electronics engineers, senior systems engineers, senior logisticians, and senior test engineers. This breakdown may provide marginally more technical detail about the government requirement. Nevertheless, if the personnel performing these jobs are paid roughly the same salaries and are generally available in the labor market, this additional technical detail is likely not necessary for achieving the labor mix’s twin goals of describing a common basis for competition and permitting an apples-to-apples comparison of the offerors. Since this number of labor categories would certainly complicate the development and evaluation of proposals without furthering the goals of providing a labor mix, the agency in this example should work to simplify its set of government labor category definitions. For example, it could consolidate all of those jobs into a single Senior Engineer labor category. Overall, determining the appropriate number of government labor categories is an important strategic decision that agencies should critically consider when developing solicitations, and, in general, agencies should aim to reduce the total number of government-defined labor categories.

Simplify Labor Category Definitions Whenever Possible: Beyond avoiding too many labor categories, agencies should also work to keep their government labor categories simple. Overly complex or unnecessarily restrictive labor category definitions increase the complexity of evaluating proposals, and the latter can increase the litigation risk associated with a solicitation.

Simplifying government labor category definitions benefits all parties. Simple definitions make it easier for companies to understand whether their proposed employees meet the requirements, and make it easier for evaluators to confirm that they do. As such, in most cases, agencies should work to limit their labor category definitions to a minimum number of years of relevant experience and a minimum level of education.

In some cases, however, personnel with higher degrees can move up in seniority more quickly, so the government can consider defining multiple ways to meet a single government labor category. In such a case, agencies may want to ensure that their government labor category definitions keep people with similar salaries together by describing different avenues for personnel with bachelor’s degrees versus master’s degrees; nevertheless, agencies should add this complication intentionally and strategically, while generally aiming to present simple, clear definitions.

Broaden Labor Category Definitions Whenever Possible: Agencies should also work diligently to keep their government labor categories broadly inclusive of various experience types and qualifications. Typically, this means that the government labor categories should not be too specific. In fact, unnecessary specificity presents four distinct risks to the government: it can discourage competition; it can invite pre-award protests for unduly restrictive requirements; it can complicate the evaluation phase; and it can increase the post-award protest risk of awarding to a proposal that does not clearly meet the overly specific requirement.

First, unnecessarily specific requirements can signal to industry that the government is building the requirement for a specific offeror, which drives off potential offerors. For example, instead of defining a Senior Radar Technician labor category as having “five or more years of Naval radar repair experience,” agencies should consider defining the labor category more broadly to require “five or more years of military radar repair experience,” provided that the skillsets necessary to work on Army or Air Force radar systems are transferable to the Naval radar repair space. Broadening the inclusivity of the labor category definitions, where appropriate, generally increases the number of offerors that can bid on the work and signals to industry that the agency is seeking meaningful competition.

Second, unnecessarily specific labor categories invite offerors to protest the solicitation as unduly restrictive. If “military radar repair experience” will meet the government’s minimum needs, specifying “Naval radar repair experience” as a minimum requirement will, most likely, exclude several potential vendors inappropriately. If any one of these vendors challenges the requirement, the agency would find itself embroiled in heavily fact-dependent pre-award protest litigation throughout much of its proposal evaluation phase. Moreover, if the agency loses the protest, it risks having to request new or updated proposals and start its evaluations over. These delays are typically very problematic for programs.

Third, unnecessary specificity also increases the complexity of evaluating proposals for the agency. For instance, an agency should avoid defining a labor category around a single type of degree or subject matter. If an agency were to state that a Senior Electrical Engineer had to “have a Bachelor of Science degree,” it might well find its evaluators wrestling with the thorny question of whether a proposed individual’s Bachelor of Arts in Electrical Engineering meets a requirement for a “Bachelor of Science” degree. In actuality, the two degrees teach the same subject matter, but different schools variously classify the degree as a Bachelor of Arts or a Bachelor of Science. Instead, the agency should have defined its Senior Electrical Engineer labor category slightly more broadly as “having a bachelor’s degree in a relevant field.” Under this slightly broader definition, the agency would sidestep the evaluation confusion posed by the Bachelor of Arts title, as well as the litigation risk associated with it.

Finally, overly specific labor category definitions can complicate the government’s defense of an otherwise clear awardee in a post-award protest. For instance, if the government ultimately determines that a Bachelor of Arts in Electrical Engineering meets the requirement for a Bachelor of Science, a protester may well argue that the government should have considered this non-compliant degree to be an automatic deficiency for the awardee, and, therefore, the agency must reconsider its award. While a protester likely will not succeed on this argument alone, defending against such arguments saps critical litigation resources, such as agency counsel time to respond. Also, addressing several of these types of labor category definition arguments can greatly increase the complexity of the government filings, which can pull the arbiter’s focus away from the agency’s primary narrative or other more critical issues. As such, drafting broader labor category definitions favors the government by leaving some of the hard line-drawing questions up to evaluator judgment, as opposed to a specific turn of phrase in the solicitation. Litigating issues that turn on an exercise of evaluator judgment is generally much easier and more straightforward for the agency, since GAO typically affords the evaluators broad discretion in exercising their technical judgment and a protester’s “mere disagreement” with the government’s judgment cannot form the basis of a successful protest.

Recommendation: Agencies should keep their lists of government labor category definitions as small as practicable, as simple as practicable, and as broad as practicable.

ii. Tactical best practices for defining each labor category

Beyond the general guidelines in the previous section, agencies should also consider four other specific aspects of the labor category definitions they create, as these details can increase the clarity of the government-defined labor categories and simplify the evaluation. In addition to being simple and inclusive, each labor category should i) have discrete minimum qualifications, ii) have no maximum qualifications, iii) have no desired attributes, and iv) exist on a continuum without gaps.

Discrete Minimum Qualifications: One of the most critical best practices to achieving clear and unambiguous labor category definitions is to ensure that they have hard-sided minimums for each labor category. It should be crystal clear whether an individual qualifies or not. For instance, defining Mid-level Administrative Personnel as having “at least three years of relevant administrative experience” is hard-sided. Someone with 2.9 years of relevant experience is not Mid-level Administrative Personnel, while someone with 3.0 years of relevant experience is. Compare this to a squishier definition, such as Mid-level Administrative Personnel having “substantial relevant administrative experience.” This vague statement does little to clearly indicate whether any individual qualifies as Mid-level Administrative Personnel, which undercuts its ability to communicate a common basis for competition and seriously complicates the government’s ability to conduct an apples-to-apples comparison of the various offeror proposed labor mixes. As such, agencies should avoid using vague government-defined labor categories by defining them with hard-sided, discrete minimum qualifications.

Although simpler definitions are typically preferable, it is possible to have multiple minimums for any given labor category, provided those minimums are hard-sided, so that they clearly show whether or not an individual qualifies for a labor category. For instance, in the example above at Section II.B.2.a, an individual can qualify as a “Senior Engineer” in any of three conditions: “A) a high school degree, or a GED, and more than twenty (20) years of relevant experience, OR B) a bachelor’s degree in a relevant field and fifteen (15) years of relevant experience, OR C) a master’s degree in a relevant field and ten (10) years of relevant experience.” In each alternative, this definition provides clear minimum qualifications in terms of years of experience in conjunction with a degree. Therefore, while more complex, this example also follows the best practice of providing discrete minimums.
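
The 2.9- versus 3.0-year example above reduces to a single unambiguous comparison, which is the essence of a hard-sided minimum. This is an illustrative sketch; the three-year threshold comes from the sample definition, while the function and constant names are assumptions.

```python
# Sketch of a hard-sided minimum: "at least three years of relevant
# administrative experience." The test is a single comparison, so any
# individual either qualifies or does not, with no evaluator judgment needed.

MIDLEVEL_ADMIN_MIN_YEARS = 3.0  # from the sample definition in the text

def is_midlevel_admin(relevant_years: float) -> bool:
    """Hard-sided test for the Mid-level Administrative Personnel category."""
    return relevant_years >= MIDLEVEL_ADMIN_MIN_YEARS

print(is_midlevel_admin(2.9))  # False: falls just short of the minimum
print(is_midlevel_admin(3.0))  # True: meets the minimum exactly
```

A vague alternative such as "substantial relevant administrative experience" cannot be reduced to a comparison like this, which is exactly why it fails to communicate a common basis for competition.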

No Maximum Qualifications: Agencies should avoid applying any maximum qualifications to any labor category. For instance, agencies should not define Mid-level Administrative Personnel as having “between three and eight” years of experience. The risks here are two-fold for the agency. First, if the agency defines the maximums inappropriately, these maximums can introduce problematic undefined gaps between labor categories, as discussed in more detail below. Second, applying maximum qualifications for a labor category creates a situation in which offerors could be precluded from offering more senior personnel to fill relatively junior labor categories. It can also force offerors to propose staffing changes in the middle of performance to comply with the government’s labor mix because their employees gain more experience through performance. For example, if an offeror proposes an individual with seven years of experience in the base year against a labor category with a maximum of eight years, then that individual will grow out of that labor category in the next performance year. Thus, the maximum qualification could force the offeror to swap out that individual in the later years of its staffing plan to comply with the government labor mix. These types of maximum qualification issues typically arise because a company does not currently employ sufficiently junior personnel to meet the government’s maximum labor category requirements or because one or more of its personnel will accrue so many years of experience over the course of performing the contract that they come to exceed their labor category maximum.
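
The "growing out" problem is simple arithmetic, as this sketch shows. Two assumptions are worth flagging: the three-to-eight-year band comes from the text, but treating the cap as exclusive ("fewer than eight years") is an interpretive assumption made so that the seven-year employee ages out in the next performance year, as in the example above.

```python
# Sketch of the "growing out" problem with a capped labor category band.
# The cap is modeled as exclusive (fewer than eight years), an assumption,
# so that a seven-year employee ages out after the base year.

MIN_YEARS, MAX_YEARS = 3, 8  # hypothetical capped band from the text

def in_band(years: float) -> bool:
    """True while the individual still fits the capped labor category."""
    return MIN_YEARS <= years < MAX_YEARS

start_years = 7  # experience proposed in the base year
for performance_year in range(4):  # base year plus three option years
    years = start_years + performance_year
    print(f"Year {performance_year}: {years} yrs experience -> qualifies: {in_band(years)}")
# The individual exceeds the cap after the base year, forcing a swap-out
# in every later year of the staffing plan.
```

With a minimum-only definition, the same loop would return True in every year, so no mid-performance staffing change would ever be needed to stay within the labor mix.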

The work to resolve these types of “overqualification” questions wastes valuable time and resources for both the offeror and the government; additionally, the resolutions available to the government often run counter to the government’s interests in the procurement. Typically, the government values (or at least tolerates) a richer labor mix under the non-cost/price evaluation factors and can consider this value against the potentially higher cost/price impacts in its best value determination. Similarly, the government typically discourages otherwise unnecessary personnel changes over the course of performance, so proposing to change out personnel as they exceed their labor category maximums generally introduces a level of risk to the staffing plan for all but the most cost-conscious government evaluators. Including labor category maximums can also undercut the government’s interest in having robust competition. Specifically, in practice, competitive pressures under the cost/price evaluation factor limit the instances in which an offeror would want to propose more senior, more expensive personnel to fill a more junior role. Therefore, where offerors choose to propose more senior personnel in a lower labor category, this choice is likely the result of the contractor lacking sufficient junior personnel available to perform the work. As such, particularly for smaller businesses, precluding the use of more senior personnel in more junior roles may seriously limit a company’s ability to provide a proposal that meets the government labor mix.

Finally, where a solicitation includes maximum qualifications in a cost-reimbursement contract, agencies may also find themselves considering downward cost adjustments to adjust an offeror’s richer proposed labor mix back to the more junior government labor mix. Since the purpose of cost realism is to protect the government against unsubstantiated claims of cost savings, agencies should avoid making downward cost adjustments in nearly all situations.

Overall, maximum qualifications for government labor category definitions create proposal and evaluation issues that are best avoided to increase the speed of the evaluation and the overall clarity of the evaluation record. As such, agencies should define labor category minimums, but not include maximums. This results in a set of labor category definitions in which everyone who qualifies for a more senior labor category also meets the definition for each of the more junior labor categories within the same type of job. This also avoids creating any undefined gaps between government labor categories and allows offerors to assign more senior personnel to lower government labor categories, provided that, in cost-reimbursement contracts, those personnel costs are clearly traceable to their proposed cost.

No Desired Attributes: Agencies should also avoid encumbering their government labor category definitions with desired attributes. There are two primary reasons for this: first, it does not further the fundamental purpose of providing labor category definitions. If used without a clear minimum, these desired attributes are no better than the vague “substantial experience” example above. Offerors will not be able to unambiguously determine whether someone with three years of relevant experience qualifies for a labor category that “desires five years of relevant experience.” Depending on the evaluators, that individual may or may not meet the requirement; this ambiguity undercuts the critical need to provide a common basis for competition.

Second, even if these desired attributes are paired with an explicit minimum, the fungible nature of non-key contractor personnel in the LOE service contracting environment means that the government will not be able to contractually lock in these desired attributes. As such, desired non-key requirements of any kind, including those tied to government labor category definitions, present evaluators with serious questions about whether to award strengths for meeting those desired attributes when the contractor is contractually permitted to provide minimally qualified personnel without the desired qualifications at award. At the same time, declining to credit a proposal that claims to meet a desired attribute will increase the litigation risk of awarding to another offeror. Therefore, it is best to avoid this conundrum entirely by listing desired qualifications in the solicitation only for proposal elements that the government can lock in contractually at award. In other words, agencies should avoid including desired attributes in their government labor category definitions for non-key personnel.

A Continuum That Starts at Entry Level Without Gaps: Agencies should also try to have a clear continuum of labor categories within a job area from essentially entry level up through the most senior labor category (that does not include a maximum) without any gaps. This limits the instances where the government evaluators must consider how to view personnel who do not meet any of the government labor category definitions. Of course, the agency’s solicitation does not need to include each of these categories; the goal is simply to provide a continuum of labor categories that grows out of a catch-all category for personnel who do not meet more senior requirements.

In the example in Section II.B.2.a, the Junior Administrative Personnel definition meets this need for a catch-all category, although not perfectly. It describes a Junior Administrative Personnel as having “an Associate’s degree, or higher, or at least one year of relevant administrative experience.” This definition provides a very wide aperture to capture personnel with very basic entry-level qualifications. It does not perfectly apply the best practice, because it is not a true catch-all, but, in the context of the labor market associated with this solicitation, this labor category definition functions as a de facto catch-all. Therefore, even if an offeror proposed a “Junior Engineer” with an Associate’s degree and no years of experience, the government evaluators could still classify this person as a Junior Administrative Personnel. This reclassification would allow the evaluators to treat the individual’s noncompliance with the Junior Engineer category as a proposed labor mix deviation (i.e., swapping the Junior Engineer hours for Junior Administrative Personnel hours) as opposed to having to find that the offeror was nonresponsive because this individual failed to meet any of the government-defined labor categories. As such, including a continuum with an entry-level catch-all can help resolve questions of whether a proposal is deficient for proposing a non-key individual that does not meet any government labor category definition. Therefore, agencies should work to include a continuum of labor categories from entry-level up to the most senior without gaps or maximums.

Recommendation: Agencies should draft each of their government labor category definitions to a) have discrete minimum qualifications, b) have no maximum qualifications, c) have no desired attributes, and d) exist on a continuum from an entry-level catch-all to the most senior without gaps.

iii. Relationship between government-defined and contractor labor categories

Government-defined labor categories need not align with, or even resemble, the various potential offerors’ labor category definitions. Nevertheless, particularly for cost-reimbursement contracts, agencies should require a clear mapping to the government-defined labor categories of any company labor categories that the prime or its proposed subcontractors include in a proposal. This mapping ensures that the government evaluators can clearly understand the various contractor labor categories in the offerors’ proposals and can trace the proposed labor categories to the government labor mix in the solicitation. Furthermore, many contractors rely on internal payroll screenshots marked only with contractor labor category designations to substantiate their proposed direct labor rates when proposing on cost-reimbursable contracts. In these types of evaluations, it is even more critical that the government evaluators can clearly trace between the various proposed contractor labor categories and the government-defined labor categories to connect the contractor’s substantiating data to its proposed costs. Agency evaluators generally do not want to guess about such connections and typically view a lack of clarity as increased risk. As such, agency solicitations should ask for a clear mapping of any proposed contractor labor categories, for both the prime contractor and any proposed subcontractors, to the government-defined labor categories.

iv. Key personnel are different and distinct

As discussed infra in Section II.B.4, key personnel are a concept distinct from government labor category definitions. Government source selection teams should avoid conflating any aspect of the government-defined labor categories with the key personnel positions. In large part, this is to avoid accidentally porting minimum requirements from the government-defined labor categories onto the key personnel positions, which would defeat the recommended strategy of keeping the key personnel positions defined only by “Desired Attributes.” In fact, if an agency elects to use key personnel, it should explicitly and consistently describe its government-defined labor categories as its “non-key government-defined labor categories” and avoid any reference to these in its key personnel descriptions. The agency should also include a line for “Mandatory Key Personnel” in its labor mix table that is distinct from the non-key labor categories. Overall, agencies should diligently avoid blurring or confusing the line between key and non-key personnel in any way.

3. Government-Defined Labor Mix

Although drafting the government-defined labor categories calls for some detailed, strategic thought, once it is done, developing and presenting the government-defined labor mix is much simpler.

Essentially, the solicitation must provide a clear connection between the government-defined labor categories and the total level of effort. The government source selection team typically provides this connection using a table listing the government-defined labor categories along with hours for each labor category that add up to the applicable Section B hours. The agency develops the data in this table after considering current performance data on existing predecessor efforts, known upcoming changes in workload, and the potential for unforeseen changes in workload. Although agencies should endeavor to provide a clear picture of the labor force they ultimately need, most LOE services vehicles do not lock the agency into the solicitation’s labor mix during performance; instead, this mix is generally for evaluation purposes only, giving all offerors a common understanding of the work so that the agency can compare offers on an apples-to-apples basis. As such, agencies should not invest substantial resources in perfecting their solicitation’s labor mix if they can get a reasonably accurate (but not perfect) government labor mix more quickly and easily.

a. Methods to Present Labor Mix

Agencies use two main methods to present these tables: hours and percentages of the total hours. While both methods work, presenting the hours by labor category is generally clearer than using percentages. When using percentages, offerors must make more calculations to convert the percentages to hours. These additional calculations increase the risk of a calculation error, potentially creating a situation in which the offeror is no longer bidding to the total hours in Section B, which would make the proposal nonresponsive and unawardable. Also, if these labor mix percentages apply to multiple contract line items with different total numbers of hours, the number of hours per labor category also varies by line item, which introduces further risk of calculation errors. This is particularly true where the agency includes a list of “Mandatory Key Personnel” in contract line items with different total numbers of hours because, in general, the number of key personnel does not change year to year; presenting a constant percentage of an annual hours value that changes year to year will therefore result in differing levels of key personnel support year to year, which agencies typically do not want. As such, although providing either hours or percentages of total hours to define the government labor mix will work, there is less room for miscommunication or miscalculation if the government simply provides this information in hours, as opposed to percentages of total hours.
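
The rounding arithmetic behind this risk can be illustrated with a short sketch; every labor category name, percentage, and hour total below is hypothetical and not drawn from any actual solicitation. When an agency publishes percentages, each offeror must convert them to whole hours, and the independently rounded figures may no longer sum to the Section B total:

```python
# Illustrative sketch (hypothetical figures): converting a percentage-based
# labor mix to whole hours can fail to sum to the Section B total because
# each labor category is rounded independently.

SECTION_B_HOURS = 47_123  # hypothetical total LOE for one contract line item

labor_mix_percentages = {
    "Senior Engineer": 0.167,
    "Mid-Level Engineer": 0.333,
    "Junior Engineer": 0.292,
    "Junior Administrative Personnel": 0.208,
}

# An offeror converting the published percentages to hours must round each
# labor category to a whole number of hours.
rounded_hours = {category: round(pct * SECTION_B_HOURS)
                 for category, pct in labor_mix_percentages.items()}

total = sum(rounded_hours.values())
print(f"Sum of rounded hours: {total} (Section B requires {SECTION_B_HOURS})")
print("Bids the required Section B hours:", total == SECTION_B_HOURS)
```

In this hypothetical, the independently rounded hours overshoot the Section B requirement by one hour, so an offeror that fails to reconcile the difference is no longer bidding the required hours. Publishing hours directly eliminates this conversion step entirely.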

Additionally, if time-phasing matters to the agency, the government labor mix should reflect that time-phasing to avoid offerors proposing to front-load the effort. Proposing a front-loaded staffing profile would allow the contractor to claim it would incur all of its proposed labor costs in the earliest performance period, which would limit the effects of labor cost escalation on its proposed costs. In many cases, however, this type of front-loading is unrealistic given how the government actually intends to utilize the services: consistently from year to year. Therefore, the government should specify the time-phasing of its needs where it connects the government-defined labor categories to the effort required in those categories. Fortunately, this time-phasing is easy to specify for LOE service contracts, since many internal government pressures already encourage agencies to contract for these services on a twelve-month cycle. In these cases, agencies typically provide a simple table that lists all of the labor categories and provides the hours for each in every separately identified contract line item; Section II.B.2.a provides an example of such a time-phased table.

Additionally, government source selection teams must consider how much flexibility they want to provide offerors in bidding to the government labor mix. At the extremes, agencies could provide offerors no flexibility (i.e., any error or deviation is nonresponsive), or they could allow offerors unfettered flexibility to justify a different (typically lower-cost) labor mix. Each of these extremes presents risk for the government.

On the one hand, agencies may opt to provide no labor mix flexibility in the hopes of ensuring a level competitive playing field. This extreme inflexibility comes with some risks. With no flexibility, otherwise strong offerors that make small, reasonable mistakes, such as incorrectly coding an individual as Mid-level versus Senior, become nonresponsive and unawardable. Eliminating these offerors from the competition wastes those offerors’ proposal preparation costs and the evaluators’ time by removing potentially valuable competitors from consideration.

On the other hand, agencies may believe that greater flexibility increases innovation in meeting the government’s requirement. With a high degree of flexibility, however, agencies often find that they require substantially more information from offerors to justify a divergent proposed labor mix and that often the offeror’s justifications are too weak to support a finding that the proposed deviation is realistic. Instead, agencies find themselves having to make cost realism adjustments and/or identify non-cost risks associated with insufficiently justified labor mix deviations. Documenting these findings is time-consuming and can increase the government’s protest risk by introducing new issues for a disappointed offeror to challenge.

b. Best Practices

Instead of these two extremes, agencies should consider applying a slightly flexible government labor mix that allows for some mistakes, but generally discourages offerors from proposing or benefiting from labor mix deviations. For instance, an agency could include the following language in its solicitation:

The Offeror’s proposed staffing shall comply with the Section B hours and the above mandatory labor mix. The Government will treat offers that fail to propose the required Section B hours as nonresponsive. If an Offeror’s proposal deviates from the mandatory labor mix, the Government will adjust the Offeror’s proposed labor mix to the solicitation’s mandatory labor mix, provided the deviation is minor and immaterial (e.g., rounding differences in the proposal). If the deviation is material, the Government will treat the proposal as nonresponsive as well. Moreover, the Government will not make any labor mix adjustments that would result in a downward cost adjustment.

Using such language, the government can maintain a clear, level playing field while not having to remove offerors for minor errors. Moreover, this approach can discourage buy-in offers through the substantial cost realism adjustments or non-cost risk findings that accompany such proposals.

Recommendation: Agency solicitations should provide the government labor mix in hours per labor category for each LOE contract line item. Moreover, agencies should discourage labor mix deviations generally, while retaining sufficient evaluation flexibilities to deal with minor proposal flaws or miscalculations.

4. Key Personnel: A Solicitation Option with Some Benefits and Greater Risks

In procuring LOE services, many agencies elect to further refine their staffing requirements by defining a subset of personnel with highly specialized skills as key personnel. Contractually, what distinguishes key personnel from other people on the contract is that these key individuals are material terms of the contract. Because key personnel are material, “[i]n negotiated procurements, a proposal failing to conform to the material requirements and conditions of the solicitation should be considered unacceptable.” As such, GAO reviews key personnel under its fairly strict materiality case law, and key personnel challenges often figure prominently in protests.

Accordingly, government source selection teams should carefully weigh the costs and benefits of choosing to designate key personnel in their solicitations. Importantly, these teams should remember that, while requiring key personnel is very common, this approach is a choice, not a mandatory element of LOE service contracting.

The primary benefit of requiring key personnel is that this approach allows programs to specify with much greater detail what attributes they want the contractor’s leadership or top experts to have. Requiring key personnel also gives the evaluators a chance before award to vet the specific personnel the agency can expect to receive in contract performance and compare their values among competitors. In general, government consumers of LOE services greatly value the enhanced control that requiring key personnel gives them over what top talent they receive.

Despite these benefits, including mandatory key personnel requirements in a solicitation also creates significant risks to the award decision. In particular, there are three major categories of risk that key personnel present; these situations occur when 1) the offeror proposes insufficient key personnel; 2) the offeror’s proposed key personnel become unavailable before award; and 3) the offeror fails to propose an individual for a key personnel position.

In each of these areas, flaws with the key personnel create a situation in which the offeror has failed to meet a material term of the solicitation, and, thus, its proposal is unawardable. As such, protesters typically seek out flaws in the awardee’s key personnel because they can quickly disqualify an otherwise strong proposal from award. Nevertheless, while each flaw is related to the materiality of the key personnel, each type of flaw requires the government to employ slightly different solicitation and evaluation approaches to avoid and overcome them.

a. Insufficient Key Personnel

The first major risk presented by the material nature of key personnel is that the government must find a proposal unacceptable if any proposed key personnel do not meet all of the applicable minimum requirements. Despite this risk to both offerors and the government, agencies often hamstring their evaluations by burdening the key personnel positions with long lists of mandatory requirements. Not only can these requirements be unduly restrictive, which drives away viable competitors and increases protest risk, but they become acid tests for the awardability of a proposal. Where evaluation teams do not recognize this fact and unreasonably accept key personnel who fall below the stated minimum requirements, such evaluation teams leave a powerful weapon for protesters to challenge the award. Therefore, burdening key personnel with a long list of mandatory requirements can damage the procurement more than it helps.

Additionally, evaluating key personnel mandatory requirements presents evaluators with challenging line-drawing problems. As described in Part II.B.2.b.i with respect to drafting non-key labor categories, agency evaluators can struggle with evaluating very specific mandatory requirements; for example, evaluators can waste a substantial amount of time determining whether a Bachelor of Arts in Electrical Engineering meets a mandatory key personnel requirement for a Bachelor of Science degree. Furthermore, the evaluators must document their determination one way or the other, and either choice could then become the basis for a protest about the acceptability of the proposal. In other words, this one close call could put the entire award decision in jeopardy based on a potentially unnecessary requirement that would not significantly impact the individual’s actual performance.

Because key personnel minimum requirements create such substantial risks for both offerors and the agency, government source selection teams should actively avoid imposing any mandatory key personnel requirements; instead, they should seek to list only “desired attributes.” For instance, instead of requiring that a Program Manager have “a minimum of fifteen years of relevant experience,” the solicitation could state that “the government desires that the Program Manager have twenty years of Department of the Navy engineering experience.” Presenting the key personnel positions as lists of desired attributes, as opposed to lists of minimum requirements, still clearly communicates to offerors that they should bid personnel with those desired attributes but does not make providing those attributes material.

Keeping key personnel requirements from becoming material provides agencies and the offerors several important flexibilities. First, it permits agencies to more clearly present the types of experience that they want. Since “desired attributes” are not minimum requirements, there is less risk that a solicitation asking for a higher level of qualifications or more specific experience would be unduly restrictive to competition. Second, it allows offerors to propose key personnel who almost meet the desired attributes without the risk that the government must automatically reject the proposal. This degree of flexibility can open the door to very talented individuals who have substantial expertise and experience in the solicitation tasking, but who would just miss a particular attribute if it were expressed as a minimum requirement. This option can expand the talent pool available to the agency in high-need, evolving fields that do not have specifically defined and robust staffing pipelines or clear, well-understood career paths, such as cybersecurity. Third, considering the offeror’s proposed key personnel against “desired attributes” retains much more evaluation discretion for the agency to determine whether a weakness, significant weakness, or a deficiency is most appropriate for not meeting a given key personnel attribute. For instance, not having a desired technical degree might be a non-issue for a key business financial management position, even if having it would have been a strength. For another position, however, missing that same “desired” technical degree may be a weakness (e.g., for a key technical writer position) or a significant weakness (e.g., for a Program Manager position on an engineering services contract) or even a deficiency (e.g., for a Lead Radar Engineer position), depending on the risk that the missing attribute poses to the government.

Finally, evaluating key personnel against “desired attributes” decreases the overall protest risk. By keeping the key personnel attributes from being material, the government reduces the allure of key personnel challenges for protesters, since key personnel evaluation errors are not automatically quick-kill issues for the government’s award decision. Instead, the protester must show both an error and that the error prejudiced it, which can be challenging depending on the relative standing of the offerors. Furthermore, evaluating key personnel against “desired attributes” takes the risk out of most of the hard line-drawing problems, since GAO will review the evaluators’ exercise of their technical judgment against its “mere disagreement” standard, which is very favorable to the government.

Overall, where agency source selection teams opt to include key personnel, relying on “desired attributes,” as opposed to mandatory or minimum requirements, will likely increase competition, expand the pool of potential key personnel, greatly increase agency flexibility in evaluating proposals, and greatly reduce their risk of protest loss.

Recommendation: Agencies should actively avoid specifying any minimum requirement for key personnel. Instead, government source selection teams should aim to specify their key personnel using lists of “desired attributes” to increase competition, to broaden the pool of potential key personnel, to retain more evaluation flexibility for the agency, and to decrease overall protest risk. Of course, a small subset of key personnel attributes must remain mandatory, such as security clearance or certain IT-system access certifications, but this list should be kept to an absolute minimum and clearly distinct from the “desired attributes.” The following table presents an example specification for a key Program Manager:

Figure 3. Specification for a Key Program Manager


b. Unavailable Key Personnel

The second major risk presented by the material nature of key personnel is that the government must find a proposal unacceptable if any of its otherwise acceptable key personnel become unavailable during the course of evaluations. In fact, just a single key personnel departure, through no fault of the agency or the awardee, can result in a protest sustain. There are two main but somewhat distinct key personnel protest grounds: (1) bait and switch; and (2) material misrepresentation.

A “bait and switch” occurs when an offeror knowingly represents in its proposal that it will use specific personnel, with no intention of actually using those personnel to perform the contract. The agency credits the offeror’s proposed personnel in the form of high technical or pass ratings, which contributes to the offeror winning the contract; after award, the contractor notifies the agency of what the offeror knew all along: the proposed key personnel are unavailable to perform the work, and the now-contractor provides someone else. If a protester proves a “bait and switch” allegation, GAO will sustain the protest.

Material misrepresentation differs from “bait and switch” in that it does not require an intentional falsehood at the time of proposal submission. In a common case of material misrepresentation, the offeror proposes key personnel whom it reasonably expects to be available to perform the contract, but, through no fault of the offeror, the key personnel unexpectedly become unavailable due to death, resignation, or illness. This becomes a material misrepresentation because the agency relies upon a material proposal representation that has become false after proposal submission. For example, in Greenleaf Construction Co., GAO found that offerors have an obligation to notify the agency during the source selection if changes arise in the availability of their proposed key personnel, even after proposal submission. Moreover, GAO found that an agency has only two options in response to receiving this required notice of an offeror’s key personnel unavailability: either reject the proposal as technically unacceptable for failing to meet a material requirement or reopen discussions to permit the offeror to correct the deficiency.

This rule is harsh for both agencies and offerors. Agency proposal evaluation can take months and, in some cases, years. With each passing day, the risk of a proposed key personnel becoming unavailable for any number of reasons—including death, illness, retirement, and resignation—increases. This possibility presents a huge risk for offerors that a single employee departure will cost them an award and a meaningful amount of bid and proposal costs; it also provides a perverse incentive for competitors to try to hire away individuals proposed as key personnel in a competitor’s bid. Furthermore, agencies can waste huge amounts of evaluation effort preparing a record to award to one company only to have to redo large portions of that work if that prospective awardee suddenly is unawardable because it lost one of its proposed key personnel. As such, government source selection teams should seek to structure their solicitations in a way to minimize this key personnel unavailability risk.

Government source selection teams can limit the key personnel unavailability risk through two main ways: 1) by limiting the number of key personnel positions; and 2) by preemptively including solicitation language that provides evaluation procedures that allow for award to proposals with key personnel that become unavailable before award.

i. Limiting the number of key personnel reduces risk

Agencies can significantly lower the risk of a successful key personnel unavailability protest by reducing the number of key personnel positions required in the solicitation and, instead, having non-key personnel perform those hours. For instance, rather than having a Program Manager, a Deputy Program Manager, and a designated key person for each of four Statement of Work (SOW) tasks, an agency could simply have a single Program Manager, which would reduce the number of key personnel who could become unavailable from six to one. With so few required key personnel for each proposal, there is much less chance that an offeror’s proposed key personnel will become unavailable during proposal evaluation. Moreover, the agency could still solicit the exact same tasking or qualifications for the Deputy Program Manager and each of the four SOW tasks, but simply as non-key labor category qualifications without resumes or materiality. Nevertheless, while reducing the number of key personnel reduces litigation risk, it can increase performance risk because the agency cannot contractually lock in as many strong individuals at award. Despite this shift from litigation risk to performance risk, limiting the number of key personnel is an important technique to protect a source selection through award and protest.
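
The arithmetic behind this recommendation can be sketched under a simplifying assumption: if each proposed key person independently carries some chance of departing during a lengthy evaluation (a purely hypothetical five percent here, not a figure from any study), the probability that at least one of n proposed key personnel becomes unavailable grows quickly with n.

```python
# Hypothetical sketch of why fewer key personnel positions mean less
# unavailability risk. The 5% per-person departure probability is an
# assumed figure for illustration only.

P_DEPART = 0.05  # assumed chance any one key person departs during evaluation


def prob_at_least_one_departs(n_key_personnel: int, p: float = P_DEPART) -> float:
    """Probability that at least one of n independent key personnel departs."""
    return 1.0 - (1.0 - p) ** n_key_personnel


for n in (1, 6):
    print(f"{n} key personnel: {prob_at_least_one_departs(n):.1%} chance of a departure")
```

Under this assumed figure, a single key person carries a 5.0% chance of a pre-award departure, while six key personnel carry roughly a 26.5% chance that at least one departs, which illustrates why trimming the required key personnel from six to one so markedly reduces unavailability risk.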

Recommendation: Agencies should attempt to minimize the number of key personnel positions identified in the solicitation to the extent practicable.

ii. Provide evaluation criteria for unavailable personnel in solicitations

Another tactic an agency should employ to limit its key personnel unavailability risk is to preemptively include solicitation language that provides evaluation procedures that allow for award to proposals with key personnel who become unavailable before award. Although there could be several potential ways to structure such procedures in the solicitation, one approach involves focusing the agency’s evaluation and source selection decision on the attributes presented on a proposed key personnel’s resume, as opposed to the actual living, breathing proposed individual. For instance, the solicitation could include the following language:

The qualifications listed in each individual proposed key personnel resume, not the specific individual, are the materially relevant aspects of the proposed key personnel partially forming the basis of award under the clause titled [Substitution of Key Personnel]. Therefore, even if a proposed key individual becomes unavailable to the Offeror between proposal submission and award, the government will evaluate and make its award decision based on the qualifications listed on the proposed resume(s). When the government awards a task order under those circumstances, the government will require the awardee to use the qualifications listed on the relevant proposed key personnel resume as the basis for replacing the individual under the Substitution of Key Personnel clause during task order performance. The Offeror shall make no substitution of key personnel without prior notification to and concurrence of the Contracting Officer (CO).

With this language, an agency could continue to evaluate the original proposed resumes without having to find an automatic deficiency, even if it received notification that a particular proposed individual had become unavailable for a key personnel position. The only material aspects of the proposed key personnel are the attributes listed on their resumes, and by extension what those resume attributes demonstrate about the offeror’s understanding of the personnel best suited to perform the work. Importantly, this language avoids inexorably tying the entire proposal validity to the question of whether that specific proposed individual remains available to perform under the contract throughout the entire evaluation phase. Overall, avoiding this unavailable key personnel issue greatly reduces the uncertainty and risk for both the government and the offerors during the proposal evaluation phase.

Recommendation: Agencies should preemptively include solicitation language that provides evaluation procedures that allow for award to proposals with key personnel who become unavailable before award. Agencies should consider using the language provided above as a starting point for such procedures.

c. Unnamed Key Personnel

The third major risk presented by the material nature of key personnel is that the agency must find any proposal unacceptable if it does not include a specific individual for a required key personnel position. Often, a solicitation will require that offerors provide a resume for each key person and, if the proposed individual is not a current employee, a signed letter of commitment confirming the individual’s intention to serve in the position if the company wins the contract award. Where an offeror fails to provide a named individual for each key personnel position, it is, in essence, asserting that it has no specific approach to meeting a material term of the solicitation. In nearly all situations, the agency should consider such a proposal deficient based on this lack of a proposed approach to meet a material requirement. Rejecting proposals for such obvious flaws wastes time and effort for both the deficient offeror and the agency. Therefore, agency source selection teams should expressly warn offerors that failing to propose a specific individual for a key personnel position will lead the agency to reject the proposal as materially nonresponsive.

Recommendation: Government source selection teams should include solicitation language that expressly warns offerors that failing to propose a specific individual for a key personnel position will lead the government to reject the proposal as materially nonresponsive.

C. Critical Evaluation Scheme Decisions

In developing an efficient and effective evaluation scheme, government source selection teams must make strategic decisions about two distinct types of evaluation factors: non-cost/price factors and cost/price factors. Non-cost/price factors can assess a wide variety of technical, management, personnel, and past performance issues, which are largely within the government source selection team's reasonable discretion to evaluate, prioritize, or ignore. As such, government source selection teams must use their reasonable judgment to strategically select non-cost/price factors that target meaningful potential discriminators between proposals. The cost/price factor, in comparison, is narrower, since major aspects of it are driven by the team's choice of contract type. Nevertheless, even within this more constrained factor, government source selection teams should consider several strategic decisions and best practices when drafting a cost/price evaluation factor.

1. Non-Cost/Price Evaluation Strategies

The strategic decisions the government source selection team makes in defining the proposal's non-cost evaluation factors will greatly influence the quality of the proposals it selects, the speed of selection, and the defensibility of any eventual award decision. Agencies should carefully select their evaluation factors to consider no more than the amount of information necessary to make a wise business judgment and a defensible award. As mentioned above, the government has broad discretion in selecting the evaluation factors, but once the factors are solicited, agencies are "required to evaluate proposals based solely on the factors identified in the solicitation." Therefore, agencies should carefully consider how to tailor their evaluation to select the best value offeror as quickly as possible with the least work for both offerors and government evaluators.

a. Target Discriminators Prior to Releasing the Solicitation; Keep It Simple (for All Parties)

Many uninitiated government evaluators layer on evaluation factor after evaluation factor to get ever more granular insights into every aspect of offerors' proposals. This forensic review strategy, however, generally does not yield a better best-value selection. Instead, it merely requires more proposal information from offerors, leads to longer evaluation times, and results in more complex award documentation, while still ending up with the same awardee that the team would have selected with a much narrower set of tailored evaluation criteria. In some cases, all of the offerors are similarly situated in certain factors, so their respective risks or benefits in those factors offset one another in the tradeoff decision and, thus, are not discriminators. In other cases, the offerors are not similarly situated, but the differences in their proposed approaches do not create any meaningful difference in the magnitude of the risks or benefits each presents; again, these are not discriminators between proposals. Where the government evaluates these offsetting or immaterially different aspects of the proposals, it wastes the offerors' proposal efforts and its own evaluation efforts.

Of course, figuring out where a solicitation includes a wasteful evaluation factor (or element) is easier in retrospect, but the fact remains that government source selection teams can gain speed and shed work by critically considering where they expect to find high-value discriminators between proposals and narrowly tailoring their evaluation schemes to focus on those particular areas. These critical discriminators will vary widely between source selections based on the differences between program offices, industries, types of work, and the relative importance of the required tasks. For instance, one program office may find a history of strong contractor employee retention highly valuable, another may prioritize key personnel resumes, and a third may simply want someone to cut the grass for a reasonable price, without caring whether they use mowers or goats.

Despite this broad variability, two questions provide an acid test for potential discriminators: 1) Would I pay a premium for a benefit or to avoid risk in this area? and 2) Are there likely material differences between offerors in my industrial base in this area? If the answer to both is yes, that area is a strong candidate for a discriminator. If, instead, the evaluation area is simply a box to check (such as required certifications), or there is truly only one way to complete a task given the state of the art, or it is immaterial (e.g., as long as the grass gets mowed), source selection teams should consider omitting a review of that area entirely or relegating it to an Acceptable/Unacceptable criterion.

Recommendation: Government source selection teams should actively work to avoid or remove any evaluation factors or elements that are unlikely to yield discriminators between proposals. For each evaluation element, the government source selection teams should ask: a) Would I pay a premium for a benefit or to avoid risk in this area; and b) Are there likely material differences between offerors in my industrial base in this area? If a factor or element fails either test, government source selection teams should consider removing it.

b. Adjectival Evaluation vs. Acceptable/Unacceptable Evaluation

At the same time that government source selection teams are identifying discriminators to evaluate, they must also decide how to rate them. Two of the primary methods are adjectival evaluation and Acceptable/Unacceptable evaluation. Each of these methods targets different source selection goals, but both are valuable tools in developing an efficient overall non-cost evaluation scheme.

Adjectival evaluations, on the one hand, typically rely on strengths, weaknesses, significant weaknesses, and deficiencies (collectively, findings) to describe a range of potential benefits or risks that an offeror's proposal presents, compared to the solicitation baseline. Often, agencies condense individual findings into one of several adjectival ratings applied to the evaluation factor, such as Outstanding, Very Good, Satisfactory, Marginal, and Unsatisfactory, to help identify major disparities between the offerors. These adjectival ratings, however, are largely window dressing, since the real trade-offs must be made at the findings level with the application of any relative weighting between different factors. While the adjectival rating system helps discriminate between offerors, it generally takes more time to evaluate and document.

Acceptable/Unacceptable evaluations, on the other hand, are much easier for the evaluation team to execute. Essentially, the only question is whether the offeror's proposal shows a material failure to meet a government requirement or presents an unacceptable level of risk. If not, the proposal is Acceptable. As such, instead of drafting strengths, weaknesses, and significant weaknesses, the evaluators ignore these complex questions of graduated risk and simply confirm that they can live with the approach and that it complies with the solicitation. Therefore, these evaluation sections are generally much shorter, often consisting of no more than a confirmation that the evaluators checked each of the Acceptable/Unacceptable elements identified in the solicitation and assigned a rating of Acceptable or Unacceptable. Deficiencies are the only type of finding the evaluators can document under this evaluation methodology, and even one deficiency leads to an Unacceptable rating. As such, this evaluation method is much quicker, since it does not require detailed evaluation documentation; it does not, however, allow for discrimination or trade-offs between awardable proposals on any Acceptable/Unacceptable evaluation elements or factors.

Government source selection teams should also think critically about how they structure their requirements under an Acceptable/Unacceptable factor. For instance, putting desired approaches in an Acceptable/Unacceptable evaluation section would be a waste because the agency could not identify strengths for them and, thus, could not consider the benefits of an offeror's stronger desired approach in its tradeoff decision. As such, the offerors have no competitive incentive to provide that desired approach. Therefore, agencies should primarily focus on drafting their Acceptable/Unacceptable factors to determine whether an offeror's proposal meets the agency's minimum needs and to ensure that it does not present an unacceptable level of risk in doing so.

Both of these evaluation methodologies provide critical tools for building an overall evaluation scheme for a procurement. Adjectival (or other variable value-based) methodologies are time-consuming to evaluate but are critical for identifying proposal differences in key areas of discrimination and for providing a strong competitive incentive for offerors to propose better, lower-risk approaches. They should be reserved for areas of critical discrimination. Acceptable/Unacceptable methodologies, on the other hand, are best suited to areas where the only meaningful discrimination is between offerors who can do the work and those who cannot. In these areas of negligible discrimination, agencies can decrease their evaluation workload and increase the speed of their evaluation, while still retaining a minimum check on acceptability, by using an Acceptable/Unacceptable methodology.

Ultimately, many evaluation schemes use a combination of factors—with some adjectival factors and some Acceptable/Unacceptable factors—to carefully tailor the bidding incentives that they place on offerors and to intentionally limit the evaluation workload the government wants to undertake.

Recommendation: Government source selection teams should generally favor Acceptable/Unacceptable factors and reserve adjectivally rated factors only for those areas where they expect meaningful discrimination between proposals.

c. Overall Evaluation Strategy: Keep Separately Rated Adjectival Evaluation Areas Low in Number, Narrowly Tailored, and Distinct

After identifying discriminators and considering potential evaluation methodologies, government source selection teams must eventually collect their various considerations into an overall evaluation scheme. Since source selection teams should avoid unnecessary work, government source selection teams should pay careful attention to those evaluation factors, subfactors, or elements that they identify as adjectivally rated. Specifically, government source selection teams should aim to keep the separately rated adjectival evaluation areas 1) low in number; 2) narrowly tailored; and 3) distinct.

i. Keep the number of separately evaluated adjectival factors low

Having too many adjectivally rated factors or elements greatly increases the complexity of proposal drafting for the offerors and proposal evaluation for the government. Moreover, having too many areas to compete in can present a confusing message to the offerors about what areas the government wants them to focus on. As such, government source selection teams should minimize the number of adjectivally rated factors that they include in their overall evaluation scheme.

First, for each adjectivally rated evaluation factor that the government source selection team adds, it obligates itself to conduct a thorough and judicious review of that section to identify every strength, weakness, significant weakness, and deficiency it contains and to carefully gauge the level of risk each of these findings presents. Each finding must be drafted to clearly document the applicable aspects of the proposal and their relation to the solicitation evaluation criteria. Additionally, each finding can become the subject of a protest argument about whether the finding was reasonable, whether the agency applied the finding fairly and equally between the offerors, and whether the finding is adequately explained. If the solicitation requires adjectival factor ratings—such as Outstanding, Good, Acceptable, Marginal, Unacceptable—evaluators must also reach a consensus on which rating applies to the findings they have made and document that decision as well. This procedure generally requires significant work, resources, and time to develop a defensible record. As such, source selection teams should choose to add an adjectivally rated evaluation factor, subfactor, or element only after careful consideration.

Additionally, including many independently rated adjectival factors greatly complicates the government’s trade-off documentation if the solicitation applies different weighting to the different adjectivally rated factors. This complication stems from the fact that the source selection authority cannot simply compare two findings against each other on equal footing solely on the basis of the applicable benefits or risks that each presents; instead, the source selection authority must also consider each of those benefits or risks with the applicable difference in factor weighting. For more than a handful of findings, this difference in weighting, particularly across a large number of separately rated factors, becomes quite cumbersome and confusing.

To avoid both risks, government source selection teams should try to collect as many of their discriminators as possible into a single adjectivally rated factor that has no further internal weighting. In this way, the evaluators will not waste time assigning a large number of adjectival factor ratings, and the source selection authority will be able to directly compare the benefits and risks of the respective findings on the basis of their impact alone, instead of differences in factor weighting.

Even if it is not feasible to collect all of the discriminators into a single factor, having fewer factors is beneficial. With fewer adjectivally rated evaluation factors, such as two, the relative order of importance is still relatively simple for the offerors to understand and cleaner for the agency to apply. Moreover, if evaluators still must examine certain aspects for acceptability but do not expect those areas to be discriminators, the source selection team can also include Acceptable/Unacceptable factors alongside its small number of separately rated adjectival factors.

Consider, for example, a solicitation for LOE services using the following evaluation factors in "descending order of importance": technical approach, past performance, management plan, staffing, and total evaluated cost. Also, assume that the staffing factor has two subfactors that are listed in "descending order of importance": key personnel and staffing of non-key personnel. Such a solicitation is a recipe for confused offerors, complex evaluations, and confusing tradeoffs. For example, where should the evaluators document a concern about managing new non-key personnel? Presumably, both the management plan factor and the staffing of non-key personnel subfactor could be implicated. Because they are differently weighted, the government evaluators must carefully consider where to put this weakness and consistently apply that determination across all offerors. Depending on the evaluation team, this determination could be contentious. Moreover, it may have important impacts on the trade-offs, since documenting the risk under the management plan factor will be more detrimental to the offerors than documenting it under the staffing of non-key personnel subfactor. In fact, the proper classification of the weakness could become a whole separate protest issue beyond whether the government reasonably identified the weakness. As this single, simple example shows, more factors, particularly those with different weightings, greatly complicate the technical evaluation.

Instead, the source selection team should have limited the number of evaluation factors that it would consider. For example, it could have only had three evaluation factors, also listed in “descending order of importance”: technical, past performance, and total evaluated cost. Under this simpler evaluation scheme, all of the issues dealing with staffing would have a clear home and would fit unambiguously into the solicitation’s weighting scheme. This avoids the evaluators’ discussion about where to locate the weakness, greatly reduces the likelihood that the source selection authority would have to apply a separate rating to the finding in the trade-off documentation, and defangs any protest argument about misclassifying the weakness.

This more streamlined evaluation scheme also omits the management plan factor entirely, which is a labor saver for all. In many cases, this omission may be appropriate where the management plans are not likely to be a discriminator between the proposals. Nevertheless, completely omitting a review of the management plans may be too uncomfortable for some evaluators or some requirements; if that is the case, the source selection team should consider adding management plan as an Acceptable/Unacceptable factor. This Acceptable/Unacceptable factor would not require drafting detailed adjectival evaluation findings and would not factor into the relative order of importance because each offeror is either acceptable or not. In other words, it would allow the government to determine whether the offerors meet the requirements without much additional work or complexity.

Recommendation: Government source selection teams should limit the number of separately rated evaluation factors. For instance, three evaluation factors (Technical, Past Performance, and Price) provide sufficient discrimination among offerors for the vast majority of LOE services source selections. Based on the needs of the agency, each agency can consider the appropriate relative importance to ascribe to each factor.

ii. Limit the use of subfactors and, if used, make subfactors equally weighted

Agencies should apply the same approach to subfactors as they do for evaluation factors, which is to limit the number of separately rated subfactors as well. While there is no FAR definition, the DoD source selection guide states that “[e]valuation factors and subfactors represent those specific characteristics that are tied to significant solicitation requirements and objectives having an impact on the source selection decision and which are expected to be discriminators or are required by statute/regulation.” Including separately weighted subfactors under a factor can result in the same risks as having too many evaluation factors: extra work, extra time, and increased protest sustain risk. However, to the extent that subfactors are necessary, an agency should strongly prefer to make those subfactors equally weighted and, in an ideal case, not separately rated. In fact, government source selection teams may find that using unrated subfactors (provided they are equally weighted within the factor) can help organize the offerors’ proposals and lend structure to the evaluation documents without creating additional work, complexity, or risk.

However, subfactors that each receive a rating or are weighted differently present all of the same risks as using too many evaluation factors. For instance, if the agency elected to include key personnel and staffing of non-key personnel as subfactors under the aforementioned technical approach factor, with each receiving an adjectival rating and the key personnel subfactor weighted as more important than the staffing of non-key personnel subfactor, this choice would significantly complicate the evaluation and add risk to the award decision. The extra complexity and risk arise because the agency would need to define and assign an appropriate rating to each subfactor and then consider the findings and ratings within each subfactor, along with the assigned weighting, in arriving at a factor-wide rating. In contrast, if key personnel and staffing of non-key personnel were equally weighted and unrated subfactors, the agency could simply consider the merits of each finding, without regard to weighting, in arriving at a factor-wide rating.

Recommendation: Government source selection teams should limit the use of subfactors. However, if subfactors are necessary, then they should be equally weighted within the factor and not separately rated.

iii. Keep all adjectival factors narrowly tailored

Agencies should also actively limit the breadth of their adjectivally rated factors. This limits the potential issues that the evaluators have to review and can allow the government to greatly reduce the amount of proposal information that it requests from offerors.

As a threshold matter, there is no requirement that an agency evaluate every single SOW requirement in an offeror's proposed technical approach. Even if a SOW task area is not evaluated as part of the solicitation, that SOW task still becomes part of the contract that the contractor is required to perform. Therefore, agencies should narrowly tailor their adjectivally rated factors to areas that are likely to identify discriminators between offerors.

For example, on an administrative support services solicitation with 150 SOW tasks, the government should not simply define its adjectivally rated technical factor as an evaluation of the offeror's ability to successfully complete all of the required SOW tasks. Invariably, this approach will lead to offerors describing each of the 150 tasks in as much detail as the page limits allow, which will require the evaluators, in turn, to read all of that detailed discussion and document any instances where the offerors exceed the requirement or propose a risky approach. This can lead to dozens of findings about largely unimportant SOW tasks and about SOW tasks where there is little room for offerors to differentiate themselves. As such, much of this effort is a waste and serves only to increase the complexity of any protest litigation. Alternatively, the government source selection team could have defined its adjectivally rated factor as an evaluation of "the offeror's ability to perform SOW tasks 1.2, 8.2–8.7 and 9.11." If the agency picked tasks that were hard to perform or that had several meaningfully distinct ways of performing them, this more narrowly tailored approach would greatly limit the size of the proposal for the technical factor, decrease the overall evaluation workload, and focus the trade-off on areas that would provide meaningful discrimination between the proposals.

As this example shows, government source selection teams should narrowly tailor their evaluation factors. Nevertheless, in making these determinations, agencies must carefully balance the evaluation/litigation advantages of narrowly tailoring an evaluation factor against the technical/performance risk of awarding without confirming all aspects of an offeror’s proposed approach. In the context of LOE service contracts, which are generally designed to be fairly flexible, the technical/performance risk is generally less than in contracts where the government has less control over the awardee’s performance approach after award.

Recommendation: Government source selection teams should limit the scope of their adjectivally rated evaluation factor to areas that are likely to provide meaningful discrimination among offerors. Evaluating every single aspect of a solicitation is not always necessary or advisable, since contractors are bound to perform by the terms of the awarded contract.

iv. Keep all separately evaluated factors distinct

Agencies should also actively work to keep their evaluation factors distinct from one another in terms of which issues each factor accounts for. To do this, agencies should lump potentially connected issues together into a single factor. There are two main litigation reasons for this objective: 1) overlapping issues can lead to an evaluation record that addresses the same issues in multiple places, which increases the risk that GAO will view a particular set of findings as double-counting; and 2) when issues can reasonably appear in two differently weighted factors, protesters can argue that the government applied the incorrect factor weighting to their finding. Beyond these risks, having overlapping factors generally just adds pages to the proposal and causes the evaluators to evaluate the same thing twice in separate factors, while having to carefully scrutinize the two sections for inconsistencies between them.

In terms of double-counting, GAO has stated that, "[w]here [a solicitation] contains separate and independent technical evaluation factors encompassing separate subject areas, with each factor assigned separate weights under the solicitation's stated evaluation scheme, an agency may not double count, triple count, or otherwise greatly exaggerate the importance of any one listed fact." As such, addressing a high-risk, very junior labor mix in a technical factor, a management factor, a staffing factor, and a transition factor would split the impact of that single labor mix proposal decision across four or more findings. While a team could likely draft these four weaknesses if it applied sufficient attention to clearly describing which aspects of the junior labor mix risk apply to which factors, this option still leads to a complex and potentially confusing record. Moreover, it is very easy for a mistake in separating these issues to look like double-counting. Instead, had the government source selection team structured its evaluation factors such that it evaluated staffing, management, and transition as part of a single technical factor, the evaluators in this example could easily write a single significant weakness for the high-risk labor mix that addressed the technical, management, staffing, and transition issues in a single finding. This would remove the risk of double-counting.

Furthermore, as described in Part II.C.1, overlap between differently weighted factors is even more concerning because of the following: it requires more evaluator attention to decide where to document findings and to do so consistently; it complicates the trade-off documentation; and it opens new protest arguments about whether a finding has been properly weighted under the solicitation’s description of the relative order of importance. Keeping evaluation factors distinct from one another further guards against these issues.

Moreover, government source selection teams should also work to keep their factors distinct where they include Acceptable/Unacceptable-rated factors alongside adjectivally rated factors. This allows the source selection team to narrowly tailor the adjectival factor to discriminators without a protester being able to argue that the government should have given it credit for an approach proposed under the topics it intended to evaluate as Acceptable/Unacceptable. For example, a source selection team could structure its solicitation with two evaluation factors: General Technical Approach (Acceptable/Unacceptable) and Technical Discriminators (Adjectival). Topically, these two factors are likely to have substantial areas of overlap. Nevertheless, if the solicitation provides a clear and discrete list of SOW areas or types of tasks that it intends to evaluate adjectivally, it could gain significant focus and speed in its evaluations. Therefore, the solicitation should include a statement similar to “The Government will evaluate the merits of the offeror’s proposed technical approach in the following discrete areas on an adjectival basis, while evaluating the majority of the offeror’s proposed technical approach under the General Technical Approach Factor, which is rated on an Acceptable/Unacceptable basis,” followed by a discrete list of those areas that it wants to rate adjectivally. Keeping these two factors expressly and clearly discrete will provide clearer incentives to the offerors, simplify the evaluation, and limit the risk of double-counting or misclassification arguments.

Recommendation: Government source selection teams should set up each separately evaluated factor to consider discrete information to prevent issues from bleeding across multiple factors.

d. Other Technical Factor Issues
i. Limits on lowest-priced technically acceptable

Although DFARS 215.101-2-70 generally prohibits the Department of Defense (DoD) from using the lowest priced technically acceptable (LPTA) source selection criteria in most cases, having a very limited number of adjectivally rated factors is still advantageous for the government in many of the situations in which use of LPTA is prohibited. Importantly, the GAO has permitted an evaluation scheme that closely resembled LPTA in Inserso Corp. In this case, the Department of the Air Force solicitation provided that the agency would rank the five lowest price quotations and evaluate them as technically acceptable or unacceptable. For the technically acceptable quotes, the agency would rate them under past performance, which received a performance confidence assessment rating (substantial confidence, satisfactory confidence, no confidence, or unknown confidence), and then would trade-off between past performance and price, which were equally weighted. Despite the protester’s assertions that the agency used LPTA criteria by not trading off between price and technical factors, the GAO found that using a tradeoff between price and past performance as the basis of the source selection did not violate procurement law. As such, in situations where LPTA is prohibited but agencies still want the speed and simplicity of LPTA, they should consider this “LPTA-lite” approach.

Moreover, while the Air Force used past performance as its adjectivally rated factor, other government source selection teams could choose other factors to use. The critical issue in selecting a good “LPTA-lite” adjectival factor is ensuring that it is simple, easy, and straightforward to evaluate. Two other potential candidates for an LPTA-lite adjectival factor in LOE service contracting are “the degree to which proposed key personnel meet or exceed the desired attributes” or narrowly tailored sample problems that address important areas of discrimination.

Of course, using an LPTA-lite evaluation structure with a single narrowly tailored adjectival factor does result in an evaluation scheme that is heavily weighted towards the cost/price factor. Specifically, with only one adjectivally rated factor, the number of discriminating findings is likely to be small, which can make justifying paying a premium, and particularly a large one, more difficult. If this is a concern, the government source selection team can mitigate this by calibrating the relative order of importance of the adjectivally rated factor against the price factor. Even then, however, it can be hard to justify paying a $15 million premium where the only difference between the two proposals is a single key personnel strength.

Nevertheless, as Inserso shows, agencies can still pursue an evaluation approach that is similar enough to LPTA by limiting the LOE service solicitation evaluation to a single adjectivally rated factor. Overall, if appropriately tailored to the agency’s requirement, this approach can be very advantageous to award quickly, survive protest, and execute efficiently.

ii. Using sample problems

While many LOE services solicitations focus on evaluating an offeror's demonstrated approach, capabilities, understanding, and knowledge to accomplish all of the SOW tasks, sample problem factors are an alternative (or complementary) approach that can give evaluators clearer insight into how offerors will actually solve technical problems in performance. This can reduce technical risk and, where it replaces a broader technical factor, can decrease workload and increase the speed of award for all parties. That said, drafting strong sample problems is very fact-dependent and can be challenging.

Generally, in a sample problem evaluation factor, an agency describes a hypothetical tasking in the solicitation and directs offerors to provide an example deliverable or approach in their proposals to see if the offeror can muster a proposed solution to that hypothetical tasking in a reasonable time. Nevertheless, drafting these hypothetical taskings well can be challenging for government source selection teams, since they must simultaneously test a sufficient portion of the SOW to make a reasonable assessment of the offerors’ ability to perform the contract, while remaining straightforward enough for offerors to be able to complete the task within proposal preparation timelines.

Moreover, agencies should try to avoid using sample problems that involve tasking that they have previously paid a potential competitor (usually the incumbent) to perform under another contract, since this choice gives that offeror a substantial and potentially unfair head start. Furthermore, sample problems should present a genuine challenge and sufficient trade-space to allow for a variety of proposed approaches, permitting offerors to differentiate themselves. An easy or single-approach sample problem provides no discrimination between proposals and is, thus, a waste of time and effort for all parties.

Furthermore, government source selection teams should consider what supporting or explanatory information they must provide to the offerors along with the sample problem to give all potential competitors a level playing field. For instance, while agencies may provide a set of hypothetical facts, they may inadvertently omit other important facts; in such situations, offerors may make different assumptions about these facts, which could result in them proposing unacceptable or irrelevant approaches. For example, if the government’s schedule is unstated, one offeror may assume one year and another six weeks; these assumptions will lead to two very different responses, and both may be wrong if the government really only has four weeks. As such, leaving out critical information wastes time and effort for offerors and the government alike. Additionally, agencies should be clear about the depth of analysis or detail that they expect from an offeror’s sample problem response.

Finally, government source selection teams should also consider that, in an LOE services environment, a sample problem response is not binding on the offeror. As such, offerors have strong incentives to describe the most technically beneficial approach without any cost/price constraints. In some cases, government source selection teams are tempted to also ask the offeror to cost-out its approach to the sample problem as a check on this incentive to respond with the (possibly unaffordable) technical-best. While this costing approach might provide some check on the incentive to propose a technical solution that the government cannot afford in performance, government source selection teams should actively avoid this strategy because it imposes a mini-cost-realism analysis for each sample problem alongside the cost-realism analysis for the actual contract pricing. As discussed, cost-realism analyses are complex, time-consuming, and high-risk; adding additional cost realism analyses that are not necessarily connected to actual performance or to each other greatly increases all of these issues. Instead, the government may want to consider providing the offeror a defined subset of the contract’s total hours and labor mix to expend on each sample problem. While this information increases the complexity of the evaluation somewhat, it is considerably less complex than adding one or more (potentially conflicting) cost realism analyses. Overall, the use of sample problems is very fact-specific and, while there are some potential evaluation advantages, they often require a fair amount of effort prior to releasing the solicitation to set them up effectively.

iii. Differences in evaluating past performance

Although much of the above discussion focuses on non-past-performance evaluation factors, FAR 15.304(c)(3) also requires agencies to evaluate an offeror’s past performance records in most cases. Although past performance evaluation factors can be rated adjectivally or on an Acceptable/Unacceptable basis, the mechanics of both approaches differ from those of the non-past performance factors described above.

Past performance is a measure of how well an offeror has performed on active and completed prior contract efforts. Solicitations typically ask offerors to discuss several recent prior contract efforts and to provide information about these efforts to demonstrate that they are relevant evidence that the offeror can perform the work under the solicitation. In addition to the offeror’s description of these prior efforts, the government will often collect additional customer inputs about the offeror’s prior work through CPARS, through customer questionnaires, or by contacting the customer directly.

Importantly, unlike other non-cost factors, the government may not assess strengths, weaknesses, significant weaknesses, or deficiencies in past performance evaluations. Instead, solicitations generally break an offeror’s past performance evaluation into two steps: one addressing each of the individual prior contract efforts and the other addressing the cumulative past performance record for the offeror.

In the first step of a past performance evaluation, the evaluators determine whether each prior contract effort in the offeror’s proposal is 1) recent, 2) relevant (i.e., whether the submitted prior contract efforts are similar in terms of size, scope, and complexity to the effort required in the solicitation), and 3) of a certain quality (an assessment of how satisfied the customer was). In general, evaluators document these assessments of each of the prior contract efforts by assessing when the work was performed, why it was (or was not) of similar size, scope, and complexity, and what ratings the customer gave the performance (if the evaluators received customer inputs).

In the second step of the past performance evaluation, the evaluators consider how much confidence the cumulative past performance record provides the government that the offeror will successfully perform the solicitation’s work. As with other non-cost/price factors, agencies can rate this confidence assessment either adjectivally or as Acceptable/Unacceptable, but again the evaluation mechanics are different for past performance evaluations. For example, the DoD Source Selection Guide suggests the following adjectival ratings for past performance confidence: Substantial Confidence, Satisfactory Confidence, Neutral Confidence, Limited Confidence, and No Confidence. The major difference in the past performance adjectival scheme is the inclusion of a Neutral Confidence rating for offerors without any prior contract efforts, which allows room for new entrants into the government marketplace.

As with any other evaluation factor, government source selection teams should only use an adjectivally rated factor when it is likely to provide meaningful discrimination between the offerors. Otherwise, an agency should consider using an Acceptable/Unacceptable past performance evaluation. In an Acceptable/Unacceptable past performance evaluation, the agency performs the first step as it would for any other past performance evaluation to evaluate the recency, relevancy, and quality of each of the offeror’s prior contract efforts. In the second step, however, the government is only selecting between a Satisfactory/Neutral Confidence rating and a No Confidence rating. As above, the major benefits of an Acceptable/Unacceptable past performance factor compared to an adjectivally rated past performance factor are that it requires less evaluation effort (particularly in the trade-off analysis), presents lower protest risk, and facilitates a quicker award.

Regardless of whether agencies decide on an adjectivally rated or Acceptable/Unacceptable past performance factor, there are some other differences they should keep in mind in evaluations.

Other Information: Unlike other non-past performance evaluation factors, evaluators may look beyond the four corners of the proposal to consider past performance information that the offeror has not provided; moreover, in certain situations, the evaluators must consider certain past performance information outside of the proposal that is “too close at hand” to ignore. This is an important aspect of planning for a past performance evaluation, and government evaluators should carefully consider what past performance information they currently have on hand to assess what information they may be required to consider in their evaluation.

Opportunity to Respond to Adverse Past Performance: In assessing other past performance information that they have on hand, government source selection teams must also determine whether an offeror has had an opportunity to respond to any adverse past performance information the evaluators will consider. If the offeror has not had an opportunity to respond to it, evaluators must provide the offeror such an opportunity. This rule, however, has several caveats. First, it does not apply to neutral or positive past performance; the government has no obligation to communicate with the offeror about non-adverse past performance information. Second, this obligation exists even if the government does not go into discussions. Importantly, FAR 15.306(a)(2) specifically exempts communications that give an offeror an opportunity to respond to adverse past performance information from triggering discussions; this is important to keep in mind where agencies want to make award on initial proposals. Third, in many cases, offerors have already had an adequate opportunity to respond to adverse past performance information, which limits the number of situations in which this obligation applies. For instance, when using Contractor Performance Assessment Reporting System (CPARS) data, the CPARS process gives contractors ample opportunity to respond to adverse past performance information that the customer documents. Even if the offeror chooses not to respond, having that opportunity during the CPARS process is sufficient to avoid triggering the need to give the offeror another opportunity to respond. Overall, evaluators must carefully consider what information they have available, and its provenance, to assess whether they must use past performance information and whether they must give the offeror a chance to respond to that information when the evaluators intend to consider it.
Regardless of how they decide these questions, the evaluators must ensure that their evaluation documentation provides sufficient contemporaneous discussion of their analysis.

Past Performance as a Potential SBA Nonresponsibility Issue: Small businesses can also present certain challenges in a past performance evaluation. Specifically, where an agency determines that a small business offeror’s past performance record provides No Confidence (under an Acceptable/Unacceptable factor), GAO could consider such a finding to be “a determination of nonresponsibility” for a small business, which requires the agency to ask the Small Business Administration (SBA) for a final determination using its certificate of competency procedures. In fact, GAO has sustained protests where a procuring agency fails to seek a determination from SBA using its certificate of competency procedures for a nonresponsibility determination. Evaluators applying a No Confidence rating under an adjectivally rated past performance factor, as opposed to an Acceptable/Unacceptable factor, should ensure that their past performance findings are based on responsiveness concerns or are clearly separate and distinct from analysis of the offeror’s responsibility.

Overall, past performance evaluations have some distinctive mechanics and issues that government source selection teams should carefully consider in building their evaluation schemes. Despite these methodological differences, however, many of the general strategic recommendations that apply to other non-cost/price evaluation factors are equally important in structuring a past performance evaluation factor. As such, government source selection teams should also aim to structure their past performance factors as simply, clearly, and efficiently as possible.

2. Cost/Price Evaluation Strategies

As with the non-cost evaluation factors, government source selection teams should carefully consider how broadly and deeply they want to commit to evaluating the offerors’ proposed prices. As discussed in depth in Section II.A.2, the most critical decision in this area is what contract type to use for the solicitation. In nearly all competitive situations, a fixed-price type effort will require substantially less evaluation time and effort than a cost-reimbursement effort to reach award successfully. The primary reason is the cost realism analysis required for cost-reimbursement contracts, which brings substantially higher litigation risk and complexity. As such, government evaluation teams should generally favor fixed-price type deals to minimize the pre-award work for all parties, to reduce the complexity of their evaluation documentation, and to improve the defensibility of their awards.

Where the government opts to use a fixed-price contract type, it only needs to conduct a price reasonableness analysis, which is a simple comparative analysis of top-line proposed prices where the government receives adequate price competition. For fixed-price contract types, the greatest solicitation risk is accidentally triggering another type of analysis—price realism analysis—which is not required by statute or regulation and is significantly more complicated than price reasonableness analysis. For the most part, avoiding price realism involves deleting any solicitation language that requires the government to evaluate whether the offeror’s proposed price is too low to actually allow it to perform. The following section explores these concepts in more depth.

However, where a government source selection team selects a cost-reimbursement contract type, most of its strategic decisions involve intentionally limiting the scope of the cost realism analysis, which allows for a reduction in the cost realism data that the solicitation must require of offerors. These limits can include the following: limiting the subcontractor costs the government will evaluate in its cost realism analysis; setting a common escalation rate for direct labor costs; and removing Other Direct Costs from the government’s cost realism analysis. Additionally, the government source selection team must carefully scrutinize whether the solicitation asks for all of the information that the government will need to successfully conduct its cost realism analysis of those cost elements that it plans to evaluate without going into discussions.

a. Fixed-Price: Price Reasonableness and Avoiding Price Realism

Price reasonableness evaluations offer substantial benefits for quickly awarding a competitive fixed-price source selection because “[n]ormally, competition establishes price reasonableness,” and, if the prices are based on adequate price competition, no additional data is needed from offerors. FAR 15.404-1(b)(2) provides various techniques to ensure a fair and reasonable price, but there is a preference for the first two: comparing the prices received to one another or comparing the prices received to historical pricing information. The benefits of a firm-fixed-price solicitation with a price reasonableness evaluation are that it can be accomplished quickly, it is simple, and the evaluation guards against paying too high a price for a contract.

While a price reasonableness evaluation is generally straightforward in situations in which the government receives adequate price competition, source selection teams should take care to ensure that they meet the limited requirements placed on price reasonableness analyses. In particular, agencies should avoid three major pitfalls.

First, agencies should carefully avoid asking for additional pricing information in the solicitation. In most cases, FAR 15.404-1(b)(2) permits agencies to complete a price reasonableness analysis using only total proposed prices. Nevertheless, where an agency elects to request data from offerors to support their proposed prices, GAO has found that an agency cannot reasonably ignore additional information that it has chosen to request. As such, where an agency receives information that it requested in the solicitation, it must consider the impact of that data and document its analysis of this information. Because this additional analysis is not required, it is often a waste of time and effort for both the offerors and the government.

Second, an agency must ensure that it follows its solicitation and then contemporaneously documents its analysis. For instance, if the solicitation states the government’s total evaluated price will be based on the total price for all base requirements and options, the agency must evaluate and document all proposed periods of the contract and not just the base period. Moreover, this same principle—that agencies must diligently follow the solicitation’s evaluation ruleset—applies to any other analyses that the agency tacks on beyond a simple comparative price reasonableness analysis. Therefore, agencies should actively avoid adding any language in fixed-price type efforts that complicates what should be a simple comparative price reasonableness analysis.

Third, the GAO has established that the mere receipt of multiple proposals does not establish fair and reasonable pricing; rather, the agency must compare the prices of the proposals received. Agencies can avoid this error through proper documentation of the agency’s analysis by simply showing a comparison of the offerors’ respective proposed prices.
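The documentation of a simple comparative price reasonableness analysis can be quite compact. The following is a minimal sketch, using entirely invented offeror names and prices, of the comparison such documentation might capture:

```python
# Hypothetical illustration of a simple comparative price reasonableness
# analysis: each offeror's total evaluated price is compared to the others.
# All offeror names and prices below are invented for this sketch.

proposed_prices = {
    "Offeror A": 42_500_000,
    "Offeror B": 45_100_000,
    "Offeror C": 39_800_000,
}

# Average of all proposed prices, used here as the comparison baseline.
average = sum(proposed_prices.values()) / len(proposed_prices)

# Document each offeror's price relative to the field, lowest first.
for offeror, price in sorted(proposed_prices.items(), key=lambda kv: kv[1]):
    delta_pct = (price - average) / average * 100
    print(f"{offeror}: ${price:,} ({delta_pct:+.1f}% vs. average of ${average:,.0f})")
```

This is only one way to show the comparison; the key point, per the GAO decisions discussed above, is that the record reflects an actual comparison of the proposed prices rather than the mere fact that multiple proposals were received.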

Beyond these three issues that apply to all fixed-price contracts, another issue applies to FPIF contracts. Where the government source selection team opts to use an FPIF contract, it must determine whether it intends to evaluate, through a cost realism analysis, the probable costs that the offeror will incur between the target price and the ceiling price, since there is some bounded price flexibility in FPIF contracts. In general, teams should avoid conducting a cost realism analysis on an FPIF contract and, instead, notify all offerors in the solicitation that the government will evaluate all FPIF efforts at ceiling. This is permissible because the ceiling price for an FPIF contract will be the government’s maximum cost exposure under that contract type; as such, it will not bear the risk of cost increases beyond the ceiling price. This approach greatly simplifies the evaluation of FPIF contracts by avoiding all of the issues attendant to a cost realism analysis.

Recommendation: Wherever possible, government source selection teams should actively pursue price evaluation schemes that are limited to a simple, comparative price reasonableness analysis that is unencumbered by additional data or unnecessarily convoluted calculations.

b. Avoiding Price Realism

Price realism is a distinct concept from price reasonableness. While price reasonableness focuses on whether an offeror’s proposed price is too high, price realism focuses on whether an offeror’s proposed price is too low to perform. Price realism is never required, yet there are several ways that poor solicitation drafting can accidentally trigger a requirement to perform unwanted price realism. Moreover, while the FAR does not use the term “price realism,” GAO frequently uses that term to describe the analysis in FAR 15.404-1(d)(3), which allows that “cost realism analysis can be used on competitive fixed-price incentive contracts or in exceptional cases, on other competitive fixed-price-type contracts.”

In general, price realism is an unwanted complication to an otherwise simple price reasonableness analysis. The FAR guidance on this analysis is scant, and the improper or inadvertent application of price realism analysis frequently leads to sustained protests at the GAO.

Nevertheless, price realism can be an important technique for certain exceptional procurements, such as instances where the agency needs to determine whether an offeror is bidding so low that it jeopardizes successful performance. Accordingly, agencies should carefully weigh the risks and benefits of requiring a price realism analysis.

i. Do not require a price realism analysis unless absolutely necessary

In the exceptional situations in which an agency intentionally decides to conduct a price realism analysis, it should clearly state that intention in the solicitation. Additionally, the agency must consider that, from an evaluation mechanics perspective, the government cannot make cost adjustments to a firm-fixed price, so it must plan for a price realism analysis that results instead in performance risk findings under the non-price factors or impacts the responsibility determination. GAO has stated that agencies can use a variety of methods to assess the price realism of an offeror’s proposal, including a) analyzing pricing information proposed by the offeror; and b) comparing proposals received to one another, to previously proposed or historically paid prices, or to an independent government cost estimate (IGCE). Additionally, even if the offeror’s proposed prices are lower than the historical prices paid or the IGCE, agencies can reasonably determine that different quantities, performance conditions, contract terms, or similar factors support a finding of technical competence or understanding despite the offeror’s lower prices, but agencies must document this analysis.

Recommendation: Since agencies are not required by statute or regulation to perform a price realism analysis, government source selection teams should not require a price realism analysis in their solicitations, unless absolutely necessary. In those “exceptional cases” in which an agency elects to perform a price realism analysis, the agency’s solicitation should explicitly describe conducting a price realism analysis to evaluate whether an offeror’s price is so low that it indicates increased performance risk or a technical misunderstanding. In these cases, the solicitation should also request all of the information necessary to support the agency’s price realism analysis, which is the same information necessary to conduct a cost realism analysis, as described in Section II.C.2.b.i.

ii. Avoid inadvertently triggering a price realism analysis

On the other hand, where the government source selection team wants to avoid price realism, it should carefully scrutinize its solicitation to remove any language that may accidentally trigger a price realism analysis. Inadvertently triggering a price realism analysis is one of the biggest protest risks for fixed-price contracts; in these cases, the agency’s record is almost always insufficiently documented because the agency never intended to perform a price realism analysis, and it likely lacked the detailed cost data necessary to perform such an analysis. Beyond the risk of protest loss, correcting this issue can force the agency to revise its solicitation or to enter into discussions to get the necessary information from the offerors.

One of the ways agencies can inadvertently trigger a requirement to perform a price realism analysis is inclusion of the Professional Employee Compensation clause (FAR 52.222-46) in the solicitation. This clause requires the government to compare the incumbent professional compensation to the proposed professional compensation because recompetition of services contracts may result in lower compensation that may impact performance. GAO has stated, “In the context of fixed-price contracts, our Office has explained that this FAR provision anticipates an evaluation of whether an awardee understands the contract requirements, and has proposed a compensation plan appropriate for those requirements—in effect, a price realism evaluation regarding an offeror’s proposed compensation.” GAO has consistently held that if FAR 52.222-46 is included in the solicitation and the government does not evaluate offerors’ proposed compensation information, GAO will sustain the protest. The government should aim to exclude FAR 52.222-46 from its solicitations wherever possible.

Another way the agency can inadvertently trigger a price realism analysis, without inserting a clause or specifically stating that it will perform one, is through the inclusion of certain terms and concepts. Generally, if the solicitation states that the agency will review prices to determine whether they are so low that they reflect a lack of technical understanding, or provides that a proposal can be rejected for offering prices that are too low, the solicitation may accidentally trigger a requirement for the agency to conduct a price realism analysis. Similarly, where a solicitation states that the agency will evaluate whether “prices demonstrate a lack of comprehension of the technical requirements,” are “incompatible with the scope of effort,” are “unrealistically low,” or similar phrases, this language could lead the GAO to determine the solicitation requires the agency to conduct a price realism analysis.

Recommendation:  If an agency does not have a requirement or intend to perform a price realism analysis, it should not include FAR 52.222-46 or terms that require it to determine whether proposed prices are so low that they reflect a lack of technical understanding or increased performance/technical risk, as these statements could inadvertently trigger a price realism analysis.

c. Cost-Reimbursement: Cost Realism Solicitation Strategies—Ask for What You Need and Limit What You Must Review

When government source selection teams choose to employ a cost reimbursement type contract, the FAR requires them to conduct a cost realism analysis. The reason for this requirement is that, in a cost reimbursement contract, “an offeror’s proposed estimated costs are not dispositive because, regardless of the costs proposed, the government is bound to pay the contractor its actual and allowable costs.” As a result, an agency must conduct a cost realism analysis “to guard [the agency] against unsupported claims of cost savings by determining whether the costs as proposed represent what the government realistically expects to pay for the proposed effort.”

In general, the mechanics of cost realism are easy to describe, but complex to implement. At its most basic level, a cost realism analysis compares an offeror’s proposed costs against relevant, real-world substantiating data to make judgments about the accuracy of the offeror’s proposed costs. Despite this basic rule, cost realism analyses often involve hundreds of individual cost elements spread across numerous (potentially inconsistent) prime and subcontractor pricing spreadsheets, and individual historical records.

In general, these cost elements fall into several broad buckets: direct labor costs (which build up from labor hours, labor mix, direct labor rates, escalation rates, and (if proposed) uncompensated overtime rates), direct material costs, other direct costs (often incidental material and travel costs), and indirect rates. In the LOE services setting, proposals may not present any direct material cost; furthermore, where the solicitation specifies total hours and a labor mix, the government need not review the proposed hours for realism, as long as evaluators confirm that the offeror bid to the solicitation’s specified hours and labor mix. Nevertheless, cost realism analyses for LOE services involve the majority of these cost elements.
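To make the arithmetic of a direct labor build-up concrete, the following is a minimal sketch of a fully burdened cost for a single labor category over three years. All rates, hours, indirect factors, and the ordering of burden application are invented for illustration; real proposals contain many such build-ups and must follow each offeror’s own disclosed accounting practices:

```python
# Hypothetical build-up of a fully burdened direct labor cost for one labor
# category across a three-year LOE effort. Every figure here is invented.
base_direct_rate = 55.00   # year-1 direct labor rate ($/hr)
escalation = 0.03          # assumed annual escalation rate
hours_per_year = 1_920     # hours for this category per year
fringe, overhead, ga = 0.30, 0.45, 0.08  # example indirect rates

total = 0.0
for year in range(3):
    # Escalate the direct rate, then apply fringe and overhead burdens.
    direct_rate = base_direct_rate * (1 + escalation) ** year
    burdened_rate = direct_rate * (1 + fringe) * (1 + overhead)
    yearly = burdened_rate * hours_per_year
    yearly *= (1 + ga)  # apply G&A on the burdened base (illustrative only)
    total += yearly
    print(f"Year {year + 1}: burdened rate ${burdened_rate:.2f}/hr, cost ${yearly:,.2f}")

print(f"Three-year total for this category: ${total:,.2f}")
```

Even this toy example shows why cost realism analyses grow complex quickly: each element (rate, escalation, each indirect factor) is a separate item the evaluators must trace to substantiating data, and a full proposal multiplies this across dozens of labor categories and subcontractors.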

Despite this complexity, the FAR provides minimal prescriptive guidance for COs in setting up and conducting a cost realism analysis. As such, it is critical for government source selection teams to carefully ensure that their solicitations ask for the huge amount of information that is required to complete a cost realism analysis successfully. Furthermore, government source selection teams should actively exercise the agency’s discretion to simplify and document an analysis that can otherwise quickly become painfully convoluted and time-consuming.

Before addressing these best practices and options to simplify a cost realism analysis, a brief overview of the basics of cost realism law and regulation will lay the groundwork for the recommendations.

The purpose of a cost realism analysis is to guard the agency against unsupported claims of cost savings because, regardless of the costs proposed, the agency is bound to pay the contractor its actual and allowable costs. The FAR defines cost realism as:

[T]he process of independently reviewing and evaluating specific elements of each offeror’s proposed cost estimate to determine whether the estimated proposed cost elements are realistic for the work to be performed; reflect a clear understanding of the requirements; and are consistent with the unique methods of performance and materials described in the offeror’s technical proposal.

To conduct a cost realism analysis, the agency considers the proposed costs and technical approach of each offeror and develops what it determines is the “best estimate of the cost of any contract that is most likely to result from the offeror’s proposal.” Many agencies refer to this as the “probable cost” or the “Total Evaluated Cost/Price.” The agency arrives at this “probable cost” by adjusting each offeror’s proposed cost (and sometimes fee) up to a realistic level for any cost elements that are not supported by reliable substantiating data. However, regardless of any adjustments the agency makes to an offeror’s proposed cost for evaluation purposes, the agency will award the contract at the offeror’s proposed cost.
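As a simplified illustration of this adjustment process, the sketch below uses invented cost elements and substantiating estimates. It reflects only the upward-adjustment convention described above, not any agency’s actual methodology:

```python
# Hypothetical probable-cost adjustment: where a proposed cost element is
# below the government's substantiated estimate, adjust it upward for
# evaluation purposes; elements at or above the estimate keep the proposed
# value. All figures below are invented for this sketch.

proposed = {"direct labor": 8_200_000, "subcontracts": 3_100_000, "ODCs": 450_000}
substantiated = {"direct labor": 8_900_000, "subcontracts": 3_000_000, "ODCs": 450_000}

probable = {}
for element, cost in proposed.items():
    probable[element] = max(cost, substantiated[element])  # upward adjustment only
    if probable[element] != cost:
        print(f"Adjusted {element}: ${cost:,} -> ${probable[element]:,}")

print(f"Proposed cost: ${sum(proposed.values()):,}")
print(f"Probable cost: ${sum(probable.values()):,}")
```

In this sketch, only the unsupported direct labor element is adjusted, and the award would still be made at the offeror’s proposed cost even though the trade-off analysis uses the higher probable cost.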

The exact methodology the agency uses for a cost realism analysis can vary and should consider each offeror’s proposed approach and costs. As described above in Section II.A.2, an agency’s cost realism analysis is a frequent protest grounds before the GAO.

GAO has provided a few examples of cost realism methodologies that agencies should avoid. For instance, GAO has indicated that it is unreasonable for an agency to limit its cost realism evaluation only to assessing fully burdened hourly rates because a cost realism analysis should consider whether the proposed direct labor rates are realistic. Additionally, an agency may not “mechanically apply its own estimates for labor hours or costs—effectively normalizing cost elements of an offeror’s proposal to government estimates without considering the offeror’s unique technical approach.” Similar to other evaluation areas, a cost realism analysis will be found unreasonable where the agency fails to contemporaneously document its assessment of the realism of the awardee’s proposed rates. Nevertheless, GAO’s guidance about what to avoid is not a complete guide to efficient cost realism best practices.

Additionally, the regulatory guidance on conducting a reasonable cost realism analysis is very general. Neither the FAR, the Defense Federal Acquisition Regulation Supplement (DFARS), nor the DFARS Procedures, Guidance, and Information (PGI) provide practitioners with significant guidance about how to perform the cost realism analyses or what data to rely on. To help fill in this void, the following sections provide clear guidance and sample solicitation language about how to conduct an efficient and defensible cost realism evaluation.

i. Explicitly ask for the data necessary to complete a cost realism analysis in the solicitation

In a cost realism analysis, there are essentially two broad categories of data an offeror must provide in its proposal: proposed costs and substantiating data. In a perfect world, the offeror would support each proposed cost element with a corresponding substantiating data point that the proposal provides and clearly traces to the proposed cost. There is a wide variation in how well individual companies do in providing the required data and traceability; in fact, many are not aware of the breadth of information necessary to complete a cost realism analysis. As such, agencies should clearly identify the cost realism data that they need and exclude extraneous data that can complicate clear traceability of the data within the proposal.

With respect to proposed costs, the government should request a full cost build-up of the offerors’ proposed prices in the evaluated cost-reimbursement CLINs. Frequently, government source selection teams create a proposed cost build-up template that they include in the solicitation and require the offerors to complete. Since this Excel spreadsheet will be one of the primary methods for the government evaluators to calculate adjustments to the offerors’ proposed costs, the solicitation should explicitly require offerors to provide a spreadsheet that remains editable and functional if the individual cost elements are edited. The following is one approach to addressing this issue:

Offerors shall provide an Excel workbook that calculates its total proposed costs using the format provided in Attachment 1 [a government-developed Excel format attachment]. This spreadsheet must have all formulas visible and editable; it may not contain macros; and the spreadsheet’s calculations must comply with the Offeror’s proposed accounting and billing practices. If the proposed accounting and billing practices differ in any way from the Offeror’s current or approved practices, the Offeror must clearly note these changes.

Electronic copies of these tables shall be submitted in MS Excel format and shall have the ability to be edited (hours, rates, etc.) to immediately observe the impact to the Total Cost via links and formulas native to MS Excel (that is: not an embedded picture within MS Excel). If the Attachment 1 links to or draws information from another spreadsheet, this other spreadsheet must also be provided with all formulas visible and editable.

The cost/price data shall include all major cost elements (e.g., direct labor by category/rate/hours, fringe rate and amounts, overhead rate and amounts, G&A rate and amounts, cost of money factor/rate and amount, escalation, subcontracts, etc.) and fees.

Furthermore, the solicitation should clearly charge the offerors with linking these proposed costs to the substantiating data that they provide and the proposed technical approach. For instance, the solicitation could state:

The costs proposed in Attachment 1 shall be directly traceable to the staffing provided in the proposed Staffing Plan. Any inconsistency between the named individuals, labor categories, labor mix, time phasing, or individual hours provided in the proposed Staffing Plan and the Attachment 1 may result in a cost adjustment, an assessment of performance risk, and/or a determination that the Offeror is ineligible for award.

Furthermore, the historical direct rates included for each named individual or labor category in Attachment 1 must match the corresponding information provided in the Substantiating Cost Information section. Where the Offeror must provide company-wide highest, lowest, and average direct labor rate actuals to substantiate a direct labor cost, it shall include the average direct labor rate actual in the historical rate column on Attachment 1. If any proposed direct labor rate is lower than the historical rate provided, the Offeror shall explain the reason for the reduction in the narrative.

Beyond the proposed costs, the solicitation should clearly describe the substantiating data that the government needs to complete its cost realism analysis. Doing so in the solicitation gives all offerors the opportunity to provide the exact cost substantiation requested by the agency; this has several benefits. Most critically, it can avoid a situation in which the government is forced into discussions because none of the offerors provided sufficient substantiating data to survive a protest. It also reduces the likelihood that the government will have to make adjustments to an offeror’s proposed costs or identify additional cost risks for a lack of cost realism substantiating data in its proposal. Additionally, this suggested language gives all offerors a clear idea of the information the government will use to develop the offerors’ respective total evaluated costs. Nevertheless, while the agency may provide an explicit list of the cost substantiation it requires of the offerors, the solicitation should still specify that the offeror has the burden of demonstrating the realism of its proposed costs.

The following recommended language clearly describes the substantiating data an agency needs to conduct a cost realism analysis of a typical LOE services proposal:

In its cost realism analysis, the agency desires to use the most relevant, reliable data available to capture the probable cost for each major cost element. Since each Offeror bears the burden of demonstrating the realism of its proposed costs, each Offeror must substantiate its proposed costs, as presented in its Attachment 1, with relevant, reliable data that demonstrates the realism of each proposed major cost element. The agency has already determined that certain types of information are necessary for its review, so each Offeror must provide substantially all of the following information to be eligible for award:

(a) Current Named Individual Direct Rate Supporting Documentation: Offerors or major cost reimbursement subcontractors shall provide a screen-capture (or equivalent) from the employer’s payroll system for each current employee, Key and non-Key, named in the Offeror’s Staffing Plan. The Offeror shall fully explain all pertinent data on a sample screen capture. The Government must be able to derive the individual’s direct rate (both inclusive and exclusive of the impact of uncompensated overtime, if proposed) from the screen capture information provided by the Offeror.

(b) Contingent Hire Direct Labor Rate Supporting Documentation: Offerors or major cost reimbursement subcontractors shall clearly indicate named contingent hires, Key and non-Key, on their Staffing Plans and Attachment 1. The company intending to hire a contingent hire shall provide a signed contingent hiring agreement that explicitly lists the agreed-upon annual salary for the named individual and the amount of uncompensated work required. The Offeror shall fully explain all pertinent data in the contingent hire agreement. The Government must be able to derive the individual’s direct rate (both inclusive and exclusive of the impact of uncompensated overtime, if proposed) from the contingent hire agreement information provided by the Offeror.

(c) Unnamed Direct Labor Rate Supporting Documentation: For any proposed labor category direct labor rates that are unsupported by either a screen-capture or a contingent hiring agreement, such as “To Be Determined” positions, the Offeror or its major cost reimbursement subcontractors shall provide the current, company-wide highest, lowest, and average direct labor rate actuals for the applicable labor category.

(d) Uncompensated Overtime Supporting Documentation: If any Offeror or any subcontractors (major or minor) propose uncompensated overtime, each must comply with [Uncompensated Overtime clause]. Moreover, if any Offeror or major cost reimbursement subcontractor proposes uncompensated overtime or direct labor rates decremented for the impact of uncompensated overtime, it must substantiate the cost reductions associated with its proposed use of uncompensated effort. This substantiation must include a description of the formulas applied to calculate the decremented rate (and/or decrement factor) and some form of historical data to demonstrate that the proposed level of uncompensated overtime is realistic. Such historical data might include the company’s historical average annual level of uncompensated overtime from preceding years and/or historical data demonstrating that the company’s proposed decremented rates are equal to or greater than historical actual incurred decremented direct labor rates for corresponding labor categories from preceding years, after adjusting them for annual escalation. In accordance with FAR 52.237-10 Identification of Uncompensated Overtime, if uncompensated time is included in the offer or any of the supporting cost data, the uncompensated time should be clearly identified with an explanation as to why it is needed.

(e) Indirect Rate Supporting Documentation: Offerors shall provide five years of actual incurred rates for each proposed indirect and G&A pool, indicating the beginning and end dates for each fiscal year. Each Offeror shall provide this data for itself and shall ensure that the Government receives this information for any major cost-reimbursable subcontractors. If an Offeror, or any of its subcontractors, proposes to cap any of its indirect rates, it shall identify each capped rate and shall propose a legally binding and enforceable clause capping the rates, which shall be included in the resultant task order award. The Offeror’s legally binding and enforceable clause shall specifically identify the indirect rate category proposed to be capped and the associated capped percentage. A proposed clause shall include a process for verification by the Government. NOTE: If a contractor does not have five years’ worth of actual incurred indirect data for any particular proposed indirect rate, it must provide the required information dating from the origin of the company.

(f) FPRA/FPRR Information: A list of all Forward Pricing Rate Agreements the Offeror or its major cost-reimbursable subcontractors have entered into with the Defense Contract Management Agency that apply to any of the major cost elements they propose or a statement that none apply to the proposal. This list should include contact information for the DCMA office that executed the agreement. Provide a current copy of any agreement contained on this list. Offerors should also provide contact information for any office that has issued an applicable Forward-Pricing Rate Recommendation for it or major cost-reimbursement subcontractors.

(g) Subcontractor Costs: Each major cost reimbursement subcontractor shall provide all of the information required of the prime contractor under the Supporting Cost Data sections of this solicitation (i.e., a complete Attachment 1, a corresponding Cost Analysis Narrative, and all necessary Substantiating Cost Information) for those portions of the work subcontracted to them. That said, subcontractors need not submit separate Section B pricing; instead, the subcontractor costs should match the corresponding subcontract costs in the Offeror’s Attachment 1. A subcontractor’s detailed information may be submitted separately to the Government if the subcontractor does not wish to provide this data to the prime Offeror; in that case, the subcontractor must submit its information directly to the Government via [instructions]. For cost/price summary data provided separately, subcontractors shall place the appropriate restrictive legend on their data and identify the company name, address, point of contact, and solicitation number. Subcontractors are required to provide contact information for their cognizant DCAA branch office with the name and phone number of a DCAA point of contact who is familiar with their company. Failure to provide the required subcontracting cost data may render the prime’s offer ineligible for award.

The above list of substantiating information is necessary for the agency’s cost realism analysis, but is not a complete list of the data that may be required to demonstrate the realism of the Offeror’s proposed rates. Therefore, the agency encourages Offerors to provide additional substantiating information as necessary to demonstrate the cost realism of their proposed costs. Nevertheless, as with any substantiating data, merely providing the substantiating data, without sufficient analysis and explanation of the relevance and reliability of that data in the Cost Analysis Narrative, will not demonstrate cost realism. As discussed in the solicitation, the Cost Analysis Narrative must clearly explain the reliability of all of the substantiating cost information provided and its relevance to the Offeror’s cost analysis. Providing substantiating information, without demonstrating its relevance, may indicate that the Offeror lacks an understanding of the costs involved in performing the solicitation’s requirements, which would indicate performance risks. NOTE: Offerors shall not rely on any Forward Pricing Rate Proposals or Provisional Billing Rates to provide any form of cost realism substantiation. These submissions, which lack meaningful Government realism review, are insufficient to demonstrate the realism of an Offeror’s proposed rates.
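The uncompensated overtime decrement described in item (d) is, at bottom, simple arithmetic. The sketch below is a hedged illustration of one common calculation; it is not a formula the solicitation mandates, an offeror's actual computation must follow its disclosed accounting practices, and the salary and hours figures are invented.

```python
STANDARD_WORK_YEAR = 2080  # hours: 40 hours/week x 52 weeks

def direct_rates(annual_salary, uncompensated_hours):
    """Direct labor rate with and without the impact of uncompensated
    overtime, using the common salaried-decrement approach: the same
    annual salary is spread over more hours, lowering the hourly rate."""
    standard_rate = annual_salary / STANDARD_WORK_YEAR
    decremented_rate = annual_salary / (STANDARD_WORK_YEAR + uncompensated_hours)
    decrement_factor = STANDARD_WORK_YEAR / (STANDARD_WORK_YEAR + uncompensated_hours)
    return standard_rate, decremented_rate, decrement_factor

# Invented figures: a $104,000 salary with 208 hours/year of uncompensated effort.
std, dec, factor = direct_rates(104_000, 208)
```

Under these assumed figures, the standard rate is $50.00/hour, the decremented rate is roughly $45.45/hour, and the decrement factor is roughly 0.909; the historical data required by item (d) is what lets evaluators test whether a proposed factor of this kind is realistic.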

As shown in the recommended language above, the solicitation should also specify the potential actions that the agency may take in response to missing substantiating cost information to clearly incentivize offerors to provide all of the required substantiating data.

For example, in AECOM Management Services Inc., the Army’s solicitation required at least one of four forms of indirect cost rate substantiation for proposed subcontractors. After receipt of proposals, the Army determined that, since the subcontractor’s proposal did not provide the solicitation-defined subcontractor indirect information, the prime offeror was ineligible for award. GAO upheld this determination, applying a well-established proposition of law that “[a]n offeror is responsible for submitting a well-written proposal, with adequately detailed information which clearly demonstrates compliance with the solicitation requirements and allows a meaningful review by the procuring agency.”

While finding an offeror ineligible for award based on missing information may not be the most advantageous approach in all procurements, specifying particular actions that the agency may take in response to an offeror’s failure to provide all of the necessary substantiating data is important to incentivize offerors so the agency receives the data it needs. Having all of the appropriate substantiating data results in the government identifying fewer cost risks in, and making fewer adjustments to, the offerors’ proposed costs. This reduces the time evaluators must spend documenting these findings and reduces the overall complexity of any cost realism protest defense.

Recommendation: Government source selection teams should ensure that their solicitations clearly require 1) a proposed cost build-up spreadsheet that evaluators can use to calculate adjustments, and 2) all of the necessary substantiating data required to support the offeror’s proposed cost elements. Moreover, the solicitation should specify that the offeror has the burden of demonstrating the realism of its proposed costs and charge offerors with ensuring traceability between the proposed costs, the substantiating data they provide, and their proposed technical approach.

ii. Explicitly limit the scope of a cost realism analysis in the solicitation

Beyond asking for the right data, agencies must define, at a high level, how they will calculate the Total Evaluated Cost/Price. This is a great opportunity for agencies to limit the scope of their cost realism analyses; such limits will save offerors and evaluators critical time and effort.

Typically, agencies specify that the Total Evaluated Cost will be the sum of all or most of the evaluated costs for each of the CLINs. In many cases, however, the solicitation includes a variety of contract types or evaluation schemes for different types of CLINs. For instance, if the solicitation includes both FFP LOE and LOE cost-reimbursement CLINs, the solicitation should specify that the Total Evaluated Cost/Price will be the sum of the FFP LOE CLINs (evaluated at the government labor mix and hours using the applicable fixed rates) and the government’s evaluated cost resulting from its cost realism analysis for each cost-reimbursement CLIN. That said, for certain small or difficult-to-analyze CLINs, agencies should consider excluding them from the Total Evaluated Cost/Price or applying a CLIN-specific evaluation rule (such as a plug number or evaluating FPIF CLINs at ceiling) that the agency expressly states in the solicitation. Irrespective of the combination of contract types and CLINs, the solicitation must specify how the government intends to calculate the Total Evaluated Cost/Price and should provide explicit guidance for all of the CLINs.
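As a hedged sketch of how these CLIN-by-CLIN rules combine into a single Total Evaluated Cost/Price (the CLIN numbers, rule labels, and dollar figures below are all hypothetical):

```python
# Each CLIN carries whatever evaluation rule the solicitation expressly assigns it;
# the figures already reflect that rule (e.g., the FPIF CLIN is priced at ceiling).
clins = [
    {"clin": "0001", "type": "FFP LOE", "rule": "government mix/hours at fixed rates", "evaluated": 4_200_000},
    {"clin": "0002", "type": "Cost",    "rule": "cost realism result",                 "evaluated": 9_750_000},
    {"clin": "0003", "type": "ODC",     "rule": "solicitation plug number",            "evaluated": 500_000},
    {"clin": "0004", "type": "FPIF",    "rule": "evaluated at ceiling",                "evaluated": 1_100_000},
]

# The Total Evaluated Cost/Price is simply the sum of the CLIN-level results.
total_evaluated_cost = sum(c["evaluated"] for c in clins)
```

Because each CLIN’s rule is stated in the solicitation, every offeror’s total is computed the same way, which is what gives the agency an apples-to-apples comparison.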

Agencies should also consider whether there are other opportunities within their cost-reimbursement CLINs to limit the aspects of the proposal on which the agency will perform a detailed cost realism analysis. This will simplify and speed up the agency’s cost realism evaluation. GAO permits such tailoring as long as the government’s chosen methodology still provides a reasonable measure of confidence that the rates proposed are realistic. This tailoring does not need to include all of the proposed costs; in some cases, GAO has found that an agency’s cost realism analysis was reasonable even though it evaluated only eighty-six percent of the proposed hours. Furthermore, if the government specifies a particular cost realism evaluation methodology or approach in the solicitation, a protester would need to bring a timely challenge of the terms of the solicitation before solicitation close. Typically, offerors are hesitant to file a pre-award protest while they are still actively competing for the work. As such, agencies have a fair amount of flexibility to limit the scope of their cost realism analyses. Agencies should actively and aggressively pursue such limits to save all parties time and effort, while reducing the government’s overall protest risk.

In particular, six approaches to tailoring the Government’s cost realism analysis are generally applicable to LOE service contracting. Specifically, agencies should 1) provide a mandatory escalation rate in the solicitation; 2) expressly exclude Other Direct Cost CLINs from the cost realism analysis by providing a plug number for these costs; 3) expressly exclude “minor” cost-reimbursement subcontractors from the cost realism analysis; 4) expressly exclude fixed-price or T&M subcontractors from the cost realism analysis; 5) expressly state that the Government will only make upward adjustments; and 6) use “even if” counterfactual trades in the award decision documents to limit the risk posed by complex cost realism issues.

a. Providing a mandatory escalation rate in the solicitation

One of the most common adjustments that the government makes in LOE cost realism analyses is to the offeror’s proposed escalation rate, which is the amount of salary growth that an individual may experience each year. This rate varies year to year but is generally a function of the broader labor market, as opposed to any particular action a specific company is taking. Despite this fact, many agencies permit each offeror (and its individual cost-reimbursement subcontractors) to use different escalation rates, while requiring each to provide substantiating data to support those proposed escalation rates. This company-specific escalation approach increases the information that the government must review from each company, and, in many cases, the government still adjusts all of the offerors using an industry-wide index, such as IHS Global Insight escalation rate projections, where those rates are higher than the proposed rates. Because escalation applies to each direct labor rate in most or all of the contract years, it can require detailed updating of a large number of direct labor cost formulas in both the offeror’s and its subcontractors’ proposed cost build-up spreadsheets. This updating can be a substantial undertaking for the evaluators and, given the large number of formulas implicated, is prone to error. Additionally, even if the government does make these adjustments correctly, clearly and efficiently demonstrating this fact in litigation could be challenging and time-consuming.

Instead of relying on this confusing approach to escalation rates, government source selection teams should rely on the fact that escalation rates are primarily driven by the broader labor market and set a mandatory escalation rate for all direct labor in the solicitation. With this approach, all offerors and subcontractors must price their efforts with the same escalation rate that the government would likely have adjusted them to under the previous approach. This reduces the data offerors must provide with their proposals, saves substantial proposal evaluation effort, reduces the number of adjustments (particularly for cost elements the companies have little control over), and removes litigation issues.
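The mechanics of a mandatory escalation rate are simple annual compounding. The sketch below is illustrative only; the $60.00/hour base rate is invented, and the 2.75% figure is just an assumed mandatory rate.

```python
def escalated_rate(base_rate, escalation, year):
    """Direct labor rate in a given contract year (year 0 is the base year),
    with the solicitation's mandatory escalation rate compounded annually."""
    return base_rate * (1 + escalation) ** year

# Assumed figures: $60.00/hour base rate, 2.75% mandatory escalation, four years.
rates = [round(escalated_rate(60.00, 0.0275, year), 2) for year in range(4)]
```

Because every offeror and subcontractor prices with the same compounding, evaluators never have to rework escalation formulas inside each company’s cost build-up spreadsheet.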

The following recommended language is a good starting point for solicitation language to set a mandatory escalation rate:

Escalation: Offerors shall, at a minimum, propose the escalation rates provided in the table below:

[see PDF p. 83]

GAO has specifically upheld an application of this approach in Logistics Management Institute. There, the solicitation instructed offerors to identify the labor escalation rate for each year, identify the source of the proposed rates, and provide a comprehensive description of the methodology and calculations used to establish the proposed rates. The solicitation specified a “minimum escalation factor of 2.75%” and put offerors on notice that adjustments to the proposed escalation rates “may be made by the Government unless adequate justification is provided as to why the Offeror’s escalation rates are fair and reasonable.” Although the protester asserted that it had substantiated the lower escalation rate, GAO denied the argument and the protest. GAO specifically cited the principle that “it is reasonable for an agency to adjust a proposed escalation rate where the solicitation indicated it would use a specified rate unless adequate justification for a different rate is provided.”

Recommendation: Government source selection teams should actively reduce the need for complex escalation adjustments by expressly setting the applicable escalation rate for direct labor costs in the solicitation using the language above.

b. Providing a plug number for ODC CLINs in the solicitation

Other Direct Costs (ODCs) are another cost element that frequently appears in LOE service contracts. ODCs typically refer to costs associated with the contractor purchasing incidental materials or traveling. In most cases, these costs are a small portion of the overall effort, and many COs segregate them into cost-only CLINs to avoid paying fee on them. This is a sound strategy for contract performance and locks in a zero percent fee on these costs, but, if the solicitation remains silent on how to evaluate these CLINs, agencies may find it very difficult to provide all offerors a common basis for competition on these costs; in turn, this will make them extremely difficult to evaluate on an apples-to-apples basis.

Instead of remaining silent, government source selection teams should expressly exclude relatively small ODC CLINs from their detailed cost realism analyses. The solicitation should direct all offerors to bid a plug number that the solicitation provides for these CLINs; it should also expressly state that the government will not conduct a detailed cost realism analysis of the ODC CLINs where the offeror proposes the plug number.

The following recommended language is a good starting point for solicitation language providing a plug number for an ODC CLIN and accompanying evaluation language:

Government estimates of ODCs are provided below, which include travel and incidental material expenses only. The estimates provided below do not account for any burdens such as material handling or G&A. Each Offeror shall apply appropriate burdens in accordance with its disclosure statement. ODCs are not subject to fee.

[see PDF p 84]

The Offeror’s proposed ODCs shall be included in Section B of the offer against each appropriate ODC CLIN. The management of travel between the Offeror and any subcontractors shall be described by the Offeror within the Cost Narrative. In order for any additional expense categories to be allowed as a direct charge under the resulting Task Order, it must be identified and described by the Prime Contractor within the Cost Narrative and be reflected in the applicable CLIN. Reimbursement for Travel will be in accordance with the Joint Travel Regulation (JTR) and solicitation clause B-231-H001 Travel Costs (NAVSEA) (OCT 2018).

Providing a plug number in the solicitation provides a common basis for competition even though it offers no discrimination among offerors on ODCs. The overriding value of this approach is that it allows the agency to move forward with the solicitation even though it does not know its ODC requirements before award. Agencies generally do not want their LOE support services award decisions to hinge on which offeror made a bolder guess about how low the agency’s ODC needs would be. In addition, although the agency does not know what ODCs are required at the time of solicitation and award, it can set up its contract to mitigate risk by requiring CO and/or COR approval or requiring quotes before the awardee incurs ODC costs.

Although the above recommended language is clear that the plug number includes all burdens, some government source selection teams want to provide additional competitive pressure on such burdens. One way to do that is to modify the recommended language to provide an ODC plug number for the direct costs only, but to still require offerors to apply burdens to that plug number, with the agency evaluating those burdens for realism. The advantages of this permutation are that the evaluated cost for the ODC CLINs might more accurately reflect the costs in performance and that it places increased competitive pressure on companies to limit the burdens that they propose to add to ODCs. Yet this approach requires the government to evaluate the realism of the proposed burdens. Although the government likely evaluated the proposed indirect rates in its cost realism analysis of the other cost-reimbursement CLINs, the evaluators must still ensure that the offerors’ detailed cost build-ups correctly applied the burdens and carry over any indirect rate adjustments from the LOE CLINs to the ODC CLINs. This additional evaluation work and documentation generally outweighs the potential benefits of this alternative approach.

Recommendation: Government source selection teams should actively reduce the need to evaluate the realism of hard-to-justify and generally low-dollar-value ODC CLINs by providing a plug number in the solicitation for all offerors to use when bidding and by expressly stating that the government will not conduct a detailed cost realism analysis of the ODC CLINs where the offeror proposes the plug number.

c. Excluding “minor” cost reimbursement subcontractors from evaluation

Cost-reimbursement subcontractors are one of the greatest multipliers of work in a cost realism analysis because, in general, the government must treat each as an individual nested cost realism analysis within the greater cost realism analysis for the prime. Each subcontractor comes with its own direct labor rates, hours, company labor categories, indirect rates, and fee structures. Moreover, the cost-reimbursement subcontractor must also provide substantiating data for each of these cost elements, which it typically does independently of the prime proposal to keep its business-sensitive information private. Often, this results in inconsistencies and disconnects between the hours and mix that the prime proposes for the subcontractor and the hours and mix that the subcontractor proposes. Beyond this additional proposal work, the government evaluators must carefully analyze these proposed cost elements for realism and document their evaluation. As such, each additional cost-reimbursement subcontractor that the government must assess for cost realism adds substantial work. In some instances, particularly where a contractor proposes providing a number of individual consultant subject matter experts on a cost-reimbursement basis, the number of cost-reimbursement subcontractors can balloon to more than twenty per prime offer. This is a massive undertaking from a cost realism perspective and creates an extremely complex record to defend. As such, government source selection teams should work to avoid this outcome.

One of the strategies that government source selection teams should seriously consider to limit this complexity and workload is expressly excluding a subset of relatively small cost-reimbursement subcontractors from the scope of its cost realism analysis in the solicitation. Typically, agencies do this by defining a class of “major” cost-reimbursement subcontractors in the solicitation, which the government will review for cost realism, and a class of “minor” cost reimbursement subcontractors that it expressly excludes from its cost realism analysis. The following provides a good example of these types of definitions.

Major subcontractors are defined as any cost-reimbursement subcontractor performing three percent or more of the total hours under the contract; however, where otherwise minor cost-reimbursement subcontractors cumulatively perform more than ten percent of the total hours under the contract, all cost-reimbursement subcontractors are considered major subcontractors and must propose as such. All major subcontractors must also provide a complete subcontractor cost build-up spreadsheet for its portion of the effort and the same types of substantiating data required of the prime contractor in Section XX.

All subcontractors that do not meet the definition of major subcontractor above are minor subcontractors. Minor subcontractor top-line hours and proposed costs must appear in the Prime Offeror’s Attachment 1 subcontractor calculations. The hours listed there must correspond to the hours included in the Prime Offeror’s Staffing Plan. Minor subcontractors, however, are not required to submit a separate subcontractor cost build-up spreadsheet for their proposed costs or provide substantiating cost realism data.

Although there are several ways to define a “major” subcontractor, the recommended definition above has two notable features.

First, it relies on a comparison of hours, not costs, to determine which subcontractors are major. Since cost realism analyses treat proposed costs as inherently flexible, government cost adjustments to other parts of the prime’s proposal may change the percentage of cost that a particular subcontractor represents of the overall proposal, unless the solicitation is very specific that it is using proposed costs as the basis of the “major” subcontractor definition. This can be very confusing and hard to implement. Furthermore, confirming that a particular subcontractor falls below a particular dollar threshold requires its direct labor rates and indirect rates, which are precisely the types of data a major/minor subcontractor distinction is trying to avoid collecting from minor subcontractors. Relying on a percentage of hours instead makes confirming whether a particular subcontractor is major or minor simple and easy, without requiring any business-sensitive information from the subcontractor.

Second, the recommended definition presents a two-part test; beyond the percentage of hours, it also looks at the cumulative population of otherwise minor cost-reimbursement subcontractors to set a maximum limit on the share of the proposed effort the government is willing to exclude from its cost realism analysis. Essentially, this second part of the test closes the gaming that a stand-alone percentage threshold invites. Under a simple ten percent threshold, for example, an offeror could propose eight subcontractors each performing 9.9% of the hours, leaving the agency to evaluate cost realism substantiation for only 20.8% of the hours in the proposal; under the recommended definition, those subcontractors would all be major. This second part of the major subcontractor definition thus avoids some of the more egregious gaming that can occur under simpler definitions.
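The two-part test can be sketched in a few lines. The three percent and ten percent thresholds below come from the recommended definition above; the subcontractor names and hours are hypothetical.

```python
def classify_subcontractors(hours_by_sub, total_hours,
                            major_threshold=0.03, cumulative_cap=0.10):
    """Two-part major/minor test from the recommended definition.

    Part 1: any cost-reimbursement subcontractor performing at least
    major_threshold of total contract hours is "major".
    Part 2: if the otherwise-minor subcontractors cumulatively perform
    more than cumulative_cap of total hours, ALL subcontractors are major.
    """
    shares = {name: hours / total_hours for name, hours in hours_by_sub.items()}
    tentative = {name: "major" if share >= major_threshold else "minor"
                 for name, share in shares.items()}
    minor_total = sum(share for name, share in shares.items()
                      if tentative[name] == "minor")
    if minor_total > cumulative_cap:
        # Part 2 trips: every cost-reimbursement subcontractor proposes as major.
        return {name: "major" for name in shares}
    return tentative

# Five subcontractors at 2.9% each escape Part 1 individually (each < 3%),
# but cumulatively perform 14.5% > 10%, so Part 2 makes all of them major.
all_major = classify_subcontractors(
    {"SubA": 2_900, "SubB": 2_900, "SubC": 2_900, "SubD": 2_900, "SubE": 2_900},
    total_hours=100_000)

# A 5% subcontractor is major under Part 1; a lone 2% subcontractor stays minor.
mixed = classify_subcontractors({"SubX": 5_000, "SubY": 2_000}, total_hours=100_000)
```

Because the test uses only hours, the agency can confirm each classification directly from the prime’s Staffing Plan without collecting any business-sensitive rate data from minor subcontractors.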

Beyond defining major and minor subcontractors, solicitations should also expressly describe how they will evaluate them. The following provides recommended language for subcontractor evaluations. It addresses both the evaluation of major/minor subcontractors and fixed-price/T&M subcontractors, which this article considers in the next subsection:

In Section L.XX of this solicitation, the Government defines minor subcontractors; considering the small potential cost impact of variations in minor subcontractor costs, the Navy [or applicable agency] will not conduct a cost realism analysis of any Offeror's minor subcontractor costs. Nevertheless, the Navy will review these proposed costs and hours for consistency with the rest of the Offeror's proposal and may adjust minor subcontractor costs or hours for lack of consistency with the rest of the Offeror's proposal. Similarly, the Government will not conduct a cost realism analysis of any fixed-price or fixed-rate (e.g., Firm-Fixed-Price or T&M) subcontractors, as these subcontracting arrangements do not present a meaningful cost risk in performance. The Government will evaluate any proposed Fixed-Price Incentive Fee subcontractors at ceiling without conducting a cost realism analysis of those subcontract costs.

Applying this recommended evaluation language permits the government to significantly limit its cost realism analysis even before it receives any proposals. This restraint greatly reduces the work and complexity for offerors, proposed subcontractors, and government evaluators alike. It also permits a much faster and easier-to-defend cost realism analysis.

Recommendation: Government source selection teams should actively define a subset of "minor" cost-reimbursement subcontractors that they will exclude from their cost realism analysis. In implementing this approach, agencies should include a clear definition of the terms, express limits on the information required from minor subcontractors, and clear evaluation language distinguishing major from minor subcontractors.

d. Excluding fixed-price or T&M subcontractors from evaluation

Another easy way for government source selection teams to limit the work necessary to evaluate proposed subcontractors is to expressly exclude subcontract types that are fixed or for which the government can identify a maximum cost exposure. Most commonly, these are FFP LOE, FPIF LOE, or T&M contract types. The following is an example of the type of Section L instruction that agencies can include in their solicitations:

Non-cost-reimbursement subcontractors may provide fixed rates or fixed prices for each contract year on the Offeror’s Attachment 1, without breaking out direct labor and burdens, but the Offeror shall explicitly note that these costs or rates are fixed by describing the subcontract type (e.g., Firm Fixed Price or Time & Material).

These instructions simply recognize that the government cannot make cost realism adjustments to these subcontract types. Since the government cannot adjust them and generally does not want to conduct a price realism analysis of these subcontract costs, the solicitation should ask for very little from fixed-price or fixed-rate subcontractors. Moreover, even in situations in which the government could make some cost realism adjustments, such as in an FPIF contract between the target cost and the ceiling cost, the government should still consider simplifying its cost realism analysis by evaluating these subcontractors at the ceiling cost, since the government's cost exposure will not increase above the ceiling.

As described in the previous subsection, the solicitation should also include evaluation language to explain this methodology to put all offerors on notice that the government will be using it. The recommended subcontractor evaluation language in that subsection also addresses fixed-price or fixed-rate subcontractors and evaluating FPIF subcontractors at the ceiling.

Recommendation: Government source selection teams should actively limit the information they require from non-cost-reimbursement subcontractors. Further, the government's solicitation should expressly exclude these subcontractors from its cost realism analysis.

e. Expressly state that the government will only make upward adjustments

It is not the government’s responsibility to correct errors in an offeror’s cost proposal. Instead, GAO is clear that it is “an offeror’s responsibility to submit a well-written proposal, with adequately detailed information which clearly demonstrates compliance with the solicitation requirements and allows a meaningful review by the procuring agency.” Furthermore, the fundamental purpose of cost realism is to determine whether a proposed cost is too low, which guards the agency against unsupported claims of cost savings. As such, the government rarely makes downward adjustments to an offeror’s proposed costs. To further limit protest arguments that the agency should have made a downward adjustment, government source selection teams should consider expressly notifying offerors that the agency will only make upward adjustments in its cost realism analysis. This aligns with the purpose of cost realism and discourages frivolous protest grounds based on errors the protester introduced into its own proposal.

The following recommended language is a good starting point for solicitation language to notify offerors that the government will only make upward adjustments:

Offerors should note that the fundamental purpose of a cost realism analysis is to guard the agency against unsupported claims of cost savings by determining whether the costs as proposed represent what the government realistically expects to pay for the proposed effort. Therefore, the government will closely evaluate whether and to what degree each Offeror’s proposed costs are unrealistically low. In a competitive environment, the government will not evaluate whether proposed cost elements are unrealistically high. It is the Offeror’s sole responsibility to demonstrate that its proposed costs are realistic because they are substantiated by actual incurred data or are fixed/capped by contract. If an offeror or major subcontractor proposes capped costs or rates, the government will incorporate these caps into the resulting contract at award.

This recommended language also provides guidance for offerors that choose to propose caps on their proposed costs or rates. Although this approach is somewhat uncommon, some companies employ it to limit the government's ability to make upward adjustments to their proposed costs. Furthermore, the incorporation language permits the government to incorporate any proposed caps into the resulting contract, so that they are enforceable in performance.

Recommendation: Government source selection teams should expressly notify offerors in the solicitation that the agency will only make upward cost realism adjustments.

iii. Using contemporaneous “even-if” statements to limit the live protestable issues

While the other five cost realism recommendations above rely on specific solicitation language to carefully define and streamline the agency's cost realism analysis, this final strategy focuses on reducing protest risk in agencies' source selection decision documents and trade-off analyses. In the right situations, using counterfactual trade-offs (which we term "even if" analyses) in the source selection decision document can greatly reduce the complexity and risk presented by complex adjustments or cost risks. Although this approach may not be available in every trade-off decision, depending on the arrangement of competitors, it is a powerful tool for limiting the agency's protest risk in many situations.

Invariably, in conducting a cost realism evaluation, evaluators will have to make hard judgment calls about whether to make specific adjustments or identify specific cost risks. Sometimes, particularly complex or questionable evaluation findings involve issues that will have no impact on the award decision when considered against the competitive distance between the disappointed offeror and the awardee. Instead of waiting to make "no prejudice" arguments in litigation, which GAO will view as mere post hoc rationalization, agency source selection teams should consider including analysis in their source selection decision documents finding that the government would award to the same awardee irrespective of the problematic finding.

For example, consider the following situation: The government is selecting between two offerors, one rated Marginal and one rated Outstanding in the only non-cost factor. The Marginal offeror proposed at $100 million, and the government's cost realism analysis resulted in an evaluated cost of $105 million; for the Outstanding offeror, the analysis resulted in an evaluated cost of $115 million. In this case, the government has determined that it is willing to pay the $10 million premium between the two evaluated costs to capture the non-cost benefits presented by the Outstanding offeror over the Marginal offeror. If the government would also be willing to pay a $15 million premium for these same non-cost benefits, it should seriously consider contemporaneously including the following statement in its source selection decision document: "Moreover, even if the government had not made any cost adjustments and had not identified any cost realism risks in the Marginal offeror's proposal (i.e., it had accepted all costs as proposed), the Government would still pay the premium between Marginal's proposed cost and Outstanding's evaluated cost to award to Outstanding based on the strength of Outstanding's non-cost advantages." This contemporaneous documentation would powerfully limit Marginal's ability to successfully challenge any of the cost adjustments or identified cost risks. Furthermore, if the government can get these non-prejudicial grounds dismissed, it can focus on defending any remaining non-cost findings that actually impacted the award decision.
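The arithmetic behind this hypothetical can be laid out explicitly. The sketch below uses only the figures from the example above; the $15 million willingness-to-pay value is an assumption drawn from that example, and the variable names are illustrative.

```python
# Figures in $ millions, taken from the hypothetical two-offeror example.
marginal_proposed = 100     # Marginal offeror's cost as proposed
marginal_evaluated = 105    # after the government's cost realism adjustments
outstanding_evaluated = 115 # Outstanding offeror's evaluated cost

# Premium actually at issue under the evaluated costs.
evaluated_premium = outstanding_evaluated - marginal_evaluated

# Premium under the counterfactual: accept Marginal's costs as proposed,
# with no adjustments and no identified cost risks.
even_if_premium = outstanding_evaluated - marginal_proposed

# The "even if" statement is safe to document only when the agency is
# willing to pay the larger, counterfactual premium for the non-cost
# benefits (assumed to be $15M here, per the example).
willing_to_pay = 15
safe_to_document = even_if_premium <= willing_to_pay
```

The key discipline the sketch captures is that the counterfactual premium is always at least as large as the evaluated one, so the agency must confirm its willingness to pay the larger figure before documenting the "even if" statement.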

As this relatively simple example shows, "even if" statements can be a powerful tool for disposing of complex or confusing protest grounds. Where the competitive differences between the awardee and one or more of the other offerors are great, agency source selection teams should carefully consider which issues present the highest litigation risk and work to limit their impact through contemporaneous "even if" statements in the source selection decision document.

Recommendation: Government source selection teams should strategically assess whether contemporaneous "even if" counterfactual award decisions can moot complex litigation issues. If so, they should expressly include these counterfactual decisions in their source selection decision document.

III. Conclusion

Competitive contracting for LOE services presents agencies with numerous strategic decisions that require agencies to balance interests and make compromises. Business realities, technological change, and developments in case law impact this balancing act and require government source selection teams to exercise thoughtful and informed business judgment to make these tough calls. This article has explored a wide variety of those strategic decisions across three broad areas to identify how various competing interests can influence the agency’s approach.

In applying these recommendations to their own procurements, agencies should actively and ruthlessly remove unnecessary complexity from their procurements and insist upon clearly articulating whatever is left. Agencies that aim for simpler source selection approaches using this timeless philosophy will generally wait less time for proposals, will evaluate them faster, and will produce clearer, easier-to-defend award decisions.