Companies are using adhesive clauses to impose mandatory binding arbitration on employees and consumers, and they are using their superior bargaining power to exercise control over the dispute system design for arbitration programs.[i] These programs can displace the civil justice system by precluding litigation in state or federal courts or through ad hoc court or agency dispute resolution. The field of dispute resolution makes claims that invariably include an implicit or explicit comparison: that dispute resolution is faster and cheaper; better at producing high-quality outcomes; better at improving disputants’ capacity to handle conflict in the future; and better at transforming conflict into something productive and creative.
To prove – or disprove – these claims, we need comprehensive, longitudinal data about our civil justice system and differing dispute system designs, including arbitration.[ii] Getting that data, however, requires two things: transparency and systematic, consistent information-gathering. Right now, we have neither. This article will review what we need and how it might inform policy on mandatory arbitration.
At the heart of the controversy over mandatory arbitration is the inescapable fact that one party unilaterally adopts a dispute system design that may work to the other party’s disadvantage. Before mandatory arbitration, consumers with small claims could band together to pursue them through a class action; mandatory arbitration may bar consumer access to class actions. With mandatory arbitration, employees cannot get to a jury to prove grounds for certain damages. Arbitration programs nevertheless vary widely in design, and the designer can manipulate many design elements. The elements of dispute system design can include, but are not limited to, structural features, context and administration (see Chart A).[iii]
The key question is one party’s control over dispute system design in ways that affect both process and outcome in a case. For example, the designer may pick the location for an arbitration hearing and impose travel costs on the other side, or limit discovery or class relief. We do not adequately understand how different dispute system designs function. Do dispute resolution systems designed by one party function as fairly and effectively for all participants as systems designed by third parties such as the courts and government? How does the power of the marketplace shape dispute resolution systems? We need the baseline: consistently collected observations about dispute resolution and the systems with which it is compared.
Under the Alternative Dispute Resolution Act of 1998, all federal district courts had to adopt a dispute resolution program, but there was no template; a thousand flowers bloomed. How do these designs compare? Without consistent data, there is no way to know. In 2003, the Research and Statistics Task Force of the American Bar Association Section of Dispute Resolution embarked on a project to help courts use their information technology systems to collect data for ADR program evaluation. Through an email survey to court administrators, the Section sought to create a list of data fields that all courts could collect. If state and federal courts collected the same data comprehensively and longitudinally, the Section’s Task Force members reasoned, researchers might be able to quantify more effectively the impact of different dispute system designs on the justice system. The survey asked respondents to rank the importance of 56 data collection fields on a scale from one (not at all important) to seven (very important). An initial version of the survey also asked respondents to indicate whether their court currently collected any of this data.
The survey responses identified 27 data collection fields as “important.” These fall into six general categories:
1) Dispute processing information (the timing of the ADR intervention)
2) Transaction costs
4) ADR use
5) Long-term implementation of settlement
6) Satisfaction of the participants with the process, outcome and neutral
However, more than 75 percent of the respondents indicated their court did not collect the information. The categories of information rated less important by respondents included demographics of the disputants and neutral, content of the dispute, and the identity of the attorney representing the litigants.
After analyzing survey results, the Task Force developed and the Dispute Resolution Section Council adopted a list, the Top Ten Pieces of Information Courts Should Collect on ADR (published on the ABA Section of Dispute Resolution Resources Page - http://ambar.org/disputeresources) and summarized in Chart B. At least in the public civil justice system, most of these fields do not represent information that is confidential. The final field of participant perceptions could be maintained as confidential and reported in the aggregate, as the Federal Judicial Center has done with its studies of ADR in courts.[iv]
The problem with evaluating the fairness of binding arbitration is compounded: not only do we lack consistent, longitudinal data collection on various dispute system designs, but much of the information itself is confidential and unavailable to researchers except through the voluntary cooperation of third-party dispute resolution providers or the companies that mandate these programs. There is no consistent transparency even for program descriptions. Unlike ERISA’s requirement for plan booklets, there is no general mandate that companies publish the detailed dispute system designs of arbitration programs for employees and consumers. Moreover, we lack comprehensive case management data on arbitration. While some states, notably California, have adopted mandates that third-party providers publish summary case results, that data is incomplete and inconsistent.[v]
We need a law to give us some degree of transparency on arbitration, at least as to program and system designs and basic case management data. What could we do with that information? We could determine how widespread mandatory arbitration is. We could measure what impact its adoption has had on use of the civil justice system. We could compare differing arbitration system designs and see whether some programs produce a more level playing field than others. We could determine whether programs are fair and produce justice, in some form, for the disputants. This information would allow us to analyze current legal structures and program designs and discuss appropriate changes to the policy on arbitration.
CHART A: ELEMENTS OF DISPUTE SYSTEM DESIGN
Structural Features of the Process
• Who initiates, how and when
• Who chooses neutral
• Information exchange/discovery
• Formality, character of proceeding
• Availability of aggregate claims
• Decision standard
• Available remedies and form of award
• Access to counsel
• Opportunity for access to other processes, appeal
Context
• Type of dispute
• Sector or setting (public, private, nonprofit, hybrid)
• Characteristics of the participants eligible or required to use the system
Administration of System
• Stakeholder participation in design and redesign
• Selection and characteristics of pool of neutrals
• Payment of neutrals, administrative costs
• Mechanism for assessment, evaluation
CHART B: TOP TEN PIECES OF INFORMATION COURTS SHOULD COLLECT ON ADR
1) Was ADR used for this case (yes/no)? This indicator tracks whether ADR is used in civil litigation and provides a baseline for determining what percentage of civil litigation uses an ADR intervention. It is the fundamental minimum information necessary.
2) What ADR process was used in this case (mediation, early neutral assessment, non-binding arbitration, fact-finding, mini-trial, summary jury trial, other)? There is a great diversity of court ADR programs. The parties themselves elect from a variety of processes. This information permits examination of differences across courts in the type of ADR used and the frequency of use. Within courts, it allows for a comparison of the results of different processes and an examination of the kinds of cases for which parties use different processes.
3) Timing information (the date the claim was docketed; the date of referral to ADR; the date of the first ADR session; the date the ADR referral period closed; the point in the life of the case at which ADR occurred (before suit, after filing suit, before discovery, just before trial); the final disposition date of the case; and the date of post-trial motions). ADR is used at different points in the life of a case. This information will help determine when in a case ADR is most effective and how early or late a case might be referred to ADR.
4) Whether the case settled because of ADR. If settled, whether the case settled in full or settled in part. Advocates claim that ADR settles cases or at least narrows the issues in dispute. This question helps examine that claim.
5) What precipitated the use of ADR (court order sua sponte, party consent to the process, party motion with one or more parties opposed and a court order for ADR following, automatic referral per court rule due to kind of case)? Court programs vary widely in how cases enter ADR. This question allows for a comparison of different methods for intake and an exploration of whether voluntary or mandatory programs are more effective.
6) Was there a settlement without ADR (yes/no)? If so, how was the case terminated (e.g., dispositive motion, settlement in ADR, settlement by some other process, during or after trial, removal to another court)? Some cases referred to ADR settle before the process begins, or after the process but because of factors other than ADR. Many argue that 90 percent of all cases settle anyway, so it is hard to identify whether ADR is making a difference. This information permits comparison of the outcomes for ADR and non-ADR cases.
7) Case type (general civil, criminal, domestic, housing, traffic, small claims). This information will permit examination of a number of claims and questions about ADR: For which cases is ADR most effective? Do ADR use and effectiveness vary by the subject matter in dispute? Do more small claims cases settle in ADR than housing cases, for example? If the court has limited ADR funds, what kinds of cases should get priority for ADR?
8) The cost of the ADR process to the participants. Critics suggest that the ADR process simply adds transaction costs to litigation; advocates say ADR saves money. This question allows us to compare ADR costs to other studies on litigation costs.
9) Did the disputants use more than one form of ADR? If so, which? To know which form of ADR is most effective for which cases, we need to be able to separate cases by process and identify those with more complex sequences.
10) Satisfaction data: How satisfied are the participants with the process, the outcome and the neutral? A key value in the justice system is that people who use it believe it to be fair and to provide justice. These questions are ways to determine how people who use ADR feel about their experience.
Lisa Blomgren Bingham is the Keller-Runden Professor of Public Service at the Indiana University School of Public and Environmental Affairs. She is the former chair of the ABA DR Section Research and Statistics Task Force. She can be reached at email@example.com or http://www.indiana.edu/~spea/faculty/bingham-lisablomgren.shtml.
[i] Lisa Blomgren Bingham, Control over Dispute System Design and Mandatory Commercial Arbitration, 67 Law & Contemp. Probs. 221-251 (2004).
[ii] Lisa Blomgren Bingham, The Next Step: Research on How Dispute System Design Affects Function, 18 Negot. J. 375-379 (2002).
[iii] Lisa Blomgren Bingham, Designing Justice: Legal Institutions and Other Systems for Managing Conflict, 24 Ohio St. J. on Disp. Resol. 1, 12-19 (2008-09); T. Stipanowich, N. Welsh, L. B. Bingham & L. R. Mills, National Roundtable on Consumer and Employment Dispute Resolution: Consumer Arbitration Roundtable Summary Report (Working Paper 2012).
[iv] See publications website of the Federal Judicial Center, http://www.fjc.gov.
[v] Lisa Blomgren Bingham, Jean Sternlight & John Healey, Arbitration Data Disclosure in California: What We Have and What We Need (paper presented at the ABA Section of Dispute Resolution Conference, Apr. 15, 2005).