Chance Meyer
Adjunct Professor of Law
Nova Southeastern University
Shepard Broad College of Law
EdD Candidate, Vanderbilt University
Times of crisis force important decisions under circumstances that make good decisions unlikely. The COVID-19 pandemic has required law schools to make major changes in a rush, in distress, and in the dark as to what the future holds. The downstream consequences, for students and educators, will be significant. For years to come, law schools will need to continuously reassess and redesign programs and operations in the unstable conditions of the pandemic’s aftermath.
The customary approach law schools take when deciding what organizational changes to make and how to see those changes through will not lead reliably to success. To thrive in the challenging times ahead, law schools need a disciplined methodology for developing and implementing changes designed to have the most beneficial—or least harmful—impact on outcomes, based on the unique characteristics of individual law schools.
When a law school faces a new challenge, the customary approach is to form a committee or task force to gather information and brainstorm ideas. Ideas generally involve adopting the latest, most touted teaching methods, products, or resources. The favorite ideas of the people with the greatest influence win the day. Those ideas are implemented to find out whether and how they work for the law school.
This approach is common and long-standing. It is also deeply flawed, biased, and—from an improvement standpoint—backwards.
The following example of how a law school might respond to a timely and common challenge will help illuminate the problems with the customary approach, why improvement science is far more effective, and how the improvement science process works.
Imagine a law school task force is formed and charged with redesigning an ineffective distance learning program cobbled together during the pandemic. The chair invites ideas. Professor A recently read a law review article suggesting that student-student interaction promotes engagement in online learning. Professor A recommends a new virtual classroom platform that enables breakout groups. Professor B recently heard a student complain that professors do not call on students or invite student participation in online classes. Professor B recommends faculty training in online teaching methods. Professor C insists based on forty years of teaching experience that online teaching does not work. Professor C recommends scrapping the online program and arranging for social distancing in live classes of reduced size. Other members add to the blizzard of ideas.
How will the task force members know which idea is best? How will they determine which idea will have the greatest positive impact on outcomes, such as bar passage?
Quite simply, under the customary approach, they won’t. The ideas that receive serious consideration will be determined by the proclivities, intuitions, experiences, and politics of the group. Decisions reached by vote or consensus will boil down to who has the most influence over the most decision-makers in a contest of personalities and power dynamics.
One critical problem with the customary approach is that it results in law schools, as organizations, making important decisions based on biases. In Judgment in Managerial Decision Making, Max Bazerman and Don Moore explain how heuristics and biases guide organizational decision-making in the absence of scientific methodology.
Recency bias is common. For instance, Professor A tends to feel that the law review article she read recently is more important than other articles and ideas she encountered in the past. Educators accustomed to applauding innovation may be especially likely to regard old ideas as bad ideas, whether they are or not.
Insensitivity to sample size is also common. For instance, Professor B tends to feel that the student complaint she heard is representative of what the entire student body thinks. Educators accustomed to sweating student evaluations may be especially likely to overvalue and overreact to each student complaint.
The bias of overconfidence is prevalent among those with extensive experience and high intelligence. In Why Smart People Do Dumb Things, Mortimer Feinberg and John Tarrant call it “self-destructive intelligence syndrome.” In Reframing Organizations, Lee Bolman and Terrence Deal explain why smart leaders can sometimes be “too smart for their own good.” Essentially, knowing a lot tends to make people feel like they know everything, so they discount other perspectives and opinions. For instance, Professor C tends to feel he knows what’s best based on knowledge and experience, even when someone else may know better.
Law schools that allow biases to dictate the organizational changes they make wind up jumping constantly from one idea to the next in an endless frenzy of new initiatives, burning through resources and people, without consistently or measurably improving outcomes.
In Learning to Improve: How America's Schools Can Get Better at Getting Better, experts at the Carnegie Foundation for the Advancement of Teaching explain why the customary approach fails:
"[C]hange too often fails to bring improvement—even when smart people are working on the right problems and drawing on cutting-edge ideas. Believing in the power of some new reform proposal and propelled by a sense of urgency, educational leaders often plunge headlong into large-scale implementation. Invariably, outcomes fall far short of expectations. Enthusiasm wanes, and the field moves on to the next idea without ever really understanding why the last one failed."
Under pressure to act quickly, legal educators rush to select and implement promising ideas for change. Even when they choose the right ideas, reckless implementation produces disappointing results whose causes cannot be traced. Educators then misattribute the poor results to their ideas rather than to the vagaries of slapdash implementation. So they keep looking for the next good idea, without giving good ideas a chance to work.
Legal educators have the change process backwards. Like Professors A, B, and C, they begin the process by presupposing they know the best solutions and end the process by discovering whether their solutions work when shoehorned into the pre-existing organizational system of a law school.
Improvement science puts the process in the right order. Improvers marshal the collective and diverse knowledge of organizational members to identify and test ideas for change that will have the greatest impact on outcomes once implemented at scale. Through this process, the system tells the educators what ideas it needs, not the other way around.
That role reversal is critical, because, contrary to popular belief, law school systems and the problems they encounter are inevitably too complex for the human mind to fully conceptualize.
Each law school is a complex organizational system. When a change is made in the system—even if the change is based on a great idea that works at other schools—the system churns in unpredictable ways. Interrelated processes reorient. Resources redistribute. Influences realign. Sensemaking kicks in among organizational members. New narratives and meanings arise. Attitudes shift. Technical and normative conditions reform and reintegrate. Through churn, the impacts of a change are mediated, modulated, diluted, mitigated, eliminated, even reversed, in ways no one is capable of foreseeing.
Because of the complex nature of system change, having good ideas is not enough to improve organizational outcomes. Improvement science offers a disciplined way to get on top of the complexity.
The improvement process is daunting, but still less so than the never-ending rollercoaster of new initiatives that results from the guesswork of the customary approach. Most importantly, improvement science works. There are numerous studies and examples of successful improvement science initiatives in education and health care.
Many faculty and staff rushing to build capacity in distance learning during the pandemic have remarked that there is no playbook for this situation. But, in a sense, there is. In fact, there is an entire scientific discipline, packaged into a practical, step-by-step process, recommended by experts for schools struggling with complex problems in rapidly changing environments. Legal educators across the country sprang into action with admirable commitment when suddenly faced with the need to move online. But, in the months and years to come, a more disciplined and prudent approach is available.
Improvement science is how organizations get serious about getting better. Returning to our example, if the distance learning task force were to embrace systems thinking and adopt an improvement methodology described in resources like The Improvement Guide, its process would look very different. In broad strokes, over a period of weeks or months, it would involve the following.
The task force would begin by painstakingly defining the problem it was facing, without assuming that the problem is fully understood or leaping forward to proposing solutions. Causes, or drivers, of the problem would be identified and arrayed in a diagram. The diagram would align the understandings of the participants and moor their discussion to a common progression, rather than allowing it to pinball randomly among discordant ideas. Tools like fishbone diagrams would help standardize the process.
Perhaps the task force would determine that one driver of the online program’s poor results is the undermining of learning experiences by technology glitches. Perhaps another driver would be a lack of student engagement. Behind those drivers would be deeper causes to explore and diagram.
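Those causes can be made concrete in a fishbone diagram. Here is a minimal sketch in Python, using hypothetical driver names drawn from this example, of how the diagram's content might be captured as a simple structure and rendered so every participant works from the same view.

```python
# A minimal sketch of a fishbone (cause-and-effect) diagram captured as a
# nested dictionary. The problem sits at the "head"; each key is a major
# cause category ("bone"), and each list holds contributing causes.
# All driver names here are hypothetical, drawn from the running example.
fishbone = {
    "problem": "Poor outcomes in the distance learning program",
    "causes": {
        "Technology": ["platform glitches", "unreliable student internet access"],
        "Teaching methods": ["little student-student interaction", "few cold calls"],
        "Student engagement": ["frustration from glitches", "isolation"],
        "Training": ["no faculty training in online pedagogy"],
    },
}

def print_fishbone(diagram: dict) -> None:
    """Render the diagram as indented text so the task force shares one view."""
    print(f"PROBLEM: {diagram['problem']}")
    for bone, causes in diagram["causes"].items():
        print(f"  {bone}")
        for cause in causes:
            print(f"    - {cause}")

print_fishbone(fishbone)
```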
The task force would keep the process user-centered, meaning it would focus on knowledge and insights from the daily, ground-level workers who experience, live with, and interact with the organizational features being scrutinized. Users might include online teachers, academic support (ASP) professionals, IT professionals, and student affairs professionals.
Opinions from top administrators and senior faculty members would not get higher billing in the diagram. Rather, an idea's weight would be determined by the group's collective judgment of its strength.
Tools like data analysis could then be used to verify the information in the diagram.
Once the task force was satisfied that it had visualized the entire problem, it would use other tools, like an interrelationship digraph, to prioritize the drivers. By identifying the most impactful drivers, the task force would focus its limited resources where they could have the greatest effect.
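The logic of an interrelationship digraph can also be sketched in a few lines. In the hypothetical sketch below, each directed edge asserts that one driver influences another; drivers with many outgoing arrows and few incoming ones surface as upstream levers.

```python
# A minimal sketch of an interrelationship digraph used to prioritize drivers.
# Each edge (a, b) asserts "driver a influences driver b". Drivers with the
# most outgoing arrows are candidate root causes; those with the most incoming
# arrows are effects. The edges below are hypothetical, for illustration only.
from collections import Counter

edges = [
    ("technology glitches", "student frustration"),
    ("technology glitches", "lost class time"),
    ("student frustration", "student disengagement"),
    ("lost class time", "student disengagement"),
    ("student disengagement", "poor assessment scores"),
]

out_degree = Counter(a for a, _ in edges)   # how many drivers each node affects
in_degree = Counter(b for _, b in edges)    # how many drivers affect each node

drivers = {n for edge in edges for n in edge}
for d in sorted(drivers, key=lambda n: out_degree[n] - in_degree[n], reverse=True):
    print(f"{d}: out={out_degree[d]}, in={in_degree[d]}")
```

In this toy example, the technology-glitches driver has arrows going out and none coming in, flagging it as an upstream lever.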
Perhaps the task force would find that the technical glitches were causing student disengagement by creating frustration. Addressing the technical-glitches driver, then, would do more for outcomes than confining the task force's efforts to the student-disengagement driver, which lies further downstream.
A Pareto chart could help the group visualize which drivers were having the greatest impacts. If the Pareto principle held true, as it often does, eighty percent of the negative variance in system outcomes would result from twenty percent of the causes. In other words, encouragingly, the task force would find that big impacts were achievable through fewer changes than expected.
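The arithmetic behind a Pareto chart is simple enough to sketch. Using hypothetical incident counts, the snippet below sorts the drivers, accumulates percentages, and flags the "vital few" that together account for roughly eighty percent of the total.

```python
# A minimal sketch of the arithmetic a Pareto chart visualizes. Counts of
# negative incidents attributed to each driver (hypothetical numbers) are
# sorted descending; the "vital few" are the drivers that together account
# for roughly eighty percent of the total.
incident_counts = {
    "technology glitches": 52,
    "no student interaction": 28,
    "unclear expectations": 8,
    "audio/video quality": 6,
    "scheduling conflicts": 4,
    "other": 2,
}

total = sum(incident_counts.values())
cumulative = 0
for driver, count in sorted(incident_counts.items(), key=lambda kv: kv[1], reverse=True):
    cumulative += count
    pct = 100 * cumulative / total
    marker = " <- vital few" if pct <= 80 else ""
    print(f"{driver:28s} {count:3d}  cumulative {pct:5.1f}%{marker}")
```

In these made-up numbers, two of six drivers account for eighty percent of the incidents, which is exactly the kind of leverage the Pareto principle predicts.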
Once the group was on the same page about which drivers it would target—which levers were most useful to pull—it would already be leaps and bounds beyond the progress achieved by many committees and task forces.
Only then would the task force turn to suggesting solutions. Here, an aim statement would anchor the task force’s discussion to a common, memorialized objective.
A good aim statement includes what improvers want to accomplish, for whom, by how much, and by when, so there is no confusion later about what constitutes success. The task force’s aim statement could be to increase scores on a certain assessment for online students in a certain course by a certain amount by a certain date. In crafting an aim statement, the task force would hold itself to practical, realistic, measurable goals.
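One way to keep an aim statement honest is to record its four elements in a structure that cannot silently omit any of them. A minimal sketch, with hypothetical values:

```python
# A minimal sketch of an aim statement captured in structured form, so the
# four required elements (what, for whom, by how much, by when) cannot be
# silently omitted. All field values below are hypothetical.
from dataclasses import dataclass

@dataclass(frozen=True)
class AimStatement:
    what: str         # what the improvers want to accomplish
    for_whom: str     # the population the improvement targets
    by_how_much: str  # a measurable amount of change
    by_when: str      # a deadline that defines success

aim = AimStatement(
    what="raise scores on the midterm practice assessment in Civil Procedure",
    for_whom="students enrolled in the online section",
    by_how_much="a ten percent increase in the median score",
    by_when="the end of the spring semester",
)
print(aim)
```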
With a clear aim and list of high-impact drivers, the task force would then undergo a process of developing change ideas. Eventually, those ideas would be connected to key drivers in turn connected to the aim statement in a master diagram, representing the task force’s unified theory of improvement.
Next, the task force would turn to developing a system of measurement to ensure that, once the change ideas were put into practice, results would be discernible and captured in qualitative and quantitative data. Perhaps one measurement would involve collecting student scores on assessments in courses outside the improvement project, to help rule out the possibility that a student's across-the-board bad semester, caused by personal circumstances, would register within the project as a negative reaction to the intervention.
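As a minimal sketch of that balancing measure, with hypothetical scores: comparing each student's project-course score to that student's baseline in outside courses separates an across-the-board bad semester from a genuine reaction to the intervention.

```python
# A minimal sketch of one balancing measure: compare each student's score in
# the project course against that student's average in courses outside the
# project. A student struggling across the board shows a small gap; a genuine
# negative reaction to the intervention shows up as an unusually large one.
# All data below are hypothetical.
scores_in_project = {"student_1": 62, "student_2": 88, "student_3": 71}
scores_outside = {
    "student_1": [60, 65, 58],   # low everywhere: likely a hard semester
    "student_2": [85, 90, 87],
    "student_3": [86, 84, 88],   # fine elsewhere: the gap warrants a look
}

for student, project_score in scores_in_project.items():
    outside = scores_outside[student]
    baseline = sum(outside) / len(outside)
    gap = project_score - baseline
    print(f"{student}: project {project_score}, baseline {baseline:.1f}, gap {gap:+.1f}")
```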
With measurements in place, the task force would design plan-do-study-act (PDSA) experiments to simulate and test the change ideas in rapid iterations, accelerating the task force's learning. Perhaps the task force would arrange small online workshops that deliver fast data useful in predicting results on the outcome targeted in the aim statement. With a little creativity, the task force would avoid waiting for a semester to play out before being able to gather evidence.
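The rhythm of a PDSA cycle can likewise be sketched in code. In the hypothetical sketch below, run_small_test stands in for a real small-scale test, such as a short online workshop that yields quick assessment data; each cycle compares a prediction to what was observed and decides whether to adopt, adapt, or abandon the change.

```python
# A minimal sketch of rapid PDSA (plan-do-study-act) cycles. Each cycle
# records a prediction, runs a small test, studies the gap between prediction
# and observation, and decides how to act. The run_small_test function is a
# hypothetical stand-in for a real small-scale test.
def run_small_test(cycle: int) -> float:
    """Hypothetical stand-in: return the observed median workshop score."""
    return 70.0 + 2.5 * cycle  # placeholder data for illustration

log = []
prediction = 72.0
for cycle in range(1, 4):
    # PLAN: state the change and the predicted result (prediction above).
    # DO: run the change on a small scale and observe.
    observed = run_small_test(cycle)
    # STUDY: compare what happened to what was predicted.
    surprise = observed - prediction
    # ACT: adopt, adapt, or abandon, and set the next prediction.
    decision = "adopt/scale up" if surprise >= 0 else "adapt and retest"
    log.append((cycle, prediction, observed, decision))
    prediction = observed  # refine the prediction for the next cycle

for cycle, pred, obs, decision in log:
    print(f"cycle {cycle}: predicted {pred:.1f}, observed {obs:.1f} -> {decision}")
```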
Based on the results of iterative PDSA cycles, changes would be gradually scaled up, introducing new variation, student cohorts, and contexts, until full-scale implementation was reached.
Undeniably, the improvement process is much harder than the guesswork and brainstorming of the customary approach, but it is attainable for legal educators and practical by design. Long before COVID-19, the Carnegie researchers concluded that educators need to adopt improvement science. Law schools should heed the Carnegie Foundation's expert advice.
Any legal educator can make the paradigm shift to becoming an improver. It is never too late to start. Resources like The Improvement Guide are available. Consultants are available. Efforts to bring organizational theory into law schools have already begun, such as in Patrick Gaughan’s Facilitating Meaningful Change Within U.S. Law Schools.
Once educators enter the world of systems thinking, old ways of pursuing change seem almost absurd. They can hardly imagine opening a task force meeting the customary way: Welcome. We are facing problems of unfathomable complexity in a vast organizational system with networks of interrelated processes none of us can begin to fully conceive. So, who has a good idea to fix everything?
Science exists to solve problems that exceed the capacity of any single human mind. Just as doctors will use the scientific method to collectively learn how to beat COVID-19, educators can collectively learn how to improve student outcomes during a pandemic and in the economic fallout.
Many predict COVID-19 will usher in an era of distance learning. It is equally likely that emergency remote teaching will lead to poor results, and the takeaway for many professors will be that distance learning does not work. Rather than continuing to blame teaching methods for the consequences of rushed implementation, we should come out of these awful events with a new commitment to organizational learning through science-based initiatives.
Legal education could benefit tremendously from the founding of an institute for improvement science in legal education. The institute would serve as a resource for training and technical talent to assist law schools in improvement initiatives, and could act as the hub of a networked improvement community, so law schools could learn from each other.
The organizational theorist W. Edwards Deming once said that educators have “miracle goals without methods.” Legal educators can no longer afford to naively hope a hodgepodge of good ideas is enough to create good outcomes. If we are willing to learn scientific methods of productive organizational change, we can make our method the miracle.