Imagine the best brief-writer you know. Feel free to imagine yourself.
Now give your gut answer to these questions: Does that lawyer write shorter sentences than average? Use the passive voice less often? Include more analogies? Use fewer adverbs? Discuss more case law? Write fewer words? Use fresher language?
Perhaps the answers seem obvious. But what comes first, the perception of “great legal writing” or the answers to those questions?
You can tackle the challenge of defining “great legal writing” in many ways. I recently surveyed thousands of judges to get their take on briefs, for example, and I will keep soliciting and sharing similar insights.
But I wanted to try something different. Besides asking judges what they think distinguishes the good brief-writers from the bad, why not identify a group of exceptional brief-writers and then use artificial intelligence to figure out what they do differently from the rest of us? After all, judges might all agree that short sentences are hot while the word “clearly” is not, but wouldn’t it be great to see data backing that up? That is, unless that “short and to the point” feeling is really a proxy for something more meaningful but harder to pin down.
Here’s what I did: I created two universes of briefs and motions to help develop the five BriefCatch scores I’ve devised for legal documents.
The first set consisted of tens of thousands of pages of motions and briefs signed by dozens of top-rated lawyers. To remove my opinions from the equation, I relied mainly on Chambers and Partners’ rating of top litigators and appellate advocates. For diversity, I did add briefs from the Solicitor General’s Office across several administrations, briefs that had won Green Bag awards for “exemplary legal writing,” and briefs that judges had singled out as exceptional, either publicly in opinions or privately.
The second set: the same number of randomly selected motions and briefs of similar types.
It’s fair to question my selection method as arbitrary or elitist. If it were arbitrary, though, we wouldn’t have found so many significant differences in writing between the two sets of briefs. The same goes for the objection that “these bigwigs didn’t really write these briefs themselves.” I worry about elitism, too. But to believe that the selection method colored the results, you’d have to believe that equally good briefs from other lawyers are “good” in a vastly different way from the ones we did look at. And that the writing choices of the top performers in our study reflect their credentials more than their writing.