ChatGPT and a Lawyer's Duty to Provide Competent Representation
As it turns out, AI has come for lawyers, but not en masse just yet. Two weeks before this magazine’s print deadline, two attorneys in New York made headlines for failing to understand how ChatGPT implicates Model Rule of Professional Conduct 1.1, a lawyer’s duty to provide competent representation. In Mata v. Avianca, Inc. (available at CourtListener.com), plaintiff’s counsel filed a typical response to the defense’s motion to dismiss.
The cited case law seemed favorable to the plaintiff, but there was one hiccup: none of the citations were real. Plaintiff’s counsel had used ChatGPT to draft the brief, and, as widely reported, ChatGPT made up the citations. Defense counsel was understandably confused, and the court ordered plaintiff’s counsel to file the cases cited in full. Plaintiff’s counsel then appears to have returned to ChatGPT for the cases. And if you read them—and I highly encourage you to—you will find that they . . . are further fictions with obvious errors.
At the print deadline, attorneys had entered appearances in defense of plaintiff’s counsel, creating net employment for the profession. Until further notice, lawyers must check everything an AI chatbot drafts for them. (A federal district court judge in the Northern District of Texas requires lawyers to certify that a human checked any AI-drafted filings for accuracy.)
Layoffs in Big Tech
Big tech is keeping in-house and outside counsel employed, too.
Unsurprisingly, the tech industry has contracted as the world has reopened after the pandemic. Meta, Alphabet, Amazon, and others have announced tens of thousands of layoffs. But while moving fast and breaking employment contracts may be easy in the United States, that is not generally true in other jurisdictions. According to the National Conference of State Legislatures, “most countries throughout the world allow employers to dismiss employees only for cause.” In the United States, by contrast, employment relationships in every state but Montana are presumed to be at-will.
What does this look like in practice? Enter Elon Musk, who purchased Twitter late last year. A week into his ownership of the company, Musk eliminated about half of Twitter’s workforce, cutting positions around the world via email.
According to Business Insider, workers in New York were offered three months of severance, while employees in California received two. In Europe, staff received notice that their roles were “identified as potentially impacted or at risk of redundancy.” Twitter’s Ghana offices, which had only opened four days before the announced cuts, were reduced to one employee. The laid-off staff threatened to sue, accusing the company of failing to comply with Ghana’s labor laws.
More Legal Jobs
The above instances merely scratch the surface of technology-instigated legal work. Platforms have been disruptors, sure, but their algorithms and content implicate a wide variety of worker and user rights. In February, a Kenyan judge ruled that Meta, Facebook’s parent company, must remain a party to a lawsuit accusing it and its local partner Sama of union-busting and worker exploitation. Meta faces another lawsuit in Kenya over content that allegedly incited violence resulting in the death of a chemistry professor during Ethiopia’s recent civil war.
The tech industry is even asking for more legal jobs! OpenAI’s chief executive, testifying before a Senate subcommittee in May, asked Congress to regulate AI. As long as tech holds to Mark Zuckerberg’s famous motto to “move fast and break things,” lawyers will have plenty of work trying to put them back together.