
Law Practice Today

March 2024

Practical and Reasonable Use of AI in Small Law Firms

Alexander Paykin


  • Generative AI can be very useful in creating blog or social media posts, general business letters, and broad overviews or summaries.
  • Lawyers must be responsible for reading, reviewing, and verifying any AI-generated materials.
  • AI should be treated as a tool but not as a lawyer.

Looking back at the first few years of this decade, it's impossible to escape the realization that the legal industry has seen a major leap in terms of technology. As I was looking at my phone this morning, it showed me a photo memory from two years ago today. It consisted of three photos: (1) my spouse and I having a romantic dinner; (2) me in a suit, standing in the hallway of the bed and breakfast we were staying at, across from me a laptop, a camera with tripod, and a condenser microphone, and, in front of me, a judge and opposing counsel on the screen; and (3) me on a ski lift two hours later. As you may have already figured out, we went on a Valentine's Day getaway, and that morning I had a virtual court appearance. That is not something I could have imagined being allowed in the pre-pandemic world, and the idea of holding a remote conference was something that required extenuating circumstances and a lot of effort to organize.

Fast forward a couple of years and remote conferences are a fairly routine and almost expected accommodation—in some cases, becoming the rule rather than the exception. The reason for that is simple. Based on necessity and convenience, we all adopted the technology and don't really see a reason to ever go back. Now, the next stage of technological evolution is here, and again we will embrace it, some with curiosity and excitement, others with trepidation or even terror. In any case, it's a perfectly reasonable reaction, depending on your perspective, but much like emails, PDFs, cell phones, smartphones, electronic filings, and virtual meetings have all crept into your practice, with or without your permission, so will our next technological leap, artificial intelligence. In fact, much of it has already crept into your daily life, years ago in fact, but you just didn't notice.

You've Been Using It Already

To start, it's been years now that your smartphone keyboard has been using predictive algorithms to auto-complete words once you start typing. Then it advanced to predicting the next word, making typing on your smartphone almost a matter of selecting the next words, often with great results. Spelling and grammar checking and sentence structuring tools have been around for ages and have been getting smarter. Often, my Gmail will predict and offer to complete entire sentences for me, based on the substance of the email I'm replying to and without me even typing out the first word. All of that is AI. As you're surely aware, AI is quite fallible, and auto-correct errors abound in all of our texts and emails. However, we all love it and rely on it, because it's accurate 90% or more of the time, and we usually catch the mistakes. We certainly wouldn't allow auto-complete to take its proposed words and sentences and email them directly to a client or the courts without our review. Interestingly, that demonstrates a key point: We already know what we need to do to make sure we don't get into ethical problems when using generative AI and essentially already understand how it works.

What's Available to You Now

In the last year, more and more law firms have been looking at AI tools for building documents, doing research, and crafting compelling oratory. Among the most common and practical uses for generative AI is creating blog posts, website and social media content, general business letters, and documents that provide broad overviews or summaries.

For example, I regularly use AI to create clear and concise summaries of documents that either my office or the opposing side produces. After we prepare a 25-page memorandum of law, we usually want to give the client an update, but the client usually doesn't want to read the 25-page brief. Instead, I drop it into a generative AI product and ask it to summarize it in a way a lay person would understand. Seconds later, I have something I can paste into an email, edit for content and style, and send off to my client.

Generative AI allows my office to quickly create entire documents, request generated clauses of various forms and varieties, and is absolutely amazing in explaining things in a way a lay person can understand. Provided we give it all of the facts (and if we want it to make legal arguments, the specific case law and statutes we want it to draw on), it can often draft very well-organized statements of fact and even legal arguments that are often much more eloquent and easy to follow than those written by humans.

But Take Heed and Use Best Practices

What AI is not yet ready for, and probably won't be any time soon, is being left alone. When explaining the correct use of AI to my colleagues and clients, I often explain it this way: I am the licensed attorney. My name goes on everything I file, and I risk my license to practice law if I originate content that is not up to legal standards. In the end, if I rely on ChatGPT, Bard, Claude, or any other generative program to help produce my work product, I have to do the same things I have to do when Microsoft Word autocorrects my typing or when an unlicensed law clerk prepares a first draft of a brief for me. I am responsible for reading, reviewing, and verifying the accuracy of both the facts and the law cited in the papers. Only after I am fully satisfied with the quality of the work product do I allow it to take a step out of my office.

Briefs, Memos, and Hallucinated Case Law

From a practical perspective, it is not that AI is incapable of providing accurate citations or aid in the legal drafting process. It’s just that there’s an app for that. Generative AI products like ChatGPT, Bard, and Claude are not built for it. They are all built to generate content or, said another way, to philosophize eloquently. Other AI products are built for legal research, with companies like Casetext coming to mind. If you use their AI product to identify all relevant case law, export those relevant opinions, and plug them into your favorite generative AI, you can ask it to craft an eloquent argument using only the citations you have provided. If you personally came up with the legal strategy and provided the generative AI with the relevant facts in bullet points, coupled with your arguments and strategy and the caselaw from your research AI product, you will be quite surprised at the logical flow and elevated oratory that generative AI can provide.

However, looking at the news today, you need not look far for attorneys who have already abused generative AI. In New York state alone, we have now had multiple representative cases, all of which have generated bad headlines for us attorneys and for ChatGPT. While ChatGPT is still a new product with lots of bugs, which it will be the first to admit if you ask it directly, it is the lawyers involved who plainly deserve the criticism. Much as you can't blame the car when a careless driver crashes into a tree (nor can you blame the tree), you cannot blame generative AI when lawyers fail to do their jobs in reviewing what they are submitting to the courts and blindly sign briefs without even checking the citations. We would all balk at the idea of an attorney having a law student intern prepare motion papers and then signing and filing them without reading and reviewing the content, but that is quite literally the exact equivalent of what these attorneys have done. This was recently reaffirmed by the 2nd Circuit Court of Appeals, where the Court, in sanctioning and referring the offending attorney for disciplinary review, stated that while other jurisdictions have adopted specific rules as to the use of generative AI in filings, the 2nd Circuit regarded such rules as superfluous, reasoning that new rules are “not necessary to inform a licensed attorney who is a member of the bar of this Court, that [they] must ensure [their] submissions to the Court are accurate” (Park v. Kim).

Ironically, careless attorneys who can't be bothered to check the citations in their own briefs (or those of opposing counsel) can also turn to AI for help. There are multiple AI products out there that allow you to upload a brief and have it parse and identify all case law cited, point out additional relevant citations you may wish to cite or be aware of, tell you whether a cited case is real (an important query these days), and, depending on the product you are using, even tell you whether the brief cites the case correctly or mischaracterizes the citation.

So, in reality, all lawyers have to do is identify the relevant facts and legal arguments, ask the generative AI tool to organize them into a nice statement of fact and legal argument, then drop the output into a legal research AI tool and have it provide the citations. Then lawyers can proof and cross-reference everything, edit as needed, and sign their name once they are satisfied with the work product.

As With Everything Else, It's the User

Not skimping on the legal research and quality control can result in legal documents, blog posts, and other content being generated faster, with more eloquence, better organization, and more thorough case law and legal argument than a brief drafted by hand, often with additional case law you might not have found on your own, and often with arguments worded in a way that will impress you as the ultimate final author.

Alternatively, relying on AI so you can take on more clients than you have the bandwidth or skill for, and then being negligent in your duties of care and supervision, will almost certainly result in low-quality work, whether through misuse of AI or overreliance on unlicensed and unsupervised support staff, and it will eventually catch up with you. After all, getting disbarred may be the ultimate risk here. Even if that doesn't happen, becoming a laughingstock in the industry for citing fake cases, and then telling the court, when pressed on whether you did anything to verify the authenticity of the cites, that you asked ChatGPT if its citations were real and that it replied in the affirmative, could certainly harm your professional development. Couple that with the fact that low-quality work will result in harm to your client, malpractice suits, bad online reviews, and increases in your malpractice policy premiums, and the conclusion is simple: You are responsible for your work product.

Client Interaction

Newer and even more exciting (or terrifying, as the case may be) is client-facing AI, such as website chatbots. The promises are amazing, while the consequences can be sanctionable. The premise of one new startup I encountered is that its product can tie into your practice management system and answer substantive questions about your matters when clients call in. Various products exist, with the simplest responding to the most basic questions, such as office hours, directions, and practice areas, while more advanced systems tie into the firm's calendars and offer to schedule appointments. Within reason, these products are starting to become ready for prime time.

However, as always, some products rush ahead, and some companies end up in the news when their AI chatbots perform in ways other than expected. A very recent example is a headline straight out of today's news (today being, of course, the day I'm drafting this article and not the day you're reading it): “Air Canada must honor refund policy invented by airline's chatbot!” In short, Air Canada relied on an AI chatbot to provide customer service online. A customer queried the chatbot about the refund policy on a particular ticket type. The AI misstated the terms of the refund policy, and the customer relied on it. The airline later took the position that the customer should have known the AI was wrong, since the chatbot had also provided a link to the full terms of service; had the customer read those terms instead of relying on the chatbot, he would have known the truth. Of course, the court in that case did not take kindly to the argument. The article also notes that “Air Canada appears to have quietly killed its costly chatbot support.” (Ars Technica article)

While this incident is of course embarrassing to Air Canada, and no one wants their company to be a laughingstock, this scenario is much more terrifying for law firms in particular, as demonstrated by a new and highly improbable startup that delivers voice chatbots that can answer your phones and interact directly with clients, providing substantive case updates and answering substantive client questions based on the information in your practice management system. I could, of course, spend an entire article listing and discussing the various terrible consequences that are doubtless foreseeable here. A website chatbot could give bad legal advice, provide incorrect factual information, misinform the client in a way that prejudices the client's case, or even expose privileged and confidential information to the other side if they call in and interrogate the chatbot cleverly enough. As such, attorneys should be very careful about allowing AI chatbots into their practice management systems or letting them communicate with clients. Scheduling appointments is one thing, but explaining the rule against perpetuities is not something you want to outsource to a machine just yet.

Treat AI as a tool, but do not treat it as a lawyer. That job is yours and as long as you take it seriously, you can increase your efficiency and work quality with AI, thereby providing better value to your client and more free time for yourself.