
GPSolo Magazine

GPSolo March/April 2025 (42:2): AI for Lawyers

Wielding the AI Advantage: Best Practices for Lawyers

Corinne Taylor-Davis

Summary

  • With a clear understanding of the strengths and weaknesses of artificial intelligence (AI), solos can achieve skyrocketing productivity, particularly in the areas of research, organization, review, communication, and scheduling.
  • Developing a robust set of AI best practices can allow attorneys to confidently integrate AI into our law practices while helping protect us from unintended consequences.
  • The only way to ensure AI works for us and not against us is to be aware of pitfalls and constantly monitor it for new issues.
  • By implementing smart security protocols, ensuring humans undertake the writing and review process, and keeping privacy central to our practices, solos can utilize AI with confidence and efficiency.
Jeffrey Coolidge/Stone via Getty Images

It’s 2025, and artificial intelligence (AI) platforms such as ChatGPT, Claude, and Alexa are everywhere. This new technology may feel exciting, terrifying, or a little of both—but with a clear understanding of its strengths and weaknesses, solos can wield this tool with confidence to achieve skyrocketing productivity. In this article, we’ll break down how firms can benefit from AI, the risks of using it improperly, and the best practices to ensure AI becomes an asset, not a liability. From selecting the right tools to learning how to keep our clients safe, happy, and better represented, solos will learn to feel comfortable in this new era of high-tech practice.

What AI Can Do for Law Firms

AI is a remarkably powerful technology that, when used responsibly, can boost a solo practitioner’s efficiency and productivity. As professionals who have limited time to do seemingly unlimited work, solos can uniquely benefit from these tools. Key areas in which AI is proving valuable include research, organization, review, communication, and scheduling.

Research

Lightning-fast research delivered in plain language is likely the most well-known use of generative AI (GenAI). While most of us have experience using legal databases for case research, swimming in these wonderfully deep oceans of information can sometimes feel like drowning. Generalized AI such as ChatGPT can now be used to gain a basic understanding of a topic, while legal-specific AI such as Westlaw Precision and Lexis+ AI can offer more nuanced and accurate legal information, with proper citations to boot.

Organization and Review

Perhaps the most useful function AI can offer a skilled professional is thought organization. As members of this learned profession, we know what we’re talking about—but sometimes it’s a challenge to organize it, especially for the lay reader. AI can be used to create outlines, identify counterpoints, draft summaries, and critically analyze emails, contracts, and case law. These platforms can also review the clarity and correctness of an attorney’s writing.

Communication and Scheduling

Raise your hand if you don’t get enough uninterrupted time to draft. Me, too! AI tools such as Motion can find, schedule, and prioritize blocks of time for big projects, while AI like Clara can email clients to schedule appointments. Platforms including Smith.ai function as an electronic answering service. Finally, chatbots can answer frequently asked questions on websites while gathering contact information for weekly e-blasts (which AI can write, too).

AI’s Limitations and Risks

Many attorneys remain hesitant to bring AI into their practices, whether due to security concerns, mistrust of analysis, ethical considerations, or technological insecurity. Their concerns are not without merit; AI is far from foolproof. Consequently, it is of utmost importance that AI not be used without oversight.

Hallucinated Legal Citations

One of the more chilling pitfalls of unchecked AI usage is the phenomenon of “hallucinations.” Hallucinations occur when AI fabricates information, such as cases, quotations, or citations, and presents it as fact. An unfortunate example occurred in the New York case Mata v. Avianca, Inc., 1:22-cv-01461 (S.D.N.Y. 2023), where the plaintiff’s attorney used ChatGPT to draft his brief. ChatGPT cited six cases in support of the plaintiff’s position. Unfortunately, those cases were entirely fabricated. This did not go over well with the judge, who later sanctioned the offending lawyer.

It gets worse: AI wants to be right and will double down until proven wrong. In Mata, plaintiff’s counsel asked ChatGPT if the cases were real, to which ChatGPT replied they were real—and verifiable on Westlaw.

Interestingly, hallucinations can distort not only hard facts but also the social norms an attorney may be researching. Recently, I was using AI to research an industry with which I was unfamiliar. When I pushed ChatGPT for a source regarding a claimed industry norm, ChatGPT deflected for 20 minutes before finally admitting that a numbered citation was not a verifiable source but merely a “placeholder.”

Bias

Another pitfall to watch out for is amplified bias. It is important to recognize that AI does not have human morals. AI simply gathers published information and condenses it. To do this, it groups and prioritizes things it sees regularly, assuming that “most humans” must agree. But unfortunately, as we lawyers know, history and the majority are not always right or fair, especially when it comes to biases against minorities. Unfortunate examples of AI bias that have reinforced discrimination include the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) algorithm (which a ProPublica analysis found falsely flagged Black defendants as likely to reoffend at nearly twice the rate of white defendants) and Amazon’s AI recruitment platform, developed beginning in 2014 (which systematically favored men’s résumés and downgraded women’s).

Privacy and Ethical Compliance

Another significant concern for AI in the legal workplace is privacy. It is imperative that attorneys know how AI uses the information we feed it, as some AI saves, learns from, and publicly iterates this previously private information.

An attorney who submits a contract, fact pattern, or client information to a public consumer platform such as the free version of ChatGPT may be breaching attorney-client privilege. Uploading a legal argument may share our mental impressions, legal theories, and case strategy, risking waiver of work product doctrine protections. It is also notable that prompts, times, dates, the extent of research, and the accounts under which queries are made may be saved and reproduced, which has implications for liability and professional malpractice. So, understanding exactly what we’re dealing with is crucial.

ChatGPT’s privacy policy specifically admits that for standard users, it collects and saves data (including user prompts, files, images, and audio) for the purpose of analyzing, aggregating, and developing new services for others. ChatGPT Enterprise, its business subscription, takes steps to protect user input, does not learn or iterate from it, and deletes input after 30 days or by user policy.

Law-specific AI systems attempt to further protect client data. Lexis+ AI immediately anonymizes interactions and walls off search sessions so prompts cannot be used to inform other users. Information can be immediately destroyed by the user and is only saved for 30 days by default. Westlaw Precision defaults to three months of data retention, allows the user to immediately destroy search history data, and does not iterate from prompts.

Best Practices

Developing a robust set of AI best practices can allow attorneys to confidently integrate AI into our law practices while helping protect us from unintended consequences.

Keep AI in Its Place

The most important protection against AI mishaps is human review. Understanding that AI is a tool to assist and not replace legal analysis is our first line of defense. Remember, our ethical standards require that we must act with competence and diligence, exercise independent judgment, and only make meritorious claims (see American Bar Association Model Rules of Professional Conduct 1.1 (Competence), 1.3 (Diligence), 2.1 (Advisor), and 3.1 (Meritorious Claims and Contentions)). Don’t find yourself in a hallucination situation. The advent of this technology is seductive in its ease, but it cannot replace skilled thought, human intuition, or creative problem-solving.

Lawyers should develop policies that welcome AI’s strengths without falling prey to its weaknesses. Smart policies could include:

  • Privacy must be protected, first and always. Before allowing our teams to use AI, attorneys must ensure staff understands the risks and fully complies with our privacy policies. All prompts must be stripped of any identifying characteristics. This includes document review. Using placeholders (like John Doe) for important information can help balance privacy requirements with document review needs. The uploading of evidence to public AI platforms for analysis should be prohibited. Using AI tools designed for the legal profession may further mitigate these concerns.
  • AI is only a starting point. Attorneys should encourage our teams to use AI to gain a 10,000-foot view of an issue, organize ideas, identify counterpoints, and highlight possible evidence. It should not be used for complete argument drafting.
  • Attorneys must perform their own research. Any “evidence” cited by AI must be verified through trustworthy channels. AI is not a trustworthy channel. Use Westlaw, Lexis, Fastcase, the courthouse, or a legal library.
  • All client- or court-facing work product must be human-drafted. Attorneys must always draft their own arguments. AI is a great tool to “bounce ideas off of,” but it cannot defend our clients’ rights and freedoms with the same quality and empathy that a fellow human can.
  • Attorneys must comply with local AI disclosure requirements. Some venues now require attorneys to disclose AI-assisted content in legal documents. Be sure to check local rules before filing.
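For readers comfortable with a little scripting, the placeholder approach described above can be automated before any text is pasted into an AI tool. The following is a minimal, illustrative sketch only; the names, mapping, and patterns are hypothetical, and any real redaction workflow should still be reviewed by a human before material leaves the firm.

```python
import re

# Hypothetical identifier-to-placeholder mapping; in practice this would be
# built per matter from the client file, not hard-coded.
REPLACEMENTS = {
    "Jane Smith": "Client A",
    "Acme Holdings LLC": "Company B",
    "jane.smith@example.com": "[EMAIL]",
}

# Pattern for U.S. Social Security number-shaped strings (illustrative).
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def anonymize(text: str) -> str:
    """Swap known identifiers and SSN-shaped strings for neutral placeholders."""
    for real_name, placeholder in REPLACEMENTS.items():
        text = text.replace(real_name, placeholder)
    return SSN_PATTERN.sub("[SSN]", text)

prompt = ("Summarize the indemnity clause Jane Smith signed with "
          "Acme Holdings LLC (SSN 123-45-6789).")
print(anonymize(prompt))
```

A simple pass like this will not catch every identifier (misspellings, nicknames, and context clues slip through), which is why it supplements, rather than replaces, human review of each prompt.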

Use Protection

Remember, AI is a new and fast-developing technology. Like all new technology, the scale of the risk is unknown, and the stakes change daily. So, it’s important to guard against foreseeable issues while fortifying against unknown attacks. Best practices include:

  • System updates. Technology platforms are continually updating to help fix “bugs” and privacy concerns. Lawyers must ensure we keep our systems up-to-date in order to avoid being vulnerable to attack.
  • Multi-factor authentication (MFA). With data breaches on the rise and credential misuse ranking as a top concern, requiring MFA is an easy way to help prevent unwanted individuals from accessing our AI accounts. Most platforms offer optional MFA. Opt in!
  • Encryption in transit and at rest. Most AI platforms use encryption to protect data, though the extent varies by provider. Subscriptions may offer additional security features. Traditional technology platforms, such as basic email and cloud storage, often encrypt data in transit but may lack end-to-end encryption unless upgraded. As we add more technology into our law practices, we must also add additional security measures to protect against bad actors.
  • Retainer agreements. On top of third-party privacy controls, attorneys will benefit from creating their own layer of liability protection through their retainer agreements. Our ethical standards require us to inform our clients as to how we intend to handle their matter and to obtain informed consent (see ABA Model Rule 1.4 (Communications)). For firms using AI and other technology, client agreements should include provisions that reflect clients’ approval of the use of AI and their understanding that electronic communication and storage may be more susceptible to attack.
  • Anonymized prompts. I’m saying it again for those in the back! As user error is a leading cause of data breaches, do not rely solely on third-party privacy controls. Sterilizing AI prompts of any identifying information not only helps to protect us and our clients in the event of a breach or subpoena, but it also helps to guard against AI bias.

Be Smart and Stay Vigilant

As we all know, AI is not foolproof, and it is constantly changing. The only way to ensure it works for us and not against us is to be aware of pitfalls and constantly monitor it for new issues. As solos, we wear multiple hats and often do not have additional support to share the burden of running a vigilant business. This may put us at higher risk of falling into an AI mishap. To keep ourselves accountable, consider the following:

  • Test AI for bias before use and adjust. Before using AI as a research source, attorneys must learn a technology’s original biases. Frame prompts using diverse positions. Does the system answer a question differently if it believes we or our hypothetical subject is a different race, gender, or religion? What about quality case identification? Does it properly weight significant holdings, or does it ignore or underrepresent certain arguments? We must adjust prompts and expectations accordingly and train our AI to become more neutral, recognize important legal principles, and identify contradictory evidence (consider ABA Model Rule 3.3 (Candor Toward the Tribunal)).
  • Regularly interact with AI from a client or opposing counsel’s position. Using AI to answer emails or calls? Attorneys must make sure to regularly interact with our AI from a third-party perspective to ensure it is functioning properly and that its tone matches our brand.
  • Schedule regular AI testing and training. We must make a point to regularly test and train ourselves and our AI. At regular intervals, attorneys should block off time to ensure our human and AI systems are trained properly. This should include checking our AI for tone, bias, and hallucinations, researching any newly found risks, and availing ourselves of third-party trainings. Do not expect it to happen naturally. Block off time on the calendar to keep yourself accountable.
  • Commit to keeping AI knowledge current. In addition to regularly scheduled AI research, attorneys should consider setting Google and legal platform alerts to advise when there is news regarding our AI platforms of choice.

Using AI with Confidence

By implementing smart security protocols, ensuring humans undertake the writing and review process, and keeping privacy central to our practices, solos can utilize AI with confidence and efficiency. AI, when wielded appropriately, is not to be feared, but celebrated.

Disclosure: AI was used to outline and review this article. A human wrote and researched it. During its review, AI attempted to remove the human’s story about its social-norm hallucination and also claimed its privacy policy did not allow for the saving of information despite its published policy to the contrary.
