It’s 2025, and artificial intelligence (AI) platforms such as ChatGPT, Claude, and Alexa are everywhere. This new technology may feel exciting, terrifying, or a little of both, but with a clear understanding of its strengths and weaknesses, solos can wield this tool with confidence to boost productivity. In this article, we’ll break down how firms can benefit from AI, the risks of using it improperly, and the best practices to ensure AI becomes an asset, not a liability. From selecting the right tools to keeping our clients safe, happy, and better represented, solos will learn to practice with confidence in this new era of high-tech law.
What AI Can Do for Law Firms
AI is a remarkably powerful technology that, when used responsibly, can boost a solo practitioner’s efficiency and productivity. As professionals who have limited time to do seemingly unlimited work, solos can uniquely benefit from these tools. Key areas in which AI is proving valuable include research, organization, review, communication, and scheduling.
Research
Lightning-fast research delivered in plain language is likely the most well-known use of generative AI (GenAI). While most of us have experience using legal databases for case research, swimming in these wonderfully deep oceans of information can sometimes feel like drowning. Generalized AI such as ChatGPT can now be used to gain a basic understanding of a topic, while legal-specific AI such as Westlaw Precision and Lexis+ AI can offer more nuanced and accurate legal information, with proper citations to boot.
Organization and Review
Perhaps the most useful function AI can offer a skilled professional is thought organization. As members of this learned profession, we know what we’re talking about—but sometimes it’s a challenge to organize it, especially for the lay reader. AI can be used to create outlines, identify counterpoints, draft summaries, and critically analyze emails, contracts, and case law. These platforms can also review the clarity and correctness of an attorney’s writing.
Communication and Scheduling
Raise your hand if you don’t get enough uninterrupted time to draft. Me, too! AI tools such as Motion can find, schedule, and prioritize blocks of time for big projects, while AI like Clara can email clients to schedule appointments. Platforms including Smith.ai function as an electronic answering service. Finally, chatbots can answer frequently asked questions on websites while gathering contact information for weekly e-blasts (which AI can write, too).
AI’s Limitations and Risks
Many attorneys remain hesitant to bring AI into their practices, whether due to security concerns, mistrust of analysis, ethical considerations, or technological insecurity. Their concerns are not without merit; AI is far from foolproof. Consequently, it is of utmost importance that AI not be used without oversight.
Hallucinated Legal Citations
One of the more chilling pitfalls of unchecked AI usage is the phenomenon of “hallucinations,” in which AI fabricates information, including cases, quotations, and citations, and presents it as fact. An unfortunate example occurred in the New York case Mata v. Avianca, Inc., 1:22-cv-01461 (S.D.N.Y.), where the plaintiff’s attorney used ChatGPT to draft his brief. ChatGPT cited six cases in support of the plaintiff’s position; all six were entirely fabricated. This did not go over well with the judge, who later sanctioned the offending lawyer.
It gets worse: AI tends to insist it is right and will double down until proven wrong. In Mata, plaintiff’s counsel asked ChatGPT whether the cases were real; ChatGPT replied that they were, and that they could be verified on Westlaw.
Interestingly, hallucinations can distort not only hard facts but also the social and industry norms being researched. Recently, I was using AI to research an industry with which I was unfamiliar. When I pressed ChatGPT for a source supporting a claimed industry norm, it deflected for 20 minutes before finally admitting that a numbered citation was not a verifiable source but merely a “placeholder.”
Bias
Another pitfall to watch out for is amplified bias. It is important to recognize that AI does not have human morals. AI simply gathers published information and condenses it, grouping and prioritizing the patterns it sees most often on the assumption that the majority view must be correct. But unfortunately, as we lawyers know, history and the majority are not always right or fair—especially when it comes to biases against minorities. Unfortunate examples of AI bias reinforcing discrimination include the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) algorithm, which a 2016 ProPublica analysis found falsely flagged Black defendants as likely to reoffend at nearly twice the rate of white defendants, and Amazon’s experimental AI recruitment platform, developed in 2014 and later scrapped, which boosted men’s résumés and de-ranked women’s.
Privacy and Ethical Compliance
Another significant concern for AI in the legal workplace is privacy. It is imperative that attorneys know how AI uses the information we feed it, as some AI platforms save, learn from, and later reproduce this previously private information.
An attorney who submits a contract, fact pattern, or client information to a consumer-grade platform such as the free version of ChatGPT may be breaching attorney-client privilege. Uploading a legal argument may expose our mental impressions, legal theories, and case strategy, jeopardizing work product doctrine protections. It is also notable that prompts, along with the times, dates, extent of research, and the accounts under which they were submitted, may be saved and reproduced, with implications for liability and professional malpractice. So, understanding exactly what we’re dealing with is crucial.
ChatGPT’s privacy policy states that for standard users, it collects and saves data (including user prompts, files, images, and audio) to analyze, aggregate, and develop new services for others. ChatGPT Enterprise, its business subscription, takes steps to protect user input, does not train on it, and deletes input after 30 days or per the customer’s retention policy.
Law-specific AI systems attempt to further protect client data. Lexis+ AI immediately anonymizes interactions and walls off search sessions so prompts cannot be used to inform other users. Information can be immediately destroyed by the user and is only saved for 30 days by default. Westlaw Precision defaults to three months of data retention, allows the user to immediately destroy search history data, and does not iterate from prompts.