Artificial intelligence (AI), once the stuff of imagination, no longer lives in the realm of science fiction. AI has existed in the real world at some level for many years, but the recent arrival of ChatGPT brought generative AI (GenAI) into the mainstream. GenAI now affects us in very real ways, both positive and negative. On the positive side, it has opened the door to many uses, filling a number of roles and generating output such as text and images. It has brought AI to the forefront, fulfilling the promise of AI assistants, which no longer just help us do legal research but now help us conduct more sophisticated research, write emails, respond to text messages, keep calendars current, make calls for us, drive cars, and manage investments. GenAI can even help us write contracts or briefs. When its primary role in law was legal research, it helped all lawyers, regardless of their practice area.
Recent innovations and the dramatic expansion of GenAI's capabilities have moved it squarely into the world of elder law. GenAI can help attorneys practice elder law just as it helps attorneys in any other field, and it offers some benefits specific to elder law practice, such as tracking changes in a senior's behavior. It also offers many tools that make daily life easier for senior citizens to navigate. GenAI can help monitor the health, safety, and security of seniors, benefiting both the seniors themselves and those who provide their care and assistance.
Let’s never forget, however, that technology always presents itself as a double-edged sword. Just as AI can help attorneys practicing elder law, those who care for seniors, and seniors themselves, it can also facilitate efforts by scammers, cybercriminals, and other nefarious actors to take advantage of the elderly and inflict serious emotional and financial, if not physical, harm upon seniors.
This is an exciting moment for senior citizens and their caregivers, as well as elder law attorneys, advocates, and policymakers. However, it is also more than a little daunting. AI can improve access to justice, help protect seniors from fraud, and, by tracking their behavior, make capacity assessments more objective. However, it also comes with real risks: data privacy concerns, the potential for algorithmic bias, digital manipulation, and many legal gray areas. It can also provide fraudsters and other bad actors with the means to defraud seniors and exploit their increased vulnerability. As a result, we must exercise extreme caution in how we use AI in elder law and in dealing with seniors.
Let’s examine AI’s implications for elder law, the opportunities it creates, the dangers we can’t ignore, and what we must do to stay ahead of the curve. Please note that the examples referenced in the remainder of this article do not represent the full universe of good or bad uses of AI; they illustrate, for purposes of discussion, what has already occurred.
The Good: How AI Can Help Older Adults and Their Advocates
1. Bringing Legal Help to More People
One of the most significant upsides of AI in elder law is accessibility. Many older adults don’t have easy access to an elder law attorney, especially those living in rural areas or underserved communities. AI-powered legal tools can help bridge that gap by offering affordable, user-friendly options for drafting documents like wills, healthcare proxies, and powers of attorney.
Example: AI Document Tools
Imagine an 85-year-old woman who wants to update her will but lives two hours from the nearest attorney. She no longer has a driver’s license, and no close friends or relatives live nearby to drive her to see an attorney. With the help of an AI platform, she can go online and create a basic estate plan for herself in under an hour. These platforms use natural language processing to guide users through key questions and generate documents that are legally valid in many states. But—and it’s a big but—these tools do not always work perfectly. The fact that a document, such as a will, qualifies as legally valid in a particular state does not mean that it accurately reflects the desires of the senior who created the document. Ask yourself this question: “Do you want your 85-year-old mother to use such forms to draft her will, or a trust, or otherwise plan her estate without the assistance of an attorney?” Is your answer the same if she has $200,000 or $1,000,000 in assets? Where do you draw the line?
Real-World Scenario: An Ambiguous Do-It-Yourself Will
In 2023, a widow in rural Missouri used an online AI tool to draft her will. She selected vague options about dividing her estate and never had the document properly witnessed. After she passed, her children fought over the meaning of “equal shares” since the wording didn’t match state law. The probate judge ultimately invalidated the will, sending the estate into intestacy.
This case illustrates a key lesson: AI tools can help but require careful use, ideally with human legal review.
2. Using AI for Capacity Monitoring
One of the trickiest areas in elder law is determining mental capacity. Can someone still make financial decisions? Do they have testamentary capacity? Can they make or revoke a power of attorney? Do they need a guardian? If so, over the estate, the person, or both?
Traditionally, when doubt arises, answering these questions has relied on physician evaluations, court hearings, and sometimes anecdotal evidence. Sometimes the attorney serves as the only gatekeeper, either determining that the individual has contractual or testamentary capacity and allowing them to sign estate planning documents, or concluding that the individual lacks such capacity and refusing to permit their execution. Often, such decisions rest on nothing more than brief, casual observations or anecdotal information from the relative who brought the senior citizen to the attorney to prepare the documents. If you are in the position of that attorney and the circumstances do not make you wonder whether you are looking at undue influence, perhaps you need to examine things more carefully. However, AI might help by offering a more data-driven view of how a person’s cognition has changed over time, which may allow a more accurate assessment.
Example: Smart Devices and Cognitive Clues
Imagine a client in an assisted living arrangement. The client uses a smart speaker, like Amazon Alexa, to manage his day. Over time, the AI behind Alexa notices changes in his speech—more hesitations, repeated questions, and confusion with routine tasks. With proper consent obtained in advance, that data could help provide a broader and more accurate assessment of his cognitive decline.
In a 2024 California case, a daughter petitioned for guardianship, citing AI-analyzed voice data from her father’s smart home device. The AI flagged unusual speech patterns linked to dementia. While the court ultimately required a live medical assessment, it acknowledged the AI data as “supportive evidence.”
This opens the door to AI-assisted monitoring, which could be a game changer for spotting early signs of incapacity—but it also raises big questions about privacy and reliability.