
Voice of Experience

Voice of Experience: May 2025

How to Navigate the Risks and Rewards of a Technology-Driven Future

Jeffrey M Allen and Ashley Hallene

Summary

  • Scammers and cybercriminals can use AI to exploit seniors, causing emotional, financial, and even physical harm.
  • While AI can improve access to justice and protect seniors from fraud, it raises concerns about data privacy, algorithmic bias, and digital manipulation.
  • We must exercise extreme caution when using AI in elder law to ensure a fair, ethical, and just future for seniors.

Artificial intelligence (AI) no longer lives in the realm of science fiction. We have had AI in the real world at some level for many years, but the recent arrival of ChatGPT brought generative AI (GenAI) into everyday life, and it affects us in very real ways, both positive and negative. On the positive side, GenAI can fill a number of roles and generate output such as text or images. It has brought AI to the forefront, fulfilling the promise of AI assistants that no longer just help us do legal research but now also help us do more sophisticated research, write emails, respond to text messages, keep calendars current, make calls for us, drive cars, and manage investments. GenAI can even help us write contracts or briefs. When its primary role in law was legal research, it helped all lawyers, regardless of their practice area.

Recent innovations and the dramatic expansion of GenAI's powers have moved it squarely into the world of elder law. GenAI can help attorneys practice elder law, just as it helps attorneys in any other field, and it offers some benefits specific to elder law practice, such as tracking the behavior of seniors. It also offers many tools that make daily life easier for senior citizens to navigate. GenAI can help monitor the health, safety, and security of seniors, and it can help those who provide care and assistance to seniors keep track of the people in their charge.

Let’s never forget, however, that technology is always a double-edged sword. Just as AI can help attorneys practicing elder law, those who care for seniors, and seniors themselves, it can also facilitate efforts by scammers, cybercriminals, and other nefarious actors to take advantage of the elderly and inflict serious emotional and financial, if not physical, harm upon seniors.

This is an exciting moment for senior citizens and their caregivers, as well as elder law attorneys, advocates, and policymakers. However, it is also more than a little daunting. AI can improve access to justice, help protect seniors from fraud, and, by tracking their behavior, make capacity assessments more objective. However, it also comes with real risks: data privacy concerns, the potential for algorithmic bias, digital manipulation, and many legal gray areas. It can also give fraudsters and other bad actors the means to defraud seniors and take advantage of their increased vulnerability. As a result, we must exercise extreme caution in how we use AI in elder law and in dealing with seniors.

Let’s examine AI’s implications for elder law, the opportunities it creates, the dangers we can’t ignore, and what we must do to stay ahead of the curve. Please note that the examples referenced in the remainder of this article do not represent the full universe of good or bad uses of AI; they simply illustrate, for purposes of discussion, the kinds of things that have occurred.

The Good: How AI Can Help Older Adults and Their Advocates

1. Bringing Legal Help to More People

One of the most significant upsides of AI in elder law is accessibility. Many older adults don’t have easy access to an elder law attorney, especially those living in rural areas or underserved communities. AI-powered legal tools can help bridge that gap by offering affordable, user-friendly options for drafting documents like wills, healthcare proxies, and powers of attorney.

Example: AI Document Tools

Imagine an 85-year-old woman who wants to update her will but lives two hours from the nearest attorney. She no longer has a driver’s license, and no close friends or relatives live nearby to drive her to see an attorney. With the help of an AI platform, she can go online and create a basic estate plan for herself in under an hour. These platforms use natural language processing to guide users through key questions and generate documents that are legally valid in many states. But—and it’s a big but—these tools do not always work perfectly. The fact that a document, such as a will, qualifies as legally valid in a particular state does not mean that it accurately reflects the desires of the senior who created the document. Ask yourself this question: “Do you want your 85-year-old mother to use such forms to draft her will, or a trust, or otherwise plan her estate without the assistance of an attorney?” Is your answer the same if she has $200,000 or $1,000,000 in assets? Where do you draw the line?

Real-World Scenario: Ambiguous Do-It-Yourself Will

In 2023, a widow in rural Missouri used an online AI tool to draft her will. She selected vague options about dividing her estate and never had the document properly witnessed. After she passed, her children fought over the meaning of “equal shares” since the wording didn’t match state law. The probate judge ultimately invalidated the will, sending the estate into intestacy.

This case illustrates a key lesson: AI tools can help but require careful use, ideally with human legal review.

2. Using AI for Capacity Monitoring

One of the trickiest areas in elder law is determining mental capacity. Can someone still make financial decisions? Do they have testamentary capacity? Can they make or revoke a power of attorney? Do they need a guardian? If so, over the estate, the person, or both?

Traditionally, when doubt arises, these questions rely on doctor evaluations, court hearings, and sometimes anecdotal evidence. Sometimes the attorney serves as the only gatekeeper, either determining that the individual has contractual or testamentary capacity and allowing them to sign estate planning documents, or concluding that the individual lacks such capacity and refusing to permit the execution of those documents. Often, such decisions rest on nothing more than brief, casual observations or anecdotal information from the relative who brought the senior citizen to the attorney to prepare the documents. If you are in the position of that attorney and the circumstances do not make you wonder whether you are looking at undue influence, perhaps you need to examine things more carefully. However, AI might help by offering a more data-driven view of how a person’s cognition has changed over time, which may allow a more accurate assessment.

Example: Smart Devices and Cognitive Clues

Imagine a client in an assisted living arrangement. The client uses a smart speaker, like Amazon Alexa, to manage his day. Over time, the AI behind Alexa notices changes in his speech—more hesitations, repeated questions, and confusion with routine tasks. With proper consent obtained in advance, that data could help provide a broader and more accurate assessment of his cognitive decline.

In a 2024 California case, a daughter petitioned for guardianship, citing AI-analyzed voice data from her father’s smart home device. The AI flagged unusual speech patterns linked to dementia. While the court ultimately required a live medical assessment, it acknowledged the AI data as “supportive evidence.”

This opens the door to AI-assisted monitoring, which could be a game changer for spotting early signs of incapacity—but it also raises big questions about privacy and reliability.
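To make the idea concrete, here is a toy sketch, in Python, of how a longitudinal speech-pattern check might work: measure the rate of hesitation fillers in each transcribed utterance, then flag a sharp departure from the person's own historical baseline. This is purely illustrative; the filler-word list, sample transcripts, and two-sigma threshold are all hypothetical, and real cognitive-assessment tools are far more sophisticated and clinically validated.

```python
# Illustrative sketch only: a toy heuristic for flagging drift in speech
# patterns over time. The filler list and threshold are hypothetical.
from statistics import mean, stdev

FILLERS = {"um", "uh", "er", "hmm"}

def filler_rate(transcript: str) -> float:
    """Fraction of words in a transcript that are hesitation fillers."""
    words = transcript.lower().split()
    if not words:
        return 0.0
    return sum(w.strip(".,?!") in FILLERS for w in words) / len(words)

def flag_drift(history: list[float], latest: float, sigmas: float = 2.0) -> bool:
    """Flag when the latest rate sits well above the historical baseline."""
    if len(history) < 2:
        return False  # not enough data to establish a baseline
    baseline, spread = mean(history), stdev(history)
    return latest > baseline + sigmas * max(spread, 1e-9)

# Hypothetical earlier utterances establish a near-zero baseline
history = [filler_rate(t) for t in [
    "please set a timer for ten minutes",
    "what is on my calendar today",
    "call my daughter please",
]]
latest = filler_rate("um uh what um was I uh asking um about")
print(flag_drift(history, latest))  # the spike in fillers gets flagged
```

Note the design choice: the comparison is against the individual's own history, not a population norm, which is exactly why consent to long-term data collection matters.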

3. Fighting Elder Financial Abuse with AI

It is well established that seniors are more vulnerable to scams. This vulnerability results from many factors, including an acknowledged lack of familiarity with technological devices. Financial exploitation is one of the fastest-growing forms of elder abuse. AI tools used by banks and financial technology companies continue to improve at spotting unusual spending patterns, like large wire transfers or duplicate charges, and flagging them in real time.

Case in Point: Early Fraud Detection Saves Thousands

In 2023, a bank in Florida used machine learning software to spot and stop a scam targeting an 88-year-old customer. The algorithm noticed an irregular transfer attempt for $9,800 to an out-of-state account. After a quick investigation, it turned out to be part of a “grandparent scam” where fraudsters impersonated her grandson using a deepfake voice. The bank blocked the transfer, and a few months later, the scammer was arrested after federal authorities traced the AI voice software.

These tools aren’t foolproof, but when paired with legal tools like conservatorships, they can offer new ways to protect vulnerable clients from losing everything.
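As a rough illustration of the kind of checks such systems layer together, consider the following Python sketch: a per-customer statistical outlier test combined with a simple new-payee rule. The thresholds, the dollar cutoff, and the sample history are hypothetical, not any bank's actual system.

```python
# Illustrative sketch only: a simplified version of the anomaly checks
# banks layer into real-time monitoring. All thresholds are hypothetical.
from statistics import mean, stdev

def flag_transaction(history: list[float], amount: float,
                     new_payee: bool, sigmas: float = 3.0) -> bool:
    """Flag a transfer that is a statistical outlier for this customer,
    or any sizable transfer to a payee never seen before."""
    if new_payee and amount >= 1000:
        return True  # large first-time payee: hold for review
    if len(history) < 2:
        return False  # not enough history for a statistical baseline
    baseline, spread = mean(history), stdev(history)
    return amount > baseline + sigmas * max(spread, 1e-9)

# Hypothetical routine monthly activity for a customer
history = [120.0, 85.5, 240.0, 60.0, 95.0]
print(flag_transaction(history, 9800.0, new_payee=True))   # flagged
print(flag_transaction(history, 90.0, new_payee=False))    # routine
```

The point of the two-rule design is that a $9,800 transfer to an unfamiliar out-of-state account trips both checks, while ordinary spending passes untouched.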

The Bad: The Legal Risks and Gray Areas

AI may be smart, but it’s not always safe, especially for older adults who may not be technologically savvy or who rely heavily on digital tools.

1. Privacy and Surveillance Worries

AI tools in elder care settings—such as wearable monitors, robotic companions, or smart sensors—can collect massive amounts of data. Who owns that data? Does HIPAA protect it? What happens if it is hacked? We need to figure out answers to these questions to ensure that we protect the rights of seniors.

Example: Cameras in Nursing Homes

Some long-term care facilities use AI-linked video monitoring to detect falls or patient distress. However, these cameras may also record private moments—bathing, dressing, or medical treatment—raising serious concerns about privacy, consent, and dignity.

Legal References:

  • HIPAA governs health data (45 C.F.R. §§ 160, 164), but it doesn’t clearly cover third-party tech vendors.
  • State privacy laws like the California Consumer Privacy Act (Cal. Civ. Code §§ 1798.100–1798.199) may apply, but enforcement is still limited.

Bottom line: AI in elder care must be deployed with full transparency and strong data protections.

2. Algorithmic Age Discrimination

Not all algorithms are fair. AI tools trained on biased data tend to acquire the biases in that data. As a result, they may well discriminate in whatever decisions they are trained to make, whether in healthcare triage, housing decisions, insurance, or something else.

Case Example: AI Denial of Long-Term Care Claims

In Illinois, an AI claims processing system used by an insurer started flagging older applicants as “high risk” and denying benefits or imposing higher rates. After consumer complaints, the Illinois Department of Insurance opened a probe, citing the Illinois Human Rights Act’s protections against age discrimination (775 ILCS 5/).

This example shows why it’s critical to audit AI systems and ensure they don’t reinforce harmful stereotypes or cut off services for seniors.
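A minimal version of such an audit can be sketched in a few lines of Python: compare the system's approval rates across age bands and report the gap. The data, the 65-and-over cutoff, and the single metric used here are hypothetical simplifications; real fairness audits examine many metrics across many protected attributes.

```python
# Illustrative sketch only: a minimal fairness audit comparing approval
# rates across age bands. The data and cutoff are hypothetical.
def approval_rate(decisions: list[tuple[int, bool]], over_65: bool) -> float:
    """Approval rate for applicants in (or out of) the 65+ band."""
    group = [ok for age, ok in decisions if (age >= 65) == over_65]
    return sum(group) / len(group) if group else 0.0

# (age, approved) pairs from a hypothetical claims log
decisions = [(72, False), (68, False), (70, True), (45, True),
             (51, True), (39, True), (80, False), (47, True)]

older = approval_rate(decisions, True)
younger = approval_rate(decisions, False)
print(f"65+: {older:.0%}  under 65: {younger:.0%}  gap: {younger - older:.0%}")
```

A gap this stark would not prove illegal discrimination by itself, but it is exactly the kind of disparity that should trigger a closer look at the model and its training data.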

3. AI-Enabled Elder Fraud and Deepfakes

We’ve seen a rise in scams using AI to create fake videos, emails, or voices. While problematic for all age groups, these scams pose greater dangers for seniors, whose increased vulnerability may make it harder for them to recognize a fake call or message.

Example: Deepfake Grandson Scam

In an Arizona case, a scammer used AI voice-cloning software to mimic an 18-year-old’s voice and trick his grandfather into wiring $10,000. Prosecutors used forensic analysis to trace the fraud back to the defendant, who became one of the first people convicted of using AI in an elder fraud scheme.

These scams can be emotionally devastating and financially ruinous. Elder law attorneys should warn clients and their families about new tech-based frauds and help them set up legal safeguards, such as trusted contact alerts or limited powers of attorney.

What the Law Says—and Where It’s Going

So far, no single law governs the use of AI in elder law. But here are a few sources to keep in mind:

  • The ABA Model Rules of Professional Conduct require lawyers to understand the technology they use (Rule 1.1, Comment 8).
  • The Elder Justice Act (42 U.S.C. § 1397j) supports programs to prevent elder abuse—but it doesn’t yet address AI.
  • The European Union’s AI Act, first proposed in 2021 and since adopted, classifies AI used in elder care as “high risk,” requiring strict oversight.

Before things settle, we will need more regulation, litigation, and case law. Meanwhile, elder law practitioners must pay attention, ask the right questions, and advocate for client protection.

What Should Lawyers and Policymakers Do?

  1. Stay technologically savvy. Understand how AI tools work before recommending them to clients or using them in practice. If you cannot explain an AI tool to your client, do not recommend it!
  2. Push for legislation. We need laws that protect older adults from AI-based discrimination and abuse, especially regarding data use and financial scams.
  3. Review AI-drafted documents. Don’t assume they’re valid or state-specific. Always verify, especially when dealing with powers of attorney, wills, or healthcare directives. Remember that you must address both the validity of the document and the question of whether it accurately reflects the intent of the individual who made it.
  4. Use AI wisely. It can be an excellent tool for drafting, research, and monitoring—but it’s no substitute for good lawyering.

Final Thoughts

AI has the potential to make life better for millions of older adults. It can help them stay independent, protect their finances, and access legal help more efficiently than ever. But it also brings new threats that didn’t exist a decade ago and revives others that had fallen by the wayside until AI reinvigorated them.

As elder law evolves, attorneys should carefully embrace technology’s benefits while fiercely guarding their clients’ rights, privacy, and dignity. Elder law has a digital future, and it’s up to us to ensure that it provides a fair, ethical, and just future.

References:

  1. Obermeyer, Z., Powers, B., Vogeli, C., & Mullainathan, S. (2019). Dissecting racial bias in an algorithm used to manage the health of populations. Science, 366(6464), 447–453.
  2. ABA Model Rules of Professional Conduct, Rule 1.1, Comment 8 (2012).
  3. European Commission. (2021). Proposal for a Regulation Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act). COM/2021/206 final.
  4. Health Insurance Portability and Accountability Act (HIPAA), 45 C.F.R. §§ 160, 164.
  5. California Consumer Privacy Act, Cal. Civ. Code §§ 1798.100 et seq.
  6. Illinois Human Rights Act, 775 ILCS 5/.
  7. Elder Justice Act, 42 U.S.C. § 1397j.
  8. U.S. v. Doe, Case No. 2:23-cr-187 (D. Ariz. 2023).
