Experience April/May 2024

The Top 5 Issues You Should Know About AI

Jeffrey M Allen and Ashley Hallene

Summary

  • AI operates based on the data it’s trained on and the algorithms that drive its processes.
  • Lawyers must understand the data protection laws applicable to their jurisdictions, and advise their clients on how to comply with these regulations when implementing AI solutions.
  • Lawyers must stay informed about these developments to ensure that AI implementations are compliant with current and upcoming regulations.

In the rapidly evolving landscape of artificial intelligence, lawyers are faced with challenges that stretch across ethical, privacy, intellectual property, liability, and regulatory domains.

As AI continues to transform industries, including the legal profession, staying abreast of regulatory changes and understanding the multifaceted implications of AI’s use becomes imperative for lawyers. For now, here are what we believe are the five biggest challenges with AI today.

1. AI lacks a moral compass

AI systems, including advanced machine-learning models and algorithms, don’t possess inherent ethical values, principles, or the ability to make moral judgments. Unlike humans, who can consider the ethical implications, societal norms, and moral values when making decisions, AI operates based on the data it’s trained on and the algorithms that drive its processes.

This means AI systems can’t inherently understand right from wrong or evaluate the ethical consequences of their actions. They lack the ability to reason about ethical principles or to make decisions that align with societal values or moral considerations unless explicitly programmed to do so, which remains a complex and unresolved issue.

Essentially, the “moral compass” of an AI system is indirectly shaped by the data it’s trained on and the objectives it’s programmed to achieve. If the training data contains biases or the objectives don’t consider ethical implications, the AI’s actions may reflect or amplify these issues. Ethical decisions often require understanding nuanced contexts, including cultural, societal, and individual factors. AI systems generally lack the ability to fully understand these contexts and the subtleties that can significantly impact moral judgments.
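
To see concretely how "bias in, bias out" works, consider a minimal, hypothetical Python sketch (our own illustration, not any real lending or hiring system): a toy model that learns nothing but historical approval rates will faithfully reproduce whatever skew those records contain.

```python
# A toy, hypothetical illustration: a "model" that learns only each group's
# historical approval rate and then applies it to new applicants.

historical_loans = [
    # (neighborhood, approved) -- "neighborhood" stands in for any proxy attribute
    ("north", True), ("north", True), ("north", True), ("north", False),
    ("south", True), ("south", False), ("south", False), ("south", False),
]

def train(records):
    """Learn each group's historical approval rate."""
    counts = {}
    for group, approved in records:
        total, yes = counts.get(group, (0, 0))
        counts[group] = (total + 1, yes + approved)
    return {g: yes / total for g, (total, yes) in counts.items()}

def predict(rates, group, threshold=0.5):
    """Approve a new applicant only if their group's historical rate clears the threshold."""
    return rates.get(group, 0.0) >= threshold

rates = train(historical_loans)
print(rates)                    # {'north': 0.75, 'south': 0.25}
print(predict(rates, "north"))  # True  -- bias in, bias out
print(predict(rates, "south"))  # False
```

Nothing in this code is malicious, and no step was programmed to discriminate; the skew comes entirely from the records the model was given. That is precisely the pattern that makes biased training data so difficult to detect, litigate, and regulate.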

Embedding ethical decision making into AI involves complex philosophical and technical challenges. Defining what’s ethical can vary significantly across different cultures and situations, making it difficult to program AI systems with a universal set of ethical guidelines that apply in all contexts. After all, who should be the model culture for ethics? Who’s responsible for unethical outcomes resulting from the AI’s decisions? Do you hold the developers, the users, or regulatory authorities accountable?

Consider the tale of Tay, a chatbot developed by Microsoft. Launched in March 2016, Tay was designed to learn from its interactions with users on Twitter and other social media platforms, aiming to engage in human-like conversations. Users could follow and interact with the chatbot (@TayandYou) on Twitter, and the bot would tweet back.

Sadly, within 24 hours of its launch, Tay began producing and tweeting highly offensive and inappropriate content, including racist, sexist, and inflammatory statements. The bot was designed to learn from users’ posts, and the users who figured this out somehow managed to train the bot to spew out statements like “Hitler was right I hate the jews” and “Ted Cruz is the Cuban Hitler.”

While it’s unsurprising the bot encountered an overwhelming number of trolls and hateful comments on what was then Twitter (now known as X), it’s frustrating that it lacked the judgment to avoid incorporating these views into its own tweets.

Lawyers must be vigilant about the ethical implications of using AI to ensure the technologies they employ or advise on don’t perpetuate discrimination or violate ethical standards. This is particularly crucial in areas such as criminal justice, hiring practices, and loan approvals.

2. AI is chock-full of data privacy and security concerns

Since AI systems often require access to vast amounts of data, there’s a heightened risk of data breaches and privacy violations. Lawyers have an obligation to protect sensitive client information, to comply with legal and ethical standards, and to ensure that AI tools don’t become liabilities.

Lawyers must understand the data protection laws applicable to their jurisdictions, such as the EU’s General Data Protection Regulation (GDPR) and the California Consumer Privacy Act of 2018, and advise their clients on how to comply with these regulations when implementing AI solutions. These laws often require specific handling, processing, and protection of personal data.

Let’s take the GDPR as an example. It imposes strict rules on handling data, which can be inadvertently or systematically breached by AI systems. First off, the lack of transparency can be an issue. GDPR requires that data processing be transparent, but AI systems, especially those based on complex algorithms and machine learning, can be opaque, making it difficult for users to understand how their data is being used or how decisions are made. This “black box” problem can violate GDPR’s transparency requirements.

Another risk to consider is inadequate consent. GDPR mandates that consent for data processing must be informed, specific, and freely given. AI applications that use personal data may not always obtain explicit consent, especially if the data use extends beyond the original purpose for which consent was given. This can happen when AI learns and evolves, applying data in new, unanticipated ways.

Another concern is the inability to ensure data accuracy (a peril that has plagued more than a few lawyers in the U.S.). GDPR requires that personal data be accurate and, where necessary, kept up to date. AI systems, particularly those relying on machine learning, may propagate and even amplify errors or inaccuracies in data. That could lead to decisions based on incorrect information, thus violating GDPR.

3. AI causes IP woes

The use of AI raises complex questions about copyright, patents, and ownership rights. For example, determining the ownership and authorship of works generated by AI, such as texts, images, music, or code, is challenging.

Lawyers need to consider whether these works can be protected under copyright law and, if so, who holds the rights—the creator of the AI, the user of the AI, or potentially the AI itself (though current laws don’t recognize AI as an author or rights holder). Jurisdictions vary on whether AI creations qualify for protection because copyright laws traditionally require human authorship.

The AI’s algorithm may also violate another person’s intellectual property rights. AI systems are designed to process, generate, and even mimic human-like outputs based on the data they’re fed. Thus, there are scenarios where these capabilities intersect with, and possibly infringe on, the rights of IP holders.

AI-generated content can infringe another person’s copyright if it produces works that are substantially similar to copyrighted material. That’s especially true if the AI was trained on or directly incorporates parts of those copyrighted works without authorization.

Consider the situation that Hollie Mengert found herself in. Hollie is a Disney illustrator who found that her art style had been cloned as an AI experiment by a mechanical engineering student in Canada. The student downloaded several of Hollie’s pieces and used them to train a machine-learning model to reproduce her style.

It felt like a violation to Hollie, but is it fair? Can she do anything about it? Opinion remains divided on the issue. Questions loom over whether owning the copyright to material used to train a generative AI model gives you any legal claim to that model’s output. Lawyers will need to navigate these uncharted territories to protect their clients’ interests.

4. Who’s liable and who’s accountable?

As AI systems make more decisions, the question of who holds liability in cases of malfunctions or harm is a complicated one. Lawyers must consider who’s responsible when an AI system fails—the developer, the user, or the AI itself? And how could you hold AI responsible? This issue is particularly significant in sectors like autonomous vehicles, healthcare, and finance.

Consider the issues surrounding law enforcement use of facial recognition technology and its implications for privacy, bias, and civil liberties. For example, Detroit police wrongfully arrested Robert Julian-Borchak Williams in January 2020 due to a misidentification by facial recognition technology. This incident may be the first known case of a wrongful arrest based on an incorrect AI facial recognition match in the United States, but it likely won’t be the last.

Williams, an African-American man, was arrested in front of his family and detained for 30 hours based on accusations of shoplifting watches from a luxury retail store. His arrest and detention were based solely on facial recognition technology that misidentified him as a suspect from surveillance footage. It later emerged that the technology had incorrectly matched Williams’ driver’s license photo with surveillance video images of the actual perpetrator.

This case highlights the serious consequences of relying on AI technologies that may not be entirely accurate or fair. That’s especially true for people of color; studies have shown that they’re at a higher risk of misidentification by facial recognition systems.

Following this incident and others like it, several cities and organizations around the world have called for moratoriums, or outright bans, on the use of facial recognition technology by police and other government agencies. They cite concerns over privacy, racial bias, and the potential for misuse.

5. Regulating AI is complicated

The regulatory landscape for AI is evolving, with state and federal authorities focusing on how to regulate it. Lawyers must stay informed about these developments to ensure that AI implementations are compliant with current and upcoming regulations.

In August 2023, Thomson Reuters released its Future of Professionals report, which highlights how AI is transforming every aspect of work. While regulations will almost certainly evolve more slowly than AI, you should begin honing your practice’s policies on AI usage, privacy, and communication now.

Be honest with yourself, your clients, and the court on how you’re using AI technology. And be vigilant in verifying the accuracy and authenticity of anything you use or reference from AI. The pervasive integration of AI into our personal and professional lives presents a complex array of challenges that demand careful consideration and proactive management.

From ensuring that AI systems operate within ethical bounds to navigating the intricate landscape of intellectual property rights and addressing data privacy concerns, lawyers play a crucial role in shaping the future of AI. The incidents involving the chatbot Tay and Robert Julian-Borchak Williams serve as stark reminders of the potential repercussions of unchecked AI deployment.

As AI technologies become increasingly ingrained in our daily lives, the legal profession must remain at the forefront of advocating for responsible AI use, safeguarding individual rights, and contributing to the development of legal frameworks that keep pace with technological advancements. Embracing AI’s potential while mitigating its risks will require continuous learning, ethical vigilance, and an unwavering commitment to justice and fairness.
