2. AI is chock-full of data privacy and security concerns
Since AI systems often require access to vast amounts of data, there’s a heightened risk of data breaches and privacy violations. Lawyers have an obligation to protect sensitive client information, to comply with legal and ethical standards, and to ensure that AI tools don’t become liabilities.
Lawyers must understand the data protection laws applicable to their jurisdictions, such as the EU’s General Data Protection Regulation (GDPR) and the California Consumer Privacy Act of 2018 (CCPA), and advise their clients on how to comply with these regulations when implementing AI solutions. These laws often require specific handling, processing, and protection of personal data.
Let’s take the GDPR as an example. It imposes strict rules on the handling of personal data, rules that AI systems can breach either inadvertently or systematically. First off, a lack of transparency can be an issue. GDPR requires that data processing be transparent, yet AI systems, especially those based on complex algorithms and machine learning, can be opaque, making it difficult for users to understand how their data is being used or how decisions are made.
This “black box” problem can run afoul of GDPR’s transparency requirements. Another risk to consider is inadequate consent. GDPR mandates that consent for data processing be informed, specific, and freely given.
AI applications that use personal data may not always obtain explicit consent, especially if the data use extends beyond the original purpose for which consent was given. This can happen when AI learns and evolves, applying data in new, unanticipated ways.
Another concern is the inability to ensure data accuracy (a peril that has plagued more than a few lawyers in the U.S.). GDPR requires that personal data be accurate and, where necessary, kept up to date. AI systems, particularly those relying on machine learning, may propagate and even amplify errors or inaccuracies in data, leading to decisions based on incorrect information and thus violating GDPR.
3. AI causes IP woes
The use of AI raises complex questions about copyright, patents, and ownership rights. For example, determining the ownership and authorship of works generated by AI, such as texts, images, music, or code, is challenging.
Lawyers need to consider whether these works can be protected under copyright law and, if so, who holds the rights: the creator of the AI, the user of the AI, or potentially the AI itself (though current laws don’t recognize AI as an author or rights holder). Jurisdictions vary on whether AI-generated works qualify for protection because copyright law traditionally requires human authorship.
An AI system may also violate another person’s intellectual property rights. AI systems are designed to process, generate, and even mimic human-like outputs based on the data they’re fed, so there are scenarios where these capabilities intersect with, and possibly infringe, the rights of IP holders.
AI-generated content can infringe another person’s copyright if it produces works that are substantially similar to copyrighted material. That’s especially true if the AI was trained on or directly incorporates parts of those copyrighted works without authorization.
Consider the situation that Hollie Mengert found herself in. Hollie is a Disney illustrator who discovered that her art style had been cloned as an AI experiment by a mechanical engineering student in Canada. The student downloaded several of her pieces and used them to train a machine-learning model to reproduce her style.
It felt like a violation to Hollie, but is it fair? Can she do anything about it? Opinion remains divided. Questions loom over whether owning the copyright to the material used to train a generative AI model gives you any legal claim to the model’s output. Lawyers will need to navigate these uncharted territories to protect their clients’ interests.
4. Who’s liable and who’s accountable?
As AI systems make more decisions, the question of who holds liability in cases of malfunction or harm is a complicated one. Lawyers must consider who’s responsible when an AI system fails: the developer, the user, or the AI itself? And how could you even hold an AI responsible? This issue is particularly significant in sectors like autonomous vehicles, healthcare, and finance.
Consider the issues surrounding law enforcement use of facial recognition technology and its implications for privacy, bias, and civil liberties. For example, Detroit police wrongfully arrested Robert Julian-Borchak Williams in January 2020 due to a misidentification by facial recognition technology. This incident may be the first known case of a wrongful arrest based on an incorrect AI facial recognition match in the United States, but it likely won’t be the last.
Williams, an African-American man, was arrested in front of his family and detained for 30 hours on accusations of shoplifting watches from a luxury retail store. His arrest and detention were based solely on facial recognition technology that misidentified him as a suspect in surveillance footage. It later emerged that the technology had incorrectly matched Williams’ driver’s license photo with surveillance video images of the actual perpetrator.
This case highlights the serious consequences of relying on AI technologies that may not be entirely accurate or fair. That’s especially true for people of color; studies have shown that they’re at a higher risk of misidentification by facial recognition systems.
Following this incident and others like it, several cities and organizations around the world have called for moratoriums, or outright bans, on the use of facial recognition technology by police and other government agencies. They cite concerns over privacy, racial bias, and the potential for misuse.
5. Regulating AI is complicated
The regulatory landscape for AI is evolving, with state and federal authorities focusing on how to regulate it. Lawyers must stay informed about these developments to ensure that AI implementations are compliant with current and upcoming regulations.
In August 2023, Thomson Reuters released its Future of Professionals report, which highlights how AI is transforming every aspect of work. While regulations will almost certainly evolve more slowly than AI, you should begin honing your practice’s policies on AI usage, privacy, and client communication now.
Be honest with yourself, your clients, and the court about how you’re using AI technology. And be vigilant in verifying the accuracy and authenticity of anything you use or reference from AI.

The pervasive integration of AI into our personal and professional lives presents a complex array of challenges that demand careful consideration and proactive management.
From ensuring that AI systems operate within ethical bounds to navigating the intricate landscape of intellectual property rights and addressing data privacy concerns, lawyers play a crucial role in shaping the future of AI. The incidents involving Microsoft’s Tay chatbot and Robert Julian-Borchak Williams serve as stark reminders of the potential repercussions of unchecked AI deployment.
As AI technologies become increasingly ingrained in our daily lives, the legal profession must remain at the forefront of advocating for responsible AI use, safeguarding individual rights, and contributing to the development of legal frameworks that keep pace with technological advancements. Embracing AI’s potential while mitigating its risks will require continuous learning, ethical vigilance, and an unwavering commitment to justice and fairness.