
ARTICLE

Webinar Recap: Cloud Privacy and Artificial Intelligence—Trends and Legal Implications

Juliette Caminade, Maksym Khomenko, and Quyen Ha


Artificial intelligence (AI) and cloud computing are reshaping industries, challenging regulatory frameworks, and sparking ethical debates. During Cybersecurity Awareness Month, the ABA Antitrust Law Section’s Privacy and Information Security Committee convened a panel of experts to discuss these pressing issues in a program titled “Cloud/Privacy/AI: Trends and Legal Implications,” co-sponsored by the Media & Technology Committee. Moderated by Trisha Grant, a liaison from the FTC, the panel featured Kevin Fumai (Oracle), Tyler Chou (Tyler Chou Law for Creators), Junsu Choi (Keystone), and Tatiana Rice (Future of Privacy Forum).

Over the course of the discussion, the panelists provided an overview of AI’s transformative potential, the regulatory responses it necessitates, and practical strategies for navigating the challenges these technological developments pose.

Generative AI: Overview and Trends

Mr. Choi opened the discussion by explaining generative AI, including how the underlying technologies evolved. These technologies, which underpin popular tools like ChatGPT, can generate new content, including text, images, and audio. Mr. Choi explained that, at their core, generative AI models are trained to predict the next token in a provided data sequence (e.g., the next word in a sentence). Scaling this simple objective across trillions of examples enables the models to develop a rich understanding of language, concepts, and world knowledge, making them highly versatile across a wide range of tasks.
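
To make the training objective concrete, here is a minimal sketch of next-token prediction, assuming PyTorch. The tiny vocabulary, model, and random batch are all illustrative, and a small recurrent network stands in for the Transformer architectures that production systems actually use.

```python
# Minimal, illustrative sketch of next-token prediction training.
# Assumes PyTorch; all sizes and names here are hypothetical.
import torch
import torch.nn as nn

vocab_size, embed_dim = 100, 32

class TinyLM(nn.Module):
    """Toy language model: embed tokens, contextualize, score the next token."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.LSTM(embed_dim, embed_dim, batch_first=True)  # stand-in for a Transformer
        self.head = nn.Linear(embed_dim, vocab_size)

    def forward(self, tokens):
        hidden, _ = self.rnn(self.embed(tokens))
        return self.head(hidden)  # next-token logits at every position

model = TinyLM()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# A random batch of token sequences; inputs are positions 0..n-1,
# targets are the same sequence shifted by one (positions 1..n).
seq = torch.randint(0, vocab_size, (8, 16))
inputs, targets = seq[:, :-1], seq[:, 1:]

optimizer.zero_grad()
logits = model(inputs)
loss = loss_fn(logits.reshape(-1, vocab_size), targets.reshape(-1))
loss.backward()   # one gradient step toward better next-token prediction
optimizer.step()
```

Real systems apply exactly this shifted-by-one objective, just with Transformer models and trillions of tokens rather than a toy batch.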

Mr. Choi emphasized that the success of generative AI has relied on advancements such as the Transformer architecture, which allows machine learning models to scale efficiently with increased computational power, data, and model size. He noted that training these models often requires high-quality data comprising trillions of tokens from sources such as internet archives, books, academic publications, and Wikipedia.

Mr. Choi concluded with a discussion of considerations in developing and deploying generative AI applications related to:

  • Data privacy: the importance of ensuring that confidential and/or personal information is neither included in the training data nor extractable from trained models (a minimal filtering sketch follows this list).
  • Cybersecurity vulnerabilities: the need for robust security measures in both model development and deployment because (1) generative AI expands the potential attack surface (i.e., the number of entry points an attacker can exploit) and (2) malicious actors can manipulate models to generate harmful outputs.
  • Bias: generative AI and AI models generally reflect the data on which they are trained. As such, Mr. Choi emphasized the importance of implementing guardrails throughout model development and deployment processes to ensure fairness and avoid bias.
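
As a concrete, if deliberately simplified, illustration of the data privacy point above, the sketch below screens documents for obvious personal identifiers before they enter a hypothetical training corpus. The regular expressions are illustrative stand-ins; real pipelines use far more sophisticated detection and redaction.

```python
# Illustrative pre-training filter: drop documents containing obvious
# personal identifiers. The patterns below are hypothetical examples,
# not a complete PII taxonomy.
import re

PII_PATTERNS = [
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),   # email addresses
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),     # US SSN-like numbers
    re.compile(r"\b(?:\d[ -]?){13,16}\b"),    # card-number-like digit runs
]

def contains_pii(text: str) -> bool:
    """Return True if the text matches any known PII pattern."""
    return any(p.search(text) for p in PII_PATTERNS)

corpus = [
    "The Transformer architecture scales well with data and compute.",
    "Contact me at jane.doe@example.com for details.",   # hypothetical document
]

clean_corpus = [doc for doc in corpus if not contains_pii(doc)]
print(clean_corpus)  # only the first document survives the filter
```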

The AI Regulatory Landscape in the United States

Ms. Rice turned the discussion to AI regulation in the United States, which involves a combination of “hard law” (laws passed by a legislative body that carry enforcement mechanisms) and “soft law” (guidance and regulatory frameworks).

On the hard law side, Ms. Rice first highlighted that AI is not exempt from existing laws, such as those on data privacy, civil rights, and antitrust, and cannot be used as a shield from liability. When examining how existing laws apply to AI, lawmakers focus on clarifying how those laws should address AI-specific issues, with federal agencies such as the Federal Trade Commission (FTC) taking an active role in enforcement against the misuse of AI, as demonstrated by the Rite Aid case. There, an AI-powered facial recognition system designed to detect suspicious activity in Rite Aid stores produced discriminatory outcomes due to a lack of AI governance. Ms. Rice discussed how this case, along with scientific research, has led much state-level legislation to focus on mitigating algorithmic bias and discrimination.

Ms. Rice then described how local jurisdictions are addressing AI governance using (1) a risk-based approach, which focuses on mitigating algorithmic bias and discrimination in high-risk use of AI, and (2) a technology-specific approach, which focuses on specific technologies such as generative AI.

Ms. Rice used Colorado and New York City as examples of local jurisdictions implementing a risk-based approach. Colorado passed the Colorado AI Act, which aims to protect consumers from algorithmic discrimination in high-risk settings such as employment, housing, education, and healthcare. It seeks to accomplish this by, among other things, mandating detailed documentation between parties, risk management programs, and consumer rights protections. New York City similarly focuses on mitigating bias in employment through bias audits. As an example of the technology-specific approach, Ms. Rice discussed California, which has passed numerous laws targeting generative AI, including requirements for identifying synthetic content, regulations against deepfakes, and transparency around data usage (e.g., copyrighted data).

On the soft law side, Ms. Rice highlighted that federal agencies are creating guidance for applying AI technologies. For example, the National Institute of Standards and Technology (NIST) has developed an AI risk management framework and continues to refine it with a generative AI focus. Agencies such as the FTC, the U.S. Equal Employment Opportunity Commission (EEOC), and the U.S. Department of Health and Human Services (HHS) also provide guidance to clarify how current regulations apply to AI.

Ms. Rice pointed to two tools developed by the Future of Privacy Forum. The first tool is the Generative AI Checklist, a resource designed to guide companies in responsibly implementing generative AI systems within their organizations. Developed in collaboration with over 30 practitioners and experts, the checklist emphasizes a comprehensive and practical approach to generative AI governance.

Key recommendations from the checklist include:

  • Leverage Existing Frameworks: Companies need not reinvent policies but should integrate generative AI considerations into their current processes.
  • Employee Awareness and Training: Employees must be educated about the risks of generative AI, including the possibility of incorrect, outdated, or biased outputs.
  • Use Case Disclosures: Organizations should communicate how generative AI is used in the organization and implement oversight mechanisms for safe usage.

The second tool is a report focusing on AI best practices for human resources and employment decisions, crafted alongside leading employment software vendors. This framework, informed by guidelines from the EEOC and NIST, aims to ensure non-discrimination through, among other means, conducting due diligence on vendors to understand how their AI tools are tested for demographic biases; testing for biases when additional data is integrated into existing solutions; and clarifying roles and responsibilities throughout the AI life cycle.

Ms. Rice explained that while platforms historically favored an opt-out approach to data collection, regulations have been shifting toward more transparent practices, particularly in AI and data privacy. She highlighted the ineffectiveness of the current notice and consent system, which often fails to inform consumers adequately or promote ethical business practices. As awareness increases about the use of personal, copyrighted, and proprietary information in AI training, there is a push for opt-in consent as a starting point.

AI Governance: A Corporate Legal Counsel Perspective

Mr. Fumai highlighted the need for a tailored, comprehensive organizational strategy rather than a “one size fits all” approach to AI governance. Such a strategy should draw on perspectives from people with different levels of technical, operational, and legal expertise so that each can contribute effectively toward shared objectives.

To implement an effective organizational strategy, Mr. Fumai emphasized the importance of addressing information gaps within companies. By sharing knowledge across domains, businesses can break down longstanding silos, especially in larger organizations. This practice fosters collaboration and nurtures a culture of innovation, encouraging employees to question current practices (“why?”), explore alternatives (“why not?”), and envision improvements (“what if?”). This mindset shift enables organizations to evolve their processes and foster responsible innovation.

Building on the importance of collaboration, Mr. Fumai emphasized that the next key step for organizations is effective monitoring of the evolving regulatory and industry landscape. This involves leveraging resources, such as insights from external organizations, to understand not only legal requirements but also broader industry trends and standards. Monitoring can extend beyond compliance to include analyzing competitor strategies and identifying emerging best practices. This proactive approach allows organizations to shift from merely considering what must be done to comply with applicable regulations to focusing on what should be done to align with ethical and strategic goals.

In response to follow-up questions, Mr. Fumai outlined strategies for companies to maximize the benefits and minimize the risks of adopting AI. He emphasized modern risk intelligence, which acknowledges the dynamic and context-specific nature of those risks. Companies must shift from traditional approaches to one that holistically evaluates both the risks of taking action and the risks of inaction. This shift encourages organizations to explore and embrace the right risks, creating opportunities for innovation while ensuring accountability. Within this mindset, Mr. Fumai emphasized the important role of in-house counsel in identifying, explaining, managing, and quantifying risks, enabling informed decision-making and demonstrating accountability. He also highlighted the importance of continuously monitoring companies’ public statements to avoid inaccurate or misleading claims, incentivizing adherence to guidelines within the organization, and collaborating across functions such as marketing and compliance. Finally, he pointed to feedback loops from corporations to governance authorities as a way to inform decisions and foster collaboration.

The Creator Economy: Navigating AI’s Impact

Tyler Chou reflected on the evolution of the creator economy, highlighting its rapid growth, particularly during the pandemic. What started as a creative outlet for many has matured into a thriving industry, projected to reach a half-trillion-dollar valuation by 2027. Ms. Chou described this evolution as emblematic of the modern “American dream,” where individuals can create and share content with millions of viewers through platforms like YouTube, TikTok, and Instagram.

The advent of AI, including tools like ChatGPT, has further transformed the landscape in the creator economy. While AI’s use in video and music creation is still in its early stages, its rapid advancement signals significant potential for creators. However, these opportunities also bring challenges. Ms. Chou emphasized the need for stronger protections for creators and younger audiences, particularly in light of concerns about the potentially addictive nature of services like TikTok.

Ms. Chou also addressed content creators’ concerns about protecting their intellectual property (IP) amid the rise of AI. In her view, creators are often exploited through unfair contracts with large companies and through the unauthorized use of their work in AI training and scraping. To mitigate these issues, Ms. Chou shared a nine-point framework for creators to protect their rights:

  1. Copyright Protection: Register copyrights and establish clear terms of use and licensing policies, especially on personal websites.
  2. Creative Commons Licensing: Use licenses that allow attribution while restricting commercial or AI-related uses of content.
  3. Opting Out of AI Training: Actively opt out of data training on services like YouTube, Google, Reddit, and LinkedIn, which often default to participation. Ms. Chou also emphasized that companies should adopt a more user-friendly opt-in approach rather than defaulting to automatic opt-ins, which, as she argued, benefit the platforms but often leave creators unaware of their participation in data training programs. She highlighted the need for greater transparency and education to empower creators, stressing that most are unaware of these defaults. Ms. Chou called on large platforms, such as Google, YouTube, and Reddit, to take responsibility for improving their practices and for attorneys and advocates to raise awareness and hold these companies accountable.
  4. Digital Watermarking: Apply visible or digital signatures to content to deter unauthorized use and establish proof of ownership.
  5. Metadata and Invisible Tags: Use hidden markers to make content traceable if scraped (a minimal tagging sketch follows this list).
  6. Restrict Access: Utilize tools to limit content exposure to unauthorized scrapers.
  7. Monitor for Misuse: Employ tools like Google Alerts or reverse image searches to identify potential violations, acknowledging this can be labor-intensive.
  8. Leverage Legal Remedies: Pursue action under copyright and IP laws for unauthorized use.
  9. Public Advocacy: Advocate for creators by engaging with platforms and raising awareness to protect those who cannot advocate for themselves.
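
As one simple illustration of points 4 and 5, the sketch below embeds ownership metadata in a PNG image, assuming the Pillow library; the filename and tag values are hypothetical. Because scrapers can strip metadata, this complements, rather than replaces, copyright registration and visible watermarks.

```python
# Illustrative sketch: embed ownership metadata in a PNG with Pillow.
# The filename and tag values are hypothetical examples.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

image = Image.open("artwork.png")  # hypothetical creator file

# Attach human-readable text chunks asserting ownership and usage terms.
meta = PngInfo()
meta.add_text("Copyright", "(c) 2025 Example Creator. All rights reserved.")
meta.add_text("Usage-Terms", "No AI training or scraping without a license.")

image.save("artwork_tagged.png", pnginfo=meta)

# Verify the tags survive a round trip.
print(Image.open("artwork_tagged.png").text)
```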

Balancing Traditional Roles with Emerging AI Risks

The panelists discussed challenges faced by smaller in-house legal teams with limited resources as they navigate traditional advisory roles alongside the growing demands of AI governance.

Mr. Fumai emphasized the importance of adopting a generalist mindset, encouraging legal professionals to view everyone in the organization as a client and collaborate across departments to tackle AI-related challenges. By fostering teamwork and stepping beyond narrowly defined roles, in-house legal teams can identify compliance gaps early and develop tailored, context-specific solutions to address emerging risks effectively.

Ms. Chou provided insights from the creator economy, stressing the value of embracing AI as a tool to enhance human creativity rather than fearing it. She highlighted that while AI can generate content, it often lacks the depth and emotional connection of human storytelling. By focusing on these unique strengths, creators and companies alike can use AI to amplify creativity and foster meaningful engagement with audiences.

Ms. Rice emphasized the need to embrace AI regulation as an opportunity rather than a burden. She acknowledged that the regulatory landscape can be daunting, particularly for teams with limited resources. However, she encouraged starting small by documenting AI usage, conducting research, and prioritizing governance in high-risk or consumer-facing applications. Ms. Rice stressed that incremental steps toward AI governance can provide a solid foundation, even if perfection is not immediately achievable.
