August 10, 2023 Column

Get the SciTech Edge: Membership & Diversity Committee News

Charlene C. Goldfield and Joanne Charles

Brave New World: The promises and challenges of generative AI legislation

We live in exciting times. New EU legislation on AI and promises from US law enforcement to address bias signal new challenges for developers and practitioners. The European Parliament has approved the EU AI Act, a landmark set of rules for artificial intelligence. Among its provisions, the EU AI Act would require generative AI systems to be reviewed before commercial release and would ban real-time facial recognition. Like 2018’s EU General Data Protection Regulation (GDPR), the EU AI Act is likely to become a global standard, testing how far the law can shape AI development and mitigate its potential impacts.

The EU AI Act seeks to create harmonized rules for AI systems in the EU, following a risk-based approach to development and banning certain practices that may create unacceptable risk to the health and safety of individuals. The EU AI Act establishes transparency rules and mandatory requirements for high-risk AI systems and operators.

The United States also seems poised to address the risks that AI proliferation presents. In April, US law-enforcement officials announced their resolve to combat discrimination and bias arising from the use of artificial intelligence in areas such as lending and housing. Rohit Chopra, director of the Consumer Financial Protection Bureau, signaled on a call with reporters that officials from the Department of Justice, Federal Trade Commission, and Equal Employment Opportunity Commission have noted the risk of AI bias and have committed staff resources to keep up with the explosion of new uses for AI tools. We can expect only more regulation to shape the development and use of AI.

But how ripe is this area for regulation? The ABA has been watching this technology develop for several years. In 2017, the ABA published an article about how AI can benefit a legal practice. The authors noted AI’s benefits at the time, but without clear policies on the use and limits of these tools, many of the failures that affected early adopters of AI may resurface across industries and sectors.

New legislation can also shape how consumers experience AI. Laws requiring transparency in the processes that underlie AI decision-making can alleviate some concerns about improper use. The first step toward fair outcomes from AI tools must be education working hand in hand with legislation. Educating the public about the use, benefits, and limits of AI can protect users from deepfakes, misinformation, and other negative effects.

As attorneys, scientists, lawmakers, and specialists, we watch closely as new laws develop to address new technology. The Membership and Diversity Committee of the ABA Science & Technology Law Section is examining how these new regulations seek to protect users and consumers from potential harms. We have more to fear from inaction than from smart legislation. We have a part to play in how this technology develops, for the better.

    The material in all ABA publications is copyrighted and may be reprinted by permission only.

    Charlene C. Goldfield

    Young Lawyers Division Liaison for the ABA Cybersecurity Legal Task Force

    Charlene C. Goldfield is a co-chair of the MAD Committee and the Homeland Security Committee, as well as the Young Lawyers Division Liaison for the ABA Cybersecurity Legal Task Force. Charlene is currently a national security federal government attorney in Washington, D.C.

    Joanne Charles


    Joanne Charles is a co-chair of the MAD Committee and a senior corporate counsel in the Global Privacy & Regulatory Affairs group within Microsoft’s Corporate and External Legal Affairs division. Prior to joining Microsoft, she focused her practice on the regulation of health and life sciences entities, health information technology, and data protection.