Artificial Intelligence (AI) and Robotics Virtual National Institute 2020 Schedule


Wednesday, October 7, 2020 | Day 1

12:00-1:00 pm ET | Hot, But Cool: The Latest AI and Robotics Developments

Learn about new cases, statutes, and regulations in AI and robotics since the ABA's inaugural National Institute in January 2020. This panel will provide a snapshot overview of legal developments with rapid-fire summaries and actionable pointers on managing AI and robotics compliance and liability risks.

Speakers:

Stephen S. Wu, Shareholder, Silicon Valley Law Group; Past Chair, ABA Science & Technology Law Section [Moderator]
Colleen Chien, Professor of Law, Santa Clara University Law School
Preston Thomas, Privacy and Compliance Counsel, Dialpad, Inc.

1:15-2:15 pm ET | Contagion: Battling COVID-19 and Future Pandemics with AI and Robotics

COVID-19 has eclipsed the once-sunny national and world economy. At the same time, the U.S. healthcare system is under unprecedented strain, as it struggles to provide enough hospital beds and breathing apparatuses for patients and personal protective equipment for healthcare workers. How can AI help to track, diagnose, and treat COVID-19? Can telepresence robots facilitate treatment and minimize healthcare workers' exposure to affected patients? This panel will discuss the technology and resulting legal issues of using AI and robotics to battle COVID-19 and future pandemics.

Speakers:

Heather B. Deixler, Counsel, Latham & Watkins LLP [Moderator]
John Byrnes, Principal Computer Scientist, Advanced Analytics Group, SRI International
Derek Forman, Founder/Chairman, ClearFocus Innovations
Mark Hanson, CEO, Decoded Health, Inc.
 

2:30-3:30 pm ET | Systematic Bias: AI as Cause and Cure

The national dialogue on race relations following George Floyd's death has sparked new discussion of bias in artificial intelligence systems. Hiring, lending, and housing systems might discriminate because of the design of AI systems or the data used to create them, but they also hold the promise of decreasing bias. Media and legal journal articles have identified bias as a risk, but specific methods of mitigating bias are in short supply. This program goes beyond issue-spotting to provide actionable advice on mitigating bias in developing and operating AI systems. The panel will also cover methods to avoid discriminatory practices by companies operating robots, such as security and service robots.

Speakers:

Natalie A. Pierce, Partner and Chair, Labor and Employment Practice, Gunderson Dettmer [Moderator]
Jeffrey Brown, Diversity and Inclusion Research Fellow, Partnership on AI; Assistant Professor of Psychology, Minnesota State University, Mankato
Raluca Crisan, Co-Founder, etiq.ai; Director of Data Science, Merkle | Aquila
Travis LeBlanc, Member, U.S. Privacy and Civil Liberties Oversight Board; Partner, Cooley LLP
 

Wednesday, October 14, 2020 | Day 2

12:00-1:00 pm ET | Can AdTech Close the Sale? Using AI in Marketing and Advertising

Sales are the lifeblood of any business in a capitalist society, and advertisements drive sales. How does AI make advertising more effective? With AdTech AI systems ingesting massive amounts of personal data, advertisers can delve deep into consumers' personal lives. This panel will cover the opportunities and privacy challenges of AdTech AI systems and how advertisers can minimize their risks.

Speakers:

Peter McLaughlin, Partner, Culhane Meadows [Moderator]
Mauricio Paez, Partner, Jones Day
Noga Rosenthal, Chief Privacy Officer and General Counsel, Ampersand
Dominique Shelton Leipzig, Partner and Co-Chair, Ad Tech Privacy & Data Management, Perkins Coie LLP

1:15-2:15 pm ET | You Can't Make This Stuff Up: AI and Robotics in Manufacturing

AI holds the promise of revolutionizing manufacturing processes by ferreting out manufacturing defects, predicting product, machine, and system failures, improving and speeding product design, modeling product behavior, managing supply chains, and improving operational efficiency and effectiveness. For decades, industrial robots have made the factory floor safer and more efficient. This panel will cover occupational safety and compliance, procurement, risk management, data protection, and governance of AI systems and robots used in manufacturing.

Speakers:

Hogene L. Choi, Partner & Practice Group Co-Chair, Patent Prosecution, Baker Botts [Moderator]
Jeffrey Jones, Partner, Jones Day
Manish Mehta, Director of Open Innovation, Stanley Black & Decker
Christopher Lubeck, Senior Director, Head of Patents and Open Source, ServiceNow  

2:30-3:30 pm ET | Show Me the Money: Insuring Robots and AI Systems

Accidents, biased outcomes, and data breaches will occur hand-in-hand with the deployment of AI systems and robots. Insurance will play a vital role in managing the risks associated with developing, selling, purchasing, and operating AI systems and robots. This panel will explore the insurance industry's current position on covering AI and robotics suppliers and operators, the nature of available coverage, the use of captive insurance companies, common coverage issues, and tips for obtaining effective coverage and managing the underwriting process so businesses can secure the insurance they need.

Speakers:

Laura Foggan, Partner, Crowell & Moring LLP [Moderator]
John Buchanan, Senior Counsel, Covington & Burling LLP
Kevin P. Kalinich, Global Collaboration Leader, Intangible Assets, Aon Commercial Risk Solutions

Wednesday, October 21, 2020 | Day 3

12:00-1:00 pm ET | On Board with AI: Corporate Governance and AI Management

How should corporate boards handle the development and deployment of AI and robotics? What liabilities do directors and officers face? What policy and procedure documentation tools can management use to govern the development, sale, procurement, and operation of AI systems and robots? Just as every major company has a privacy policy and many are undertaking data protection impact assessments, future AI governance could include AI impact assessments and operational policies. This panel will cover the challenges of corporate governance in the AI era and provide tips for managing officer and director liability.

Speakers:

Cynthia Cwik, Legal Advisor, The Cantellus Group; Former Fellow, Stanford Distinguished Careers Institute; Former Partner, Jones Day; Past Chair, ABA Section of Science & Technology Law [Moderator]
Dan Siciliano, Law Science & Technology Fellow, Stanford University (CodeX); Chair, Federal Home Loan Bank of San Francisco
Christopher Savoie, CEO and Founder, Zapata Computing Inc.
Richard J. Johnson, Partner, Jones Day
 

1:15-2:15 pm ET | National Security S.H.I.E.L.D.: Protecting AI and Robots

U.S. national security will depend heavily on remaining competitive in AI and robotics. New export control regulations on the horizon could limit the ability to transfer AI and robotics technologies overseas or even to disclose these technologies to foreign persons within the U.S. Moreover, enhanced rules for the Committee on Foreign Investment in the U.S. (CFIUS) may restrict foreign acquisition and other corporate transactional activities regarding domestic businesses in this area. At the same time, the U.S. Department of Defense must acquire new AI and robotics defense technologies driven by lightning-speed innovation in the private sector. This panel will cover legal issues involved with keeping the nation safe in the AI era. 

NOTE: This session will NOT be recorded and will only be available during the LIVE Broadcast.

Speakers:

Roland L. Trope, Partner, Trope and Schramm LLP; Adjunct Professor, Departments of Law and of Electrical Engineering and Computer Science, U.S. Military Academy at West Point [Moderator]
Ama Adams, Partner, Ropes & Gray LLP
Guest Speakers: Two speakers from law enforcement
 

2:30-3:30 pm ET | Unsafe at Any Speed? Clearing the Standard of Care Bar for AI Systems and Robots

When it comes time to deploy new AI and robotics systems, manufacturers and buyers want safety. But how safe is safe enough? Some companies push out products and services aggressively, arguing that they save lives, reduce greenhouse gases, and so on. How should a business determine how much to spend on safety programs to make a product safe and avoid product liability? How can it avoid going broke on safety spending while not incurring potentially company-ending liability? Using examples such as autonomous vehicles, this panel will examine where the law should set the standard of care (e.g., at the human level or perhaps higher) and how sellers and buyers of these technologies can mitigate their product liability risks.

Speakers:

Tonya Newman, Partner, Neal Gerber Eisenberg; Co-Chair, Product Liability Committee, ABA Section of Litigation [Moderator]
Jeffrey Gurney, Associate, Nelson Mullins; Author, Automated Vehicle Law (2020)
Sven Beiker, Managing Director, Silicon Valley Mobility; Lecturer in Management, Stanford Graduate School of Business; formerly Executive Director, Stanford Center for Automotive Research
 

Wednesday, October 28, 2020 | Day 4

12:00-12:20 pm ET | Keynote Address

Keynote:
David Engstrom, Professor of Law and Associate Dean for Strategic Initiatives, Stanford Law School

12:30-1:30 pm ET | Ethics, Meet Ethics: Do Attorneys Have AI-Related Ethical Duties Beyond the Rules of Professional Responsibility?

Today's continuing legal education ethics courses focus on the text of rules of professional conduct, whether the ABA Model Rules or state rules. Yet attorneys spend little time analyzing ethics in the sense of moral philosophy. It's almost as if the two kinds of ethics have nothing to do with each other. What are attorneys' ethical responsibilities in the era of AI beyond the text of the rules? How can attorneys be sure what their ethical duties are when facing unprecedented new technologies? AI and robotics will serve as a case study of ethical duties against a backdrop of professional rules crafted for lawyers at a distant time and place. (This program will offer 1 hour of ethics/professional responsibility credit.)

Speakers:

Huu Nguyen, Partner, Squire Patton Boggs [Moderator]
Nicholas G. Evans, Assistant Professor of Philosophy, University of Massachusetts Lowell
Irina Raicu, Director, Internet Ethics Program, Markkula Center for Applied Ethics, Santa Clara University
John Steele, Attorney at Law, JohnSteeleLaw
 

1:15-2:15 pm ET | It's All in Your Head: Legal and Policy Issues with Brain-Computer Interfaces and Neural Devices

Scientists are now exploring the use of brain and neural implants to help patients with disabilities. Paralyzed patients can move a cursor on a computer, type, and play computer games with their thoughts alone. Newer experiments show the possibility of mute patients communicating with a "voice prosthesis" using their thoughts. Over time, people will want to enhance their cognition and memory with information technology. What are the legal, ethical, and security issues associated with brain-computer interfaces? What impact do neural devices have on legal issues such as intent and criminal responsibility?

Speakers:

Eric Y. Drogin, Harvard University; Chair, ABA Section of Science & Technology Law [Moderator]
Alex Feerst, General Counsel, Neuralink Corporation
Andrea Matwyshyn, Associate Dean for Innovation and Technology, Professor of Law and Engineering Policy, Penn State University
Keith Abney, Lecturer, California Polytechnic State University; Co-Author, Robot Ethics 2.0.
 

2:30-3:30 pm ET | The Future of Legal Personhood for Superintelligent AI Systems

Some futurists believe that a day could come, perhaps within our lifetimes, when we will have artificial general intelligence or superintelligent AI systems whose capabilities exceed those of human beings. If machines become as intelligent as or more intelligent than humans, will some AI systems gain the legal status of "persons" under the law? What would it take for policymakers to know the time is right for personhood status? This panel will explore the debate over legal personhood for AI systems, covering both contemporary examples of personhood and a roadmap to possible future personhood.

Speakers:

Stephen S. Wu, Shareholder, Silicon Valley Law Group; Past Chair, ABA Science & Technology Law Section [Moderator]
Don Howard, Professor of Philosophy, University of Notre Dame
John Weaver, Associate, McLane Middleton; Author, Robots are People Too: How Siri, Google Car, and Artificial Intelligence Will Force Us to Change Our Laws (2014)
Animashree (Anima) Anandkumar, Bren Professor of Computing, California Institute of Technology; Director of Machine Learning Research, NVIDIA Corporation
