This article was originally published at abajournal.com.
The House of Delegates adopted a resolution at the 2023 ABA Midyear Meeting on Monday that addresses how attorneys, regulators and other stakeholders should assess issues of accountability, transparency and traceability in artificial intelligence.
Resolution 604 calls on organizations that design, develop, deploy and use AI to follow these guidelines:
- Developers of AI should ensure their products, services, systems and capabilities are subject to human authority, oversight and control.
- Organizations should be accountable for consequences related to their use of AI, including any legally cognizable injury or harm caused by their actions, unless they have taken reasonable steps to prevent such harm or injury.
- Developers should ensure the transparency and traceability of their AI and protect related intellectual property by documenting key decisions made regarding the design and risk of data sets, procedures and outcomes underlying their AI.
The Cybersecurity Legal Task Force, which submitted the resolution, also urges Congress, federal executive agencies, and state legislatures and regulators to adhere to these guidelines in laws and standards associated with AI.
Lucy Thomson, a founding member of the Cybersecurity Legal Task Force, introduced the measure, saying that following its proposed guidelines “will enhance AI, reduce its inherent risks and facilitate the development and use of AI in a trustworthy and responsible manner.” She added that a broad group of AI experts across the association spent the past year developing the resolution.