The deployment of artificial intelligence (AI) technologies within the public and private sectors has shifted how organizations and governments manage risk. AI’s increasing adoption stems from the promise of greater efficiency, productivity, and innovation. However, alongside its transformative potential, the use of AI creates risks that can negatively affect the rights of individuals and carry consequences for society at large. AI-based automated decision-making tools (ADMT) in particular can have profound and far-reaching ramifications when the technology serves as a gatekeeper to important services, such as education, employment, and healthcare.
As a result, the need for structured and meaningful AI impact and risk assessments is evident. By systematically evaluating the potential consequences and vulnerabilities associated with AI implementation, these assessments serve as crucial risk-mitigation tools in the design, development, and procurement processes. They not only facilitate informed decisions about which AI tools an organization should use but also enable organizations to proactively address ethical, legal, and operational challenges, fostering responsible AI development and deployment. Because the use of AI can affect both internal stakeholders (e.g., employees) and external stakeholders (e.g., customers and regulators), these assessments are essential for organizations to engage in.
Impact assessments and risk assessments are both essential to the design, development, and deployment of AI technologies within an entity, but they serve different purposes. An AI impact assessment evaluates the potential effects of an AI technology on individuals, communities, and the environment. It examines both positive and negative outcomes, focusing on the social, economic, and environmental benefits and consequences of the technology. An AI risk assessment identifies, analyzes, and prioritizes potential risks associated with the AI technology, including technical, operational, compliance, and reputational risks. Both types of assessments share the same goal: anticipating and mitigating negative outcomes resulting from the deployment of AI technologies.
Both assessments should be conducted before implementing an AI technology licensed from a third-party provider, because they help organizations prepare for issues that may arise after the technology is integrated and used. AI impact and risk assessments complement each other and can be conducted alongside assessments an organization already performs, such as data privacy impact assessments, rather than requiring an entirely new process.
The objective of AI impact and risk assessments is not the elimination of risk; no assessment can eliminate all risk. Instead, the goal is to determine whether an AI tool should be adopted and, if so, how the tool should be used, adapted, and managed. The assessments can also be used to identify high-risk systems (a legal term of art under the EU Artificial Intelligence Act and the Colorado Artificial Intelligence Act) and systems that could present unacceptable risks. For example, an assessment might reveal that a system would negatively affect marginalized groups, leading to a decision either not to use the AI tool or to customize it to mitigate the potential harms.
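The tiered outcome described above can be sketched as a minimal decision aid. The tier names below are simplified labels loosely inspired by the EU AI Act's risk-based categories, and the mapping from tier to decision is purely an illustrative assumption, not a rule prescribed by either statute:

```python
from enum import Enum


class RiskTier(Enum):
    """Simplified risk tiers, loosely modeled on the EU AI Act's
    risk-based categories (illustrative labels only)."""
    MINIMAL = 1
    LIMITED = 2
    HIGH = 3
    UNACCEPTABLE = 4


def adoption_decision(tier: RiskTier) -> str:
    """Map an assessed risk tier to a hypothetical adoption decision.

    The decision wording is an assumption for illustration; a real
    assessment process would involve legal and governance review.
    """
    if tier is RiskTier.UNACCEPTABLE:
        return "do not adopt"
    if tier is RiskTier.HIGH:
        return "adopt only with mitigations and ongoing monitoring"
    # Minimal- and limited-risk tools proceed under routine oversight.
    return "adopt with standard oversight"
```

In this sketch, an assessment finding that a tool would harm marginalized groups might place it in the UNACCEPTABLE tier (rejection) or the HIGH tier (adoption conditioned on customization and monitoring), mirroring the two outcomes discussed above.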