
AI Impact and Risk Assessments: Keys to Smarter AI Deployment

Jenni Oprosko and Matthew D Kohel

Summary

  • AI has the potential to boost efficiency and spark innovation, but it also creates risks that can affect individual rights and society at large.
  • When evaluating how AI tools work, especially in high-stakes fields like education and healthcare, it is crucial to weigh both the benefits and the harms.
  • By pinpointing possible risks, organizations can make informed choices about using AI and address the ethical, legal, and operational issues that may arise.

The deployment of artificial intelligence (AI) technologies within the public and private sectors has shifted how organizations and governments manage risk. AI’s increasing adoption stems from the promise of increased efficiency, productivity, and innovation. Alongside its transformative potential, however, the use of AI creates risks that can negatively affect the rights of individuals and carry consequences for society at large. AI-based automated decision-making tools (ADMT) in particular can have profound and far-reaching ramifications when the technology is used as a gatekeeper to important services, such as education, employment, and healthcare.

As a result, the need for structured and meaningful AI impact and risk assessments is evident. By systematically evaluating the potential consequences and vulnerabilities associated with AI implementation, these assessments serve as crucial risk-mitigation tools in the design, development, and procurement processes. They not only facilitate informed decisions about which AI tools an organization should use, but also enable organizations to proactively address ethical, legal, and operational challenges, thus fostering responsible AI development and deployment. Use of AI can affect both internal stakeholders (e.g., employees) and external stakeholders (e.g., customers and regulators), making these assessments critically important for organizations to undertake.

Impact assessments and risk assessments are both essential to the design, development, and deployment of AI technologies within an entity, and they serve different purposes. An AI impact assessment evaluates the potential effects of an AI technology on individuals, communities, and the environment. It examines both positive and negative outcomes, focusing on the social, economic, and environmental benefits and consequences of the technology. An AI risk assessment identifies, analyzes, and prioritizes potential risks associated with the AI technology, including technical, operational, compliance, and reputational risks. Both types of assessments share the same goal: anticipating and mitigating negative outcomes resulting from the deployment of AI technologies.

Both assessments should be conducted before implementing an AI technology licensed from a third-party provider because they help organizations prepare to address issues that may arise after the AI technology is integrated and utilized. AI impact and risk assessments complement each other and can be conducted in conjunction with assessments an organization already performs, such as data privacy impact assessments, rather than requiring an entirely new process.

The objective of AI impact and risk assessments is not the elimination of risk, as it is impossible to eliminate all risk. Instead, the goal of the assessments is to determine whether an AI tool should be adopted and, if it is adopted, how the tool should be used, adapted, and managed. The assessments can also be used to identify high-risk systems (a legal term of art under the EU Artificial Intelligence Act and the Colorado Artificial Intelligence Act) and systems that could present unacceptable risks. For example, an assessment might identify a system that would negatively impact marginalized groups, leading to a decision not to use the AI tool, or to customize it to mitigate the potential harms.

The starting point for an AI impact and risk assessment should be for the organization to understand the technology and determine whether it is the right tool for the job. This should include understanding an AI system’s implications for the organization’s operations, strategy, and security. After the purpose has been defined, the organization should determine what criteria it wants to consider as part of the assessment. Below are example criteria for an AI impact and risk assessment; a brief code sketch after the list illustrates one way these criteria might be captured in machine-readable form:

Purpose

  • What purpose will the AI tool serve for the organization?
  • Does the AI tool actually achieve that purpose?
  • What business needs is the AI tool addressing?

Stakeholders

  • Who are the stakeholders for the AI tool?
  • What are the potential benefits and potential harms to each stakeholder?
  • What are their concerns, objections, and expectations regarding AI?

Risk Identification

  • What are the known, likely, and specific high risks that could occur?
  • Are there any critical risks, such as impacts on fundamental rights, personal safety, and physical, mental, and economic wellbeing?
  • Are there risks to the company’s technology and cybersecurity environment?
  • Should there be any restrictions on use of the AI tool?
  • Are there any potential sources of bias in the algorithms?
  • How will use of the tool impact existing job roles?

Decision Making

  • Will the system be used to make decisions?
  • Who will use the outputs of the system to make decisions?
  • How will decisions be made about using the system?
  • How much of an impact will the decisions have?
  • Will there be any human oversight over the decisions?

Data

  • Who will have access to the data added to the AI tool?
  • Who will own the data added to the AI tool?
  • What is the privacy policy of the AI tool?
  • What security measures does the AI tool have in place?
  • Does the tool collect and use personal data in the training and deployment phases?
  • What data minimization techniques have been deployed to eliminate, encrypt, or anonymize personal data? (One such technique, pseudonymization, is sketched after this list.)
  • Does the tool comply with applicable laws related to data privacy?
  • What data was used to train the system (consider scope, representativeness, and limitations of the training data)?
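
As a concrete illustration of the data minimization question above, the following is a minimal sketch, in Python, of pseudonymizing a direct identifier with a salted hash before a record is shared with an AI tool. The field names and record are hypothetical, and a real deployment would manage the salt as a protected secret and evaluate re-identification risk more broadly.

```python
import hashlib
import os

# Hypothetical example: replace a direct identifier (an email address)
# with a stable pseudonymous token before the record reaches an AI tool.
SALT = os.urandom(16)  # in practice, stored and controlled as a secret

def pseudonymize(identifier: str) -> str:
    """Return a stable token derived from the identifier and a secret salt."""
    return hashlib.sha256(SALT + identifier.encode("utf-8")).hexdigest()

record = {"email": "jane.doe@example.com", "exam_score": 88}  # made-up data
minimized = {
    "user_token": pseudonymize(record["email"]),
    "exam_score": record["exam_score"],
}
print(minimized)  # the personal identifier no longer appears in the payload
```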

Compliance

  • Does the AI tool comply with relevant data protection regulations and standards, such as the GDPR, HIPAA, and industry-specific requirements?

Financial

  • What are the financial implications of adoption of the AI tool?
  • What are the upfront investment costs and potential long-term savings or income opportunities?

Post-Deployment Monitoring

  • What will the oversight and control be over the system?
  • Who will need to be notified about use of the system (e.g., customers, regulatory bodies, or other third parties)?
  • Who is responsible for troubleshooting, managing, operating, overseeing, and controlling the system during and after deployment?
  • How often will monitoring occur?
  • How will the AI tool be evaluated post-deployment for bias, erroneous outputs, model drift, etc.?
  • What resources will be required for this process?
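
The criteria above can live in a spreadsheet or policy document, but some organizations may prefer to track them in machine-readable form so that responses can be aggregated and escalated consistently. The following is a minimal sketch of that idea in Python; the category names mirror the list above, while the scoring scale, the example tool, and the escalation threshold are illustrative assumptions, not a prescribed standard.

```python
from dataclasses import dataclass, field

@dataclass
class Criterion:
    question: str
    risk_score: int = 0  # assumed scale: 0 (no concern) to 5 (critical)
    notes: str = ""

@dataclass
class Assessment:
    tool_name: str
    categories: dict[str, list[Criterion]] = field(default_factory=dict)

    def max_score(self) -> int:
        """Highest risk score recorded across all criteria."""
        return max(
            (c.risk_score for cs in self.categories.values() for c in cs),
            default=0,
        )

# Example usage with two of the categories above; the tool and scores
# are hypothetical.
assessment = Assessment(
    tool_name="Example resume-screening tool",
    categories={
        "Purpose": [
            Criterion("What purpose will the AI tool serve?", risk_score=1),
        ],
        "Risk Identification": [
            Criterion(
                "Are there any potential sources of bias in the algorithms?",
                risk_score=4,
                notes="Training data skews toward a single region.",
            ),
        ],
    },
)

# Assumed decision rule: escalate any tool with a criterion scoring 4+.
if assessment.max_score() >= 4:
    print(f"{assessment.tool_name}: escalate for mitigation review")
```

Capturing assessments this way can make it easier to compare tools, track changes across assessment cycles, and feed results into the action plan described below.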

Once an organization has established the criteria for AI impact and risk assessments, it is essential to create an action plan aimed at maximizing the benefits and mitigating the risks associated with the AI system. Mitigation measures should be proportional to the potential harms identified through the assessment and may be necessary only if the system is found to reach a certain level of risk. Potential mitigation measures include human oversight of the AI system’s operation and outputs, engaging third parties to conduct external reviews, testing for and managing bias (a simple example of one such test follows), and ensuring that persons affected by an AI system are aware of its use.
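By way of illustration, one widely used bias test is the adverse (disparate) impact ratio, informed by the four-fifths rule from U.S. employment-selection guidance. The sketch below, with made-up groups and applicant counts, shows how the check works; the 0.8 threshold is a common screening convention, not a definitive legal test.

```python
# Made-up applicant counts for a hypothetical AI screening tool.
outcomes = {
    "group_a": {"selected": 60, "total": 100},
    "group_b": {"selected": 30, "total": 100},
}

# Selection rate per group: fraction of applicants the tool selected.
rates = {g: v["selected"] / v["total"] for g, v in outcomes.items()}
highest = max(rates.values())

for group, rate in rates.items():
    ratio = rate / highest  # adverse impact ratio vs. the top group
    # Under the four-fifths rule, a ratio below 0.8 is a common flag
    # for potential adverse impact warranting further review.
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: rate={rate:.2f}, impact ratio={ratio:.2f} ({flag})")
```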

AI impact and risk assessments are an iterative process that should be revisited periodically. They should be conducted after a potential AI use case is identified and again after implementation. Depending on the level of risk and legal requirements, AI impact and risk assessments may need to be performed annually and as circumstances dictate, for example, when a new risk is identified or during the due diligence phase of a merger or acquisition. Once the process is in place, the organization can use the same framework in the future, adjusting as necessary to ensure the assessment remains relevant, robust, and effective.
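
For the post-deployment monitoring criterion above, here is a minimal sketch of one common way to check for model drift: the Population Stability Index (PSI), which compares the distribution of a model input at training time against what the model sees in production. The simulated data and the 0.2 alert threshold are illustrative conventions, not requirements drawn from any particular law or framework.

```python
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between a baseline (training-time)
    distribution and the live distribution of the same feature."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_counts, _ = np.histogram(expected, bins=edges)
    a_counts, _ = np.histogram(actual, bins=edges)
    # Convert counts to proportions; a small epsilon avoids division by zero.
    eps = 1e-6
    e_pct = e_counts / e_counts.sum() + eps
    a_pct = a_counts / a_counts.sum() + eps
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

# Simulated data standing in for a model input feature.
rng = np.random.default_rng(0)
training_scores = rng.normal(0.0, 1.0, 10_000)  # distribution at training
live_scores = rng.normal(0.3, 1.2, 10_000)      # shifted live distribution

score = psi(training_scores, live_scores)
# A PSI above roughly 0.2 is a common rule of thumb for significant drift.
print(f"PSI = {score:.3f}" + ("  -> investigate drift" if score > 0.2 else ""))
```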

Overall, AI impact and risk assessments are essential for ensuring that AI technologies are deployed responsibly and with a clear understanding of their potential risks. By embedding these assessments into their decision-making processes, organizations can better navigate the complexities of AI implementation and safeguard both their stakeholders and society at large.
