It is paramount for institutions not only to understand AI-related risks, but also to develop appropriate governance, including processes and controls designed to identify and manage those risks and to address adverse effects when they occur.
Dedicating sufficient resources to design, test, and manage AI and machine learning capabilities is table stakes. This requires a top-down approach, starting with the board of directors and senior management, to ensure that investments commensurate with the organization's size and complexity are made both in AI and machine learning capabilities and in risk management. Independent risk management functions (the second line of defense) and internal audit (the third line of defense) will become increasingly important.
Regulatory AI strategy continues to take shape, though it still lags industry use. In March 2021, the federal banking agencies issued a Request for Information seeking to better understand how AI and machine learning are used in the financial services industry and to identify areas where additional "clarification" could be beneficial. The Federal Trade Commission (FTC) has also issued guidance on the use of AI tools, including an April 2020 blog post in which the FTC highlighted the importance of AI transparency. More recently, in October 2021, the White House Office of Science and Technology Policy announced the development of an "AI bill of rights," designed to protect consumers from harmful AI consequences. In light of this growing regulatory focus, enterprises should understand the risks of using AI and ensure appropriate governance is in place.
Incorporating alternative data across the product life cycle continues to provide important insight into identity, authentication, financial behavior, and collections potential. Although AI holds incredible promise, especially in financial services, it also carries risks that each firm must be aware of and account for. These risks are both surmountable and worth the effort to address. As regulatory scrutiny of AI intensifies, those building the algorithms, along with those governing their use, must sharpen the pencil to prevent blind-spot results and promote equitable outcomes.