
Business Law Today

June 2022

AI Through the Product Life Cycle: Rise of the Machines

Sumeet Chugani and Leah Eberle


  • Algorithms, including AI and machine learning models, drive customer behavior. AI and machine learning hold much promise, especially in financial services, but also involve risks.
  • Data output is only as good as the input framework. Because models use historical data and lack judgment, AI and machine learning could amplify previously biased decision-making and negatively affect certain communities, particularly in credit underwriting.
  • Entities must dedicate sufficient resources to design, test, and manage AI and machine learning. This requires a top-down approach.
  • Regulatory AI use lags behind industry use but is ramping up, with recent interest from the federal banking agencies and input from the FTC and the White House Office of Science and Technology Policy.

The modern enterprise has access to large amounts of user data collected through numerous user mediums, databases, products, and servicing efforts. This vast data tranche is only as valuable as its use and application. Data can undoubtedly help engage more customers (or the right customers) with products tailored to their needs, but it can also mitigate risks, including fraud loss, off-target marketing, and personal bias in the underwriting process.

Enter algorithms, including artificial intelligence (AI) and machine learning models, which increasingly help drive customer behavior. AI involves borrowing characteristics from human intelligence and applying them as algorithms in an automated fashion. Machine learning, a subset of AI, involves algorithms designed to identify patterns in data and create rules that improve over time with continued experience.
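To make the definition above concrete, here is a minimal sketch of machine learning in the sense the article describes: an algorithm that extracts a decision rule from example data and improves that rule with repeated exposure. The perceptron, training data, and parameters below are illustrative assumptions, not anything drawn from the article or from any real lending system.

```python
# A perceptron "learns" a linear decision rule from labeled examples,
# adjusting its weights only when its current rule makes a mistake --
# so the rule improves with continued experience.

def train_perceptron(examples, passes=10, lr=0.1):
    """Learn weights (w, b) for a two-feature linear rule."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(passes):
        for x, label in examples:
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = label - pred  # nonzero only when the rule is wrong
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

def predict(w, b, x):
    """Apply the learned rule to a new input."""
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# Toy historical data: approve (1) only when both signals are present.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
```

No rule is ever written by hand here; the "rule" is just the weights, and it emerges entirely from the training data, which is why the quality of that data matters so much in what follows.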

Tech and financial services firms have only scratched the surface of the benefits that AI and machine learning can bring to the industry. However, as AI further develops, entities must exercise caution, as data output is only as good as the input framework, and appropriate guardrails must be put in place.

Models are trained on historical data and may lack the ability to exercise judgment or apply context in the environments where they are deployed. In some cases, it may not be possible to build a model intelligent enough to handle every scenario and data point. Despite good intentions, models that use AI and machine learning risk amplifying previously biased decision-making, which can have disproportionately negative effects on certain communities, particularly in credit underwriting. Complex relationships between seemingly unrelated variables, and the potential failure to understand a model's conclusions, compound this risk.
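The bias-amplification mechanism described above can be sketched in a few lines. In this invented example, historical underwriting decisions denied qualified applicants in one neighborhood; a naive model trained on those labels learns the neighborhood as a proxy and reproduces the disparity for two otherwise identical applicants. All records, names, and thresholds here are hypothetical and exist only to illustrate the risk.

```python
# Hypothetical illustration: a rule learner trained on biased historical
# approvals picks up a neighborhood proxy and perpetuates the disparity.

from collections import defaultdict

# Historical decisions: (neighborhood, income, approved). Applicants in
# neighborhood "B" were historically denied even at qualifying incomes.
history = [
    ("A", 60, 1), ("A", 55, 1), ("A", 40, 0), ("A", 70, 1),
    ("B", 60, 0), ("B", 55, 0), ("B", 40, 0), ("B", 70, 1),
]

def learn_approval_rates(records):
    """Learn per-neighborhood approval rates from historical labels."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for hood, _income, approved in records:
        totals[hood] += 1
        approvals[hood] += approved
    return {h: approvals[h] / totals[h] for h in totals}

def model_decision(hood, income, rates, threshold=0.5):
    """Approve only if the neighborhood's historical approval rate clears
    a bar AND income qualifies -- the proxy gates the outcome."""
    return 1 if rates[hood] >= threshold and income >= 50 else 0

rates = learn_approval_rates(history)
# Two applicants identical except for neighborhood:
# model_decision("A", 60, rates) -> 1 (approved)
# model_decision("B", 60, rates) -> 0 (denied)
```

Nothing in the model references a protected class directly; the harm flows entirely from the historical labels and the correlated proxy variable, which is why governance must look beyond the model's explicit inputs.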

It is paramount for institutions not only to understand AI-related risks, but also to develop appropriate governance, including processes and controls designed to effectively identify and manage risks and address adverse effects.

Dedicating sufficient resources to design, test, and manage AI and machine learning capabilities is table stakes. This requires a top-down approach, starting with the board of directors and senior management to ensure appropriate investments, commensurate with the size and complexity of the organization, are being made in capabilities that support AI and machine learning as well as risk management. Independent risk management functions (the second line of defense) and internal audit (the third line of defense) will be increasingly important.

Regulatory AI strategy continues to take shape, while still lagging behind industry use. In March 2021, the federal banking agencies issued a Request for Information seeking to better understand how AI and machine learning are used in the financial services industry and to identify areas where additional “clarification” could be beneficial. The Federal Trade Commission (FTC) has also issued guidance on the use of AI tools, such as in an April 2020 blog post, in which the FTC highlighted the importance of AI transparency. More recently, in October 2021, the White House Office of Science and Technology Policy announced the development of an “AI bill of rights,” designed to protect consumers from harmful AI consequences. In light of the growing regulatory focus, enterprises should understand the risks around using AI and ensure appropriate governance.

Incorporating alternative data at each stage of the product life cycle continues to provide important insight into identity, authentication, financial behavior, and collections potential. Although AI holds incredible promise, especially in financial services, each firm must recognize and account for its risks. Those risks are surmountable, and managing them is worth the effort. As regulators' eyes remain fixed on AI, those building the algorithms, along with those governing their use, must sharpen their pencils to prevent blind spots and promote equitable outcomes.