Who Owns Model Risk in an AI World?

Complex computerized models and quantitative analyses are a mainstay of the financial services industry, from quantitative asset managers that use models to manage investment portfolios, to banks that use models to underwrite loans or to monitor for money laundering and other suspicious activity. With the benefits of those models come several forms of risk, generally lumped together as “model risk.”

Model risk generally refers to the potential for adverse consequences resulting from actions taken or decisions made based on incorrect or misused models or model outputs. It includes risks related to errors in the quantification, coding or calculation process; the use of improper or inaccurate data or other inputs; incorrect or inaccurate model design; and the misuse or misapplication of models or model outputs. (The definition of a model “error” or “defect” is itself a subject of substantial debate and often depends on the purpose and context of the model’s use. As noted in the article, whether a design decision rises to the level of a “defect” will likely depend on the context in which the model is used, the model limitations disclosed to users, and the language of any agreement between the parties.)

The risk of such model errors is not theoretical. Over the past several years, model errors have led to Securities and Exchange Commission (SEC) enforcement actions, litigation and adverse headlines. For example, the SEC disciplined a quantitative investment adviser after an error in the investment model’s computer code eliminated one of the model’s risk controls, and the error was concealed from advisory clients.

Similarly, where a robo-adviser advertised that its algorithms would monitor for wash sales but failed to do so accurately in 31 percent of the enrolled accounts, the SEC found that the adviser had made false statements to its clients. Mortgage lenders have been accused of incorrectly denying loan modifications because of computer errors, and banks have suffered anti-money laundering compliance failures caused by coding errors. As banks, asset managers and other financial services firms begin to deploy artificial intelligence and machine learning in areas such as credit risk scoring, fraud detection, robo-advisory services, algorithmic trading and insurance underwriting, the potential model risks and related consequences only increase.

Based on guidance from the Federal Reserve, the FDIC and other regulators, financial services firms have generally developed tools to identify, measure and manage those model risks. But that guidance predates the AI renaissance. With the advance of big data, artificial intelligence and machine learning, potential model risks grow, and the controls needed to manage those risks and to comply with regulatory and contractual obligations deserve additional attention.

For example, under the Federal Reserve’s guidance on model risk management, the guiding principle of model risk management is effective challenge to the model, which requires critical analysis by objective, informed parties who can identify model limitations and implement appropriate changes. Such effective challenge would include (among many other items) testing the theory and logic underlying the model design, validating the model as well as the integrity of the data it uses, testing the model’s performance over a range of inputs, and implementing a governance framework that permits independent review and assessment.
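To make the idea of testing a model over a range of inputs concrete, the minimal Python sketch below sweeps a hypothetical scoring model across a grid of inputs and asserts that one of its stated risk controls (a score cap) actually holds. The model, the cap and the input ranges are invented for illustration only; they are not drawn from the regulatory guidance or from any firm’s actual code.

```python
# Hypothetical sketch: sweeping a toy credit-scoring model across a range
# of inputs to confirm that a stated risk control (a score cap) is enforced.
# All names, formulas and thresholds here are illustrative assumptions.

def score_applicant(income: float, debt: float) -> float:
    """Toy scoring model: higher income and lower debt raise the score."""
    raw = 300 + 0.005 * income - 0.01 * debt
    return min(raw, 850.0)  # risk control: scores are capped at 850


def test_score_cap() -> None:
    """Challenge the model over a grid of inputs, not a single case."""
    for income in range(0, 1_000_001, 50_000):
        for debt in range(0, 500_001, 25_000):
            score = score_applicant(float(income), float(debt))
            assert score <= 850.0, (
                f"Risk control breached: income={income}, "
                f"debt={debt}, score={score}"
            )
    print("Score cap held across all tested input combinations.")


if __name__ == "__main__":
    test_score_cap()
```

With a deterministic, hand-coded model like this one, such an exhaustive sweep is tractable; as the next paragraph explains, that kind of review becomes far harder once a model’s behavior is learned from data rather than written out explicitly.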

But in an AI world, where models work by identifying patterns in large data sets and making decisions based on those patterns, replicating the model’s output (let alone reviewing its performance across a range of inputs) becomes far more difficult. Further, when AI models apply machine learning to very large data sets, often drawn from multiple sources, validating the integrity of that data becomes exponentially more challenging. And where model output is generated in a black box, the ability of independent reviewers to effectively challenge any given output becomes substantially more limited.

From a risk-management and liability perspective, the questions that financial services firms should consider include, among others: How will a court determine (1) whether there were any defects in the model’s design, inputs or outputs; (2) whether any defect caused a given adverse decision; (3) which party (the model developer or licensor, the model user or licensee, or the financial institution’s customer) assumed the risk of the error or defect; and (4) the amount of any damages? Courts and participants in the financial services industry will confront these questions in the coming years.

Read the full bylined article here.


Mayer Brown partners Reginald R. Goeke, David L. Beam, Leslie S. Cruz, Alex C. Lakatos and Brad L. Peterson highlight areas of interest in this article.
