Model Risk Management

Kumo features can help you with Model Risk Management

Kumo and MRM

Model Risk Management (MRM) refers to the body of regulation and supervisory guidance in the financial industry that governs the design and deployment of machine learning models, such as SR 11-7 (USA), SS3/18 (UK), IFRS 9, and others.

While MRM provides many benefits to society by ensuring that ML is used safely and fairly, it is often seen as a burden by data scientists, who must go through a heavyweight audit and review process for every new model they develop.

Kumo helps data scientists save time during the MRM audit process. All information that is typically needed for the MRM audit is available in the Kumo UI, API, or public documentation.

In this datasheet, we highlight the set of capabilities within Kumo that you would use to satisfy a typical MRM process, such as the one outlined in the Comptroller’s Handbook for Model Risk Management, published by the US Office of the Comptroller of the Currency (OCC). The sections in this datasheet correspond roughly to the table of contents of the Comptroller’s Handbook.

Governance

The Governance section of the Comptroller’s Handbook describes the organizational structure, tooling, and people-processes typically required to implement risk management within a large organization. We recommend using a separate compliance platform to run your MRM governance program as a whole. As described in subsequent sections, Kumo provides documentation, validation, and IT controls so that you can efficiently audit and safely deploy all models built with Kumo.

Model Development and Implementation

The first thing a data scientist does during MRM is to prove that “the design, theory, and logic underlying the model” are sound and appropriate for the business problem.

In practice, this involves documenting the rationale for all components of model design, such as: (1) the reasons for using specific tables/columns as inputs to the model, along with any pre-processing of these features; and (2) the choice of final model architecture, including the tests that led to its selection.

Kumo has several capabilities that simplify this documentation process, such as:

  • Model Architecture Export: Ability to view the final model architecture in a human-readable format, including the specific GNN architecture selected by AutoML and the encoding (one-hot, scaling, null-value handling, etc.) applied to every column in the input data.

  • Data Quality Checks: Distribution statistics, graphs, and automatic data-quality checks for each input column, enabling a human reviewer to confirm that the data matches business expectations and is sufficiently trustworthy and suitable.

  • Benchmarks: Academic research papers benchmarking Kumo’s GNN model architecture against leading alternatives, demonstrating the theoretical soundness of its approach to tabular machine learning.

  • Architecture Search History: When using AutoML, the architecture and hyperparameter search history is recorded, along with the performance of each combination, helping the data scientist document why the chosen architecture is sound (see the sketch below).
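
As an illustration of how the exported architecture and search history might be pulled into an audit file, here is a minimal sketch. The base URL, endpoint paths, and response fields are hypothetical placeholders rather than Kumo’s documented REST API; substitute the real calls from the API reference.

```python
# Minimal sketch: collect a model's exported architecture and AutoML search
# history for the MRM audit file. The base URL, endpoints, and field names
# below are hypothetical placeholders, not Kumo's documented API.
import json

import requests

API_URL = "https://example.kumo.ai/api"          # placeholder base URL
HEADERS = {"Authorization": "Bearer <API_KEY>"}  # placeholder credentials
MODEL_ID = "model-123"                           # placeholder model ID

# Export the final, human-readable architecture (GNN layers, column encodings).
arch = requests.get(f"{API_URL}/models/{MODEL_ID}/architecture", headers=HEADERS).json()
print(json.dumps(arch, indent=2))

# Record each AutoML trial's hyperparameters and validation performance.
history = requests.get(f"{API_URL}/models/{MODEL_ID}/search_history", headers=HEADERS).json()
for trial in history:
    print(trial["params"], trial["validation_metric"])
```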

Model Use

In the Comptroller’s Handbook, the “Model Use” section is focused on understanding any additional risks and limitations related to the model’s use in production, such as any post-processing or overlays applied on the predictions, or whether the assumptions made by the model reflect reality.

Most of this does not directly relate to the ML model itself, but there are a few areas where Kumo can help:

  • Sensitivity Analysis: A custom-generated scoring table can be provided to Kumo at batch prediction time, enabling easy sensitivity analysis (see the sketch after this list).

  • Prediction Explainability: Kumo can generate human-interpretable explanations of individual predictions. This enables users of the predictions to sanity-check the reasoning behind each prediction rather than blindly trusting the predicted score.
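
To make the sensitivity-analysis workflow concrete, the sketch below perturbs one column of a scoring table and compares the two resulting sets of predictions. The column name and the `run_batch_prediction` helper are hypothetical; wire the helper to however you submit custom scoring tables to Kumo.

```python
# Sketch of a one-factor sensitivity analysis via a custom scoring table.
import pandas as pd

def run_batch_prediction(scoring_table: pd.DataFrame) -> pd.DataFrame:
    """Placeholder: submit `scoring_table` to Kumo for batch prediction and
    return the predictions as a DataFrame with a `score` column."""
    raise NotImplementedError

base = pd.read_parquet("scoring_table.parquet")

# Perturb a single input column (hypothetical name) by +10%.
perturbed = base.copy()
perturbed["account_balance"] *= 1.10

# Score both tables and summarize how far the predictions move.
delta = run_batch_prediction(perturbed)["score"] - run_batch_prediction(base)["score"]
print(delta.describe())
```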

Model Validation

A sound model validation process ensures that a model performs as expected, covering its input data, processing, and reporting. This typically starts with a rigorous evaluation of model correctness prior to the deployment of the first version, including an outcomes analysis by backtesting on a holdout dataset. After the model is in production, ongoing validation is needed to ensure continued correctness of the model predictions, a practice often known as MLOps.

Kumo offers many capabilities around model evaluation, including:

  • Learning Curves and Distributions: Enable monitoring of convergence rates (to detect under- or overfitting) and of whether the distribution of training data remains well-balanced over time.

  • Backtesting on Holdout: All models are back-tested on a configurable holdout dataset. Users may download this holdout for custom analysis.

  • Standard Eval Metrics and Charts: Including ROC and PRC curves, cumulative gain chart, AUROC, AUPRC, average precision, predicted-vs-actual scatter plot and histogram, MAE, MSE, RMSE, SMAPE, per-category recall, F1, and MAP (a sketch of recomputing some of these on the downloaded holdout follows this list).

  • Baseline Comparison: Models are benchmarked against an automatically generated analytic baseline.

  • Column Explainability: A visualization similar to Partial Dependence Plots, highlighting which columns have the greatest predictive power. This helps verify that the model has no data leakage.
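
Because the holdout set is downloadable, several of the standard metrics can also be recomputed independently as a validation cross-check. Here is a minimal sketch using scikit-learn; the file name and column names are assumptions about the export format, not a documented schema.

```python
# Independently recompute two standard metrics on the downloaded holdout.
import pandas as pd
from sklearn.metrics import average_precision_score, roc_auc_score

# Assumed export format: one row per holdout example, with the true label
# and the predicted score (both column names are hypothetical).
holdout = pd.read_parquet("holdout_predictions.parquet")

auroc = roc_auc_score(holdout["y_true"], holdout["y_score"])
ap = average_precision_score(holdout["y_true"], holdout["y_score"])  # approximates AUPRC
print(f"AUROC={auroc:.4f}  average precision={ap:.4f}")
```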

To support ongoing validation of model correctness, Kumo has the following features related to MLOps:

  • Data Source Snapshotting: During each job, data source statistics are snapshotted (including size, time range, and import time), enabling faster root cause analysis.

  • Drift Detection: Distributions of features and predictions are recorded and monitored over time. This enables early detection of issues, preventing bad predictions from being published to production.

  • Champion/Challenger: If orchestrating automatic retraining through the REST API, a champion/challenger approach can be adopted to validate the key metrics of the newly retrained model before promotion (sketched below).
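
A minimal sketch of such a champion/challenger gate is shown below. The endpoints, metric name, and tolerance are hypothetical placeholders; substitute the real REST API calls and whichever key metric your validation policy tracks.

```python
# Hypothetical champion/challenger gate inside a retraining pipeline.
# Endpoints and field names are illustrative placeholders, not Kumo's API.
import requests

API_URL = "https://example.kumo.ai/api"          # placeholder base URL
HEADERS = {"Authorization": "Bearer <API_KEY>"}  # placeholder credentials

def holdout_auroc(job_id: str) -> float:
    """Fetch the holdout AUROC of a training job (placeholder endpoint)."""
    metrics = requests.get(f"{API_URL}/jobs/{job_id}/metrics", headers=HEADERS).json()
    return metrics["auroc"]

champion_job = "job-champion"      # training job behind the deployed model
challenger_job = "job-challenger"  # newly retrained training job

# Promote the challenger only if it does not degrade the key metric by more
# than a small tolerance; otherwise keep serving the champion.
if holdout_auroc(challenger_job) >= holdout_auroc(champion_job) - 0.005:
    print("Promote challenger to production.")
else:
    print("Keep champion; challenger failed validation.")
```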

Third Party Risk Management

When engaging a third-party vendor for modeling, additional restrictions may apply. Most importantly, banks are advised to keep as much knowledge in-house as possible, in case either the vendor or the bank terminates the contract for any reason.

Because Kumo is a platform that lets organizations train their own models, all of the knowledge of how to build your specific model, as well as the data used to train it, remains under the organization’s control. As such, satisfying this aspect of the MRM requirements is typically much easier than with a vendor that sells a specific model or dataset as a service.

IT Systems

MRM typically requires that the IT systems used to train and serve models meet your organization’s needs around availability and security.

Kumo relies on industry-standard information security best practices and compliance frameworks, such as NIST 800-53 and the ISO 27000 series. It is SOC 2 Type 2 compliant and provides several standard deployment offerings to meet the needs of various security teams; custom offerings are available upon request:

  • SaaS: Kumo manages the cloud infrastructure (GPU compute, the data processing platform, and a temporary data cache for model training). All data in flight and at rest is encrypted and protected according to the principle of least access.

  • Snowflake Native: The Kumo Application can run as a Native App within an organization’s Snowflake account. In this deployment, all data and compute remain within the organization’s Snowflake account and VPC, analogous to an on-prem deployment.

  • Databricks Native: The Kumo control plane and GPU compute are served from a Kumo-managed environment, while all data processing and storage are pushed down to the organization’s Databricks account and VPC. No raw data is persisted outside of the Databricks environment.