Explainability

Solving the black box problem with Kumo explainable AI (XAI)

Explaining How Predictive Queries Work

While sophisticated algorithms have demonstrated remarkable performance in various prediction tasks, their inherent opacity poses challenges in understanding why a particular decision was made. Explainable AI (XAI) aims to shed light on the "black box" nature of machine learning models, making their reasoning more transparent and interpretable.

Kumo leverages advanced graph neural networks (GNNs) to make predictions, but you don't have to treat them as a black box. Kumo's platform provides several XAI mechanisms for explaining why a particular prediction was made, as well as for detecting potential issues like data leakage, bias, and model performance degradation. With these XAI tools, you can place greater trust in your predictions, spot data quality problems, and troubleshoot your predictive queries.

Kumo's XAI metrics show how the tables in your graph, the columns within those tables, and the ranges of values in those columns each contribute to your predictive query's behavior. These state-of-the-art XAI tools let you understand, and explain to stakeholders, how your predictive query arrives at its predictions, down to the level of individual column values.
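To make the idea of column-level contribution concrete, here is a minimal, self-contained sketch using permutation importance on a toy tabular model. This is an illustration of the general technique, not Kumo's API or its GNN-based explainers; the column names and model are hypothetical stand-ins.

```python
# Illustrative sketch only: shows how column-level importance can be
# estimated by permuting one column at a time and measuring the drop in
# held-out score. Kumo computes its own XAI metrics internally over the
# graph; the names below are toy placeholders, not Kumo identifiers.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Toy stand-ins for columns that might live in one table of a graph.
X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
columns = ["age", "tenure_days", "order_count", "avg_spend", "region_code"]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# The larger the score drop when a column is shuffled, the more that
# column drives the model's predictions.
result = permutation_importance(
    model, X_test, y_test, n_repeats=10, random_state=0
)
for name, score in sorted(
    zip(columns, result.importances_mean), key=lambda p: -p[1]
):
    print(f"{name:>12}: {score:.3f}")
```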

XAI by Task Type

Kumo provides the following XAI mechanisms for each task type: