Explaining a series of models by propagating Shapley values.

Nat Commun

Paul G. Allen School of Computer Science and Engineering, University of Washington, Seattle, USA.

Published: August 2022

Local feature attribution methods are increasingly used to explain complex machine learning models. However, current methods are limited because they are extremely expensive to compute or are not capable of explaining a distributed series of models where each model is owned by a separate institution. The latter is particularly important because it often arises in finance, where explanations are mandated. Here, we present Generalized DeepSHAP (G-DeepSHAP), a tractable method to propagate local feature attributions through complex series of models based on a connection to the Shapley value. We evaluate G-DeepSHAP across biological, health, and financial datasets to show that it provides equally salient explanations an order of magnitude faster than existing model-agnostic attribution techniques, and we demonstrate its use in an important distributed series-of-models setting.
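The core idea of propagating attributions through a series of models can be illustrated with a small, self-contained sketch. This is not the authors' released implementation of G-DeepSHAP; it is a minimal numpy example, under the assumption of an all-linear two-stage pipeline and a single baseline, showing how attributions computed for the downstream model can be passed back to the raw inputs with a rescale-style chain rule. All names below (W1, W2, stage1, stage2, phi_h, phi_x) are illustrative.

```python
# Minimal sketch (not the paper's released code): propagate attributions
# through a two-stage series of models with a rescale-style chain rule.
import numpy as np

rng = np.random.default_rng(0)

# Two "institutions": stage 1 maps raw features -> engineered features,
# stage 2 maps engineered features -> a score. Linear here so that exact
# Shapley values are known and the propagation can be checked by hand.
W1 = rng.normal(size=(4, 6))   # stage 1: 6 inputs -> 4 intermediate features
W2 = rng.normal(size=(1, 4))   # stage 2: 4 intermediate features -> 1 output


def stage1(x):
    return W1 @ x


def stage2(h):
    return W2 @ h


x = rng.normal(size=6)         # sample to explain
bx = np.zeros(6)               # baseline (reference) input

# Stage-2 attributions over intermediate features: for a linear model with a
# single baseline, the Shapley value of feature j is w_j * (h_j - b_j).
h, bh = stage1(x), stage1(bx)
phi_h = W2.flatten() * (h - bh)              # shape (4,)

# Rescale-style propagation: each intermediate attribution is split among the
# raw inputs in proportion to how much each input moved that intermediate
# feature away from its baseline value.
contrib = W1 * (x - bx)                      # contrib[j, i]: input i's push on h_j
ratios = contrib / np.where(h - bh == 0, 1.0, h - bh)[:, None]
phi_x = ratios.T @ phi_h                     # shape (6,)

# In this all-linear case the propagated attributions match the exact Shapley
# values of the composed model, and they sum to f(x) - f(baseline).
phi_exact = (W2 @ W1).flatten() * (x - bx)
print(np.allclose(phi_x, phi_exact))                         # True
print(np.isclose(phi_x.sum(), (stage2(h) - stage2(bh))[0]))  # True
```

In this toy setting each institution only needs to explain its own model against its own inputs and baselines; the attributions are then composed across the series, which is the distributed scenario the abstract describes.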

Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC9349278
DOI: http://dx.doi.org/10.1038/s41467-022-31384-3

Publication Analysis

Top Keywords

series models: 16
local feature: 8
distributed series: 8
models: 5
explaining series: 4
models propagating: 4
propagating shapley: 4
shapley values: 4
values local: 4
feature attribution: 4
