Mitigating belief projection in explainable artificial intelligence via Bayesian teaching.

Sci Rep

Department of Mathematics and Computer Science, Rutgers University, 101 Warren Street, Newark, NJ, 07102, USA.

Published: May 2021

State-of-the-art deep-learning systems use decision rules that are challenging for humans to model. Explainable AI (XAI) attempts to improve human understanding but rarely accounts for how people typically reason about unfamiliar agents. We propose explicitly modelling the human explainee via Bayesian teaching, which evaluates explanations by how much they shift explainees' inferences toward a desired goal. We assess Bayesian teaching in a binary image classification task across a variety of contexts. Absent intervention, participants predict that the AI's classifications will match their own, but explanations generated by Bayesian teaching improve their ability to predict the AI's judgements by moving them away from this prior belief. Bayesian teaching further allows each case to be broken down into sub-examples (here saliency maps). These sub-examples complement whole examples by improving error detection for familiar categories, whereas whole examples help predict correct AI judgements of unfamiliar cases.
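
The abstract does not state the selection criterion, but as a minimal sketch, assuming the standard Bayesian-teaching formulation from the broader literature, the teacher scores a candidate explanation set D by the posterior it would induce in a modelled learner (the explainee) over the target hypothesis h* describing the AI's behaviour:

P_T(D \mid h^*) \propto P_L(h^* \mid D) = \frac{P(D \mid h^*)\,P(h^*)}{\sum_{h} P(D \mid h)\,P(h)}

On this reading, "shifting explainees' inferences toward a desired goal" amounts to preferring the examples (or saliency-map sub-examples) D that maximise the modelled learner's posterior on h*.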


Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8110978
DOI: http://dx.doi.org/10.1038/s41598-021-89267-4

Publication Analysis

Top Keywords

bayesian teaching: 20
predict ai's: 8
bayesian: 5
teaching: 5
mitigating belief: 4
belief projection: 4
projection explainable: 4
explainable artificial: 4
artificial intelligence: 4
intelligence bayesian: 4
