Explainable AI: A Neurally-Inspired Decision Stack Framework.

Biomimetics (Basel)

Schar School of Policy and Government, George Mason University, Arlington, VA 22201, USA.

Published: September 2022

European law now requires AI to be explainable in the context of adverse decisions affecting European Union (EU) citizens. At the same time, we expect increasing instances of AI failure as it operates on imperfect data. This paper puts forward a neurally inspired theoretical framework called "decision stacks" that can provide a way forward in research to develop Explainable Artificial Intelligence (X-AI). By leveraging findings from memory systems in biological brains, the decision stack framework operationalizes the definition of explainability. It then proposes a test that can potentially reveal how a given AI decision was made.

Source

PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC9496620
DOI: http://dx.doi.org/10.3390/biomimetics7030127
