Publications by authors named "Emanuele Marconato"

Research on Explainable Artificial Intelligence has recently begun exploring explanations that are encoded in terms of high-level concepts rather than low-level features. How to reliably acquire such concepts, however, remains fundamentally unclear. No agreed-upon notion of concept interpretability exists, with the result that the concepts used by both post hoc explainers and neural networks are acquired through a variety of mutually incompatible strategies.