A growing number of explainability methods have emerged in response to black-box models whose decisions cannot be easily explained, creating the need for better ways to evaluate these methods. In this paper we propose a new, feature-based evaluation method. Applied to CNN explanations, the proposed method offers two main advantages: it measures the quality of an explanation in a fully automated way, and the resulting score uses the same information as the CNN itself, so the quality of an explanation can be assessed automatically without introducing human bias into the measurement.
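The abstract does not detail the proposed score, but a common family of automated, feature-based evaluations (not necessarily the paper's method) works by masking the features an explanation ranks as most important and watching how quickly the model's output degrades. The sketch below illustrates this idea on a toy linear "model"; the function name `deletion_score` and all parameters are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def deletion_score(model_fn, x, saliency, steps=10):
    """Faithfulness check: zero out features in order of claimed
    importance and record the model's score after each step.
    A sharper drop suggests the explanation points at features
    the model actually relies on."""
    order = np.argsort(saliency)[::-1]        # most salient first
    x_pert = x.copy()
    scores = [model_fn(x_pert)]
    chunk = max(1, len(order) // steps)
    for i in range(0, len(order), chunk):
        x_pert[order[i:i + chunk]] = 0.0      # mask the next chunk
        scores.append(model_fn(x_pert))
    return np.array(scores)

# Toy example: a linear "model" whose true feature weights are known.
w = np.array([3.0, 0.0, 1.0, 0.0])
model = lambda x: float(w @ x)

good = w        # explanation matching the model's actual weights
bad = w[::-1]   # explanation pointing at the wrong features
# Masking by the faithful explanation degrades the score faster,
# so its curve (and its area) is lower than the unfaithful one's.
```

Because the score is computed from the model's own outputs, it can be obtained automatically for any explanation, which is the property the abstract emphasizes.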
DOI: http://dx.doi.org/10.3233/SHTI241102