<div class="csl-bib-body">
<div class="csl-entry">Nematov, I., Sacharidis, D., Hose, K., & Sagi, T. (2024). <i>The Susceptibility of Example-Based Explainability Methods to Class Outliers</i>. arXiv. https://doi.org/10.48550/arXiv.2407.20678</div>
</div>
-
dc.identifier.uri
http://hdl.handle.net/20.500.12708/224910
-
dc.description.abstract
This study explores the impact of class outliers on the effectiveness of example-based explainability methods for black-box machine learning models. We reformulate existing explainability evaluation metrics, such as correctness and relevance, specifically for example-based methods, and introduce a new metric, distinguishability. Using these metrics, we highlight the shortcomings of current example-based explainability methods, including those that attempt to suppress class outliers. We conduct experiments on two datasets, a text classification dataset and an image classification dataset, and evaluate the performance of four state-of-the-art explainability methods. Our findings underscore the need for robust techniques to tackle the challenges posed by class outliers.
en
dc.language.iso
en
-
dc.subject
explainability
en
dc.subject
interpretability
en
dc.subject
explainability evaluation
en
dc.title
The Susceptibility of Example-Based Explainability Methods to Class Outliers