Reassuring, Misleading, Debunking: Comparing Effects of XAI Methods on Human Decisions

[Teaser figure: HOXAI Study 2]

Abstract

Trust calibration is essential in AI-assisted decision-making. If human users understand the rationale on which an AI model has based a prediction, they can decide whether they consider this prediction reasonable. Especially in high-risk tasks such as mushroom hunting (where a wrong decision may be fatal), it is important that users make correct choices to trust or overrule the AI. Various explainable AI (XAI) methods are currently being discussed as potentially useful for facilitating understanding and subsequently calibrating user trust. So far, however, it remains unclear which approaches are most effective. In this paper, we tested the effects of XAI methods on human AI-assisted decision-making in the high-risk task of mushroom picking. To this end, we compared the effects of (i) Grad-CAM attributions, (ii) nearest-neighbor examples, and (iii) network-dissection concepts in a between-subjects experiment with N = 501 participants. In general, nearest-neighbor examples improved decision correctness the most. However, the effects varied across task items. All explanations seemed to be particularly effective when they revealed reasons to (i) doubt a specific AI classification when the AI was wrong and (ii) trust a specific AI classification when the AI was correct. Our results suggest that well-established methods, such as Grad-CAM attribution maps, might not be as beneficial to end users as expected and that XAI techniques for use in real-world scenarios must be chosen carefully.
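For readers unfamiliar with the first of the three explanation types, the sketch below shows a minimal Grad-CAM implementation in PyTorch. It is purely illustrative: the pretrained ResNet-50, the hooked layer, and the variable names are stand-in assumptions, since the study's actual mushroom classifier and preprocessing pipeline are not part of this page.

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Stand-in classifier: a pretrained torchvision ResNet-50. The study's actual
# mushroom-classification model is not public, so this is only illustrative.
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.eval()

activations, gradients = {}, {}

def save_activation(module, inputs, output):
    activations["value"] = output.detach()

def save_gradient(module, grad_input, grad_output):
    gradients["value"] = grad_output[0].detach()

# Hook the last convolutional block; Grad-CAM weights its feature maps.
target_layer = model.layer4[-1]
target_layer.register_forward_hook(save_activation)
target_layer.register_full_backward_hook(save_gradient)

def grad_cam(image, class_idx=None):
    """Return an [H, W] heatmap of class evidence for one [3, H, W] image."""
    logits = model(image.unsqueeze(0))
    if class_idx is None:
        class_idx = logits.argmax(dim=1).item()
    model.zero_grad()
    logits[0, class_idx].backward()

    # One weight per feature map: the global average of its gradients. The
    # heatmap is the ReLU-rectified weighted sum of the activations.
    weights = gradients["value"].mean(dim=(2, 3), keepdim=True)
    cam = F.relu((weights * activations["value"]).sum(dim=1, keepdim=True))

    # Upsample to input resolution and normalize to [0, 1] for overlaying.
    cam = F.interpolate(cam, size=image.shape[-2:], mode="bilinear",
                        align_corners=False)
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
    return cam[0, 0]

# Usage (hypothetical input tensor):
# heatmap = grad_cam(preprocessed_mushroom_tensor)
```

The other two conditions work differently: nearest-neighbor explanations retrieve training images most similar to the query, and network dissection labels individual feature maps with human-interpretable concepts.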


Citation

Christina Humer, Andreas Hinterreiter, Benedikt Leichtmann, Martina Mara, Marc Streit
Reassuring, Misleading, Debunking: Comparing Effects of XAI Methods on Human Decisions
OSF Preprint, doi:10.31219/osf.io/h6dwz, 2022.

BibTeX

@article{humer2022reassuring,
    title = {Reassuring, Misleading, Debunking: Comparing Effects of XAI Methods on Human Decisions},
    author = {Christina Humer and Andreas Hinterreiter and Benedikt Leichtmann and Martina Mara and Marc Streit},
    journal = {OSF Preprint},
    doi = {10.31219/osf.io/h6dwz},
    url = {https://doi.org/10.31219/osf.io/h6dwz},
    month = {October},
    year = {2022}
}

Acknowledgements

This work was funded by Johannes Kepler University Linz, the Linz Institute of Technology (LIT), the State of Upper Austria, and the Federal Ministry of Education, Science and Research under grant number LIT-2019-7-SEE-117 (awarded to MM and MS), by the Austrian Science Fund under grant number FWF DFH 23-N, and by the Human-Interpretable Machine Learning project (funded by the State of Upper Austria). We thank Moritz Heckmann for helping with the implementation of the AI Forest - The Schwammerl Hunting Game and Stefan Eibelwimmer for the graphic design of the game. We thank Dr. Otto Stoik, the members of the Mycological Working Group (MYAG) at the Biology Center Linz, Austria, and the German Mycological Society (DGfM) for providing mushroom images for this study. Finally, we thank Alfio Ventura for helping with the study setup.