JALALI, Anahid, Bernhard HASLHOFER, Simone KRIGLSTEIN and Andreas RAUBER. Predictability and Comprehensibility in Post-Hoc XAI Methods: A User-Centered Analysis. In Arai, K. Intelligent Computing. Springer, 2023. Available from: https://dx.doi.org/10.1007/978-3-031-37717-4_46.
Basic information
Original name: Predictability and Comprehensibility in Post-Hoc XAI Methods: A User-Centered Analysis
Authors: JALALI, Anahid, Bernhard HASLHOFER, Simone KRIGLSTEIN and Andreas RAUBER
Edition: Intelligent Computing, 2023
Publisher: Springer
Other information
Original language: English
Type of outcome: Proceedings paper
Confidentiality degree: not subject to a state or trade secret
Organizational unit: Faculty of Informatics
DOI: https://dx.doi.org/10.1007/978-3-031-37717-4_46
Keywords (in Czech): eXplainable Artificial Intelligence, Machine Learning Interpretability, Human Computer Interaction
Keywords (in English): eXplainable Artificial Intelligence, Machine Learning Interpretability, Human Computer Interaction
Tags: International impact, Reviewed
Changed by: Priv.-Doz. Dipl.-Ing. Dr. Simone Kriglstein, učo 112812. Changed: 5 September 2023, 22:05
Abstract
Post-hoc explainability methods aim to clarify predictions of black-box machine learning models. However, it is still largely unclear how well users comprehend the provided explanations and whether these increase the users’ ability to predict the model behavior. We approach this question by conducting a user study to evaluate comprehensibility and predictability in two widely used tools: LIME and SHAP. Moreover, we investigate the effect of counterfactual explanations and misclassifications on users’ ability to understand and predict the model behavior. We find that the comprehensibility of SHAP is significantly reduced when explanations are provided for samples near a model’s decision boundary. Furthermore, we find that counterfactual explanations and misclassifications can significantly increase the users’ understanding of how a machine learning model is making decisions. Based on our findings, we also derive design recommendations for future post-hoc explainability methods with increased comprehensibility and predictability.
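For context, the sketch below shows how the two post-hoc explanation tools evaluated in the study, LIME and SHAP, are typically invoked from Python. This is not code from the paper; the dataset and model are illustrative assumptions, and only the standard public APIs of the `shap` and `lime` packages are used.

```python
# Minimal sketch of generating LIME and SHAP explanations for one prediction.
# Assumes: `pip install shap lime scikit-learn`; the breast-cancer dataset and
# random forest are placeholders, not the study's actual setup.
import shap
import lime.lime_tabular
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

data = load_breast_cancer()
X, y = data.data, data.target
model = RandomForestClassifier(random_state=0).fit(X, y)

# SHAP: additive per-feature attributions from a tree-model explainer.
shap_explainer = shap.TreeExplainer(model)
shap_values = shap_explainer.shap_values(X[:1])

# LIME: a local surrogate model fitted around a single sample.
lime_explainer = lime.lime_tabular.LimeTabularExplainer(
    X, feature_names=data.feature_names, class_names=data.target_names
)
lime_exp = lime_explainer.explain_instance(X[0], model.predict_proba)
print(lime_exp.as_list())  # per-feature contributions for this prediction
```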