Measuring the Quality of Explanations: The System Causability Scale (SCS): Comparing Human and Machine Explanations
Andreas Holzinger et al. Kunstliche Intell (Oldenbourg). 2020.
Abstract
Recent successes in Artificial Intelligence (AI) and Machine Learning (ML) allow problems to be solved automatically, without any human intervention. Such autonomous approaches can be very convenient. However, in certain domains, e.g. the medical domain, it is necessary for a domain expert to understand why an algorithm arrived at a certain result. Consequently, the field of Explainable AI (xAI) has rapidly gained interest worldwide across various domains, particularly in medicine. Explainable AI studies the transparency and traceability of opaque AI/ML models, and a wide variety of methods already exists. For example, layer-wise relevance propagation can highlight the parts of the input to, and the representations within, a neural network that caused a given result. This is an important first step towards ensuring that end users, e.g. medical professionals, can assume responsibility for decision making with AI/ML, and it is of interest to professionals and regulators alike. Interactive ML adds human expertise to AI/ML processes by enabling domain experts to re-enact and retrace AI/ML results, e.g. to check them for plausibility. This requires new human-AI interfaces for explainable AI. In order to build effective and efficient interactive human-AI interfaces, we have to address the question of how to evaluate the quality of explanations given by an explainable AI system. In this paper we introduce our System Causability Scale (SCS) to measure the quality of explanations. It is based on our notion of causability (Holzinger et al. in Wiley Interdiscip Rev Data Min Knowl Discov 9(4), 2019) combined with concepts adapted from a widely accepted usability scale.
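The abstract states that the scale adapts concepts from a widely accepted usability scale, which suggests SUS-style Likert scoring. As a minimal illustration of how such a questionnaire score can be computed, here is a short Python sketch; the ten-item count, the five-point scale, and the normalization (sum of ratings over the maximum attainable sum) are assumptions for illustration, not quoted from the paper.

```python
# Minimal sketch of an SUS-style questionnaire score, illustrating how an
# SCS-like instrument could be evaluated. The item count, the five-point
# Likert scale, and the normalization are illustrative assumptions.

def scs_score(ratings, scale_max=5):
    """Return a normalized score in (0, 1] for a list of Likert ratings."""
    if not ratings:
        raise ValueError("at least one rating is required")
    if any(r < 1 or r > scale_max for r in ratings):
        raise ValueError(f"ratings must lie in 1..{scale_max}")
    # Sum of ratings divided by the maximum attainable sum.
    return sum(ratings) / (len(ratings) * scale_max)

if __name__ == "__main__":
    # Example: one expert's ratings for a hypothetical ten-item questionnaire.
    example = [4, 5, 3, 4, 4, 5, 2, 4, 3, 5]
    print(f"normalized score: {scs_score(example):.2f}")  # prints 0.78
```

Under this normalization, rating every item at the midpoint (3 of 5) yields 0.6, and rating every item at the top yields 1.0, so the score is easy to compare across systems and raters.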
Keywords: Explainable AI; Human–AI interfaces; System causability scale (SCS).
© The Author(s) 2020.
Figures
Fig. 1
The Process of Explanation. Explanations (e) by humans and machines (subscripts h and m) must be congruent with statements (s) and models (m), which in turn are based on the ground truth (gt). Statements are a function of representations (r), knowledge (k) and context (c).
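Expressed as formulas, the caption's process might be sketched as below; the function symbol f and the congruence relation are our own notation for the caption's informal wording, not taken from the paper itself.

```latex
% Sketch of the explanation process in Fig. 1 (notation is ours, not the paper's):
% statements are a function of representations, knowledge and context,
\[
  s = f(r, k, c),
\]
% and human and machine explanations must be congruent with the statements s
% and models m, which in turn are grounded in the ground truth gt:
\[
  e_h \cong e_m \quad \text{with respect to } (s, m),
  \qquad (s, m) \text{ based on } gt .
\]
```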
Similar articles
- Explainable AI and Multi-Modal Causability in Medicine.
Holzinger A. I Com (Berl). 2021 Jan 26;19(3):171-179. doi: 10.1515/icom-2020-0024. Epub 2021 Jan 15. PMID: 37014363. Free PMC article.
- Causability and explainability of artificial intelligence in medicine.
Holzinger A, Langs G, Denk H, Zatloukal K, Müller H. Wiley Interdiscip Rev Data Min Knowl Discov. 2019 Jul-Aug;9(4):e1312. doi: 10.1002/widm.1312. Epub 2019 Apr 2. PMID: 32089788. Free PMC article. Review.
- Explainability and causability for artificial intelligence-supported medical image analysis in the context of the European In Vitro Diagnostic Regulation.
Müller H, Holzinger A, Plass M, Brcic L, Stumptner C, Zatloukal K. N Biotechnol. 2022 Sep 25;70:67-72. doi: 10.1016/j.nbt.2022.05.002. Epub 2022 May 6. PMID: 35526802.
- Explainability and causability in digital pathology.
Plass M, Kargl M, Kiehl TR, Regitnig P, Geißler C, Evans T, Zerbe N, Carvalho R, Holzinger A, Müller H. J Pathol Clin Res. 2023 Jul;9(4):251-260. doi: 10.1002/cjp2.322. Epub 2023 Apr 12. PMID: 37045794. Free PMC article. Review.
- CLARUS: An interactive explainable AI platform for manual counterfactuals in graph neural networks.
Metsch JM, Saranti A, Angerschmid A, Pfeifer B, Klemt V, Holzinger A, Hauschild AC. J Biomed Inform. 2024 Feb;150:104600. doi: 10.1016/j.jbi.2024.104600. Epub 2024 Jan 30. PMID: 38301750.
Cited by
- To explain or not to explain?-Artificial intelligence explainability in clinical decision support systems.
Amann J, Vetter D, Blomberg SN, Christensen HC, Coffee M, Gerke S, Gilbert TK, Hagendorff T, Holm S, Livne M, Spezzatti A, Strümke I, Zicari RV, Madai VI; Z-Inspection initiative. PLOS Digit Health. 2022 Feb 17;1(2):e0000016. doi: 10.1371/journal.pdig.0000016. eCollection 2022 Feb. PMID: 36812545. Free PMC article.
- Leading with AI in critical care nursing: challenges, opportunities, and the human factor.
Hassan EA, El-Ashry AM. BMC Nurs. 2024 Oct 14;23(1):752. doi: 10.1186/s12912-024-02363-4. PMID: 39402609. Free PMC article.
- Re-focusing explainability in medicine.
Arbelaez Ossa L, Starke G, Lorenzini G, Vogt JE, Shaw DM, Elger BS. Digit Health. 2022 Feb 11;8:20552076221074488. doi: 10.1177/20552076221074488. eCollection 2022 Jan-Dec. PMID: 35173981. Free PMC article. Review.
- Effects of explainable artificial intelligence in neurology decision support.
Gombolay GY, Silva A, Schrum M, Gopalan N, Hallman-Cooper J, Dutt M, Gombolay M. Ann Clin Transl Neurol. 2024 May;11(5):1224-1235. doi: 10.1002/acn3.52036. Epub 2024 Apr 5. PMID: 38581138. Free PMC article. Clinical Trial.
- Quality Models for Artificial Intelligence Systems: Characteristic-Based Approach, Development and Application.
Kharchenko V, Fesenko H, Illiashenko O. Sensors (Basel). 2022 Jun 27;22(13):4865. doi: 10.3390/s22134865. PMID: 35808361. Free PMC article.