Explainable artificial intelligence (AI) is currently attracting much interest in the AI world, and particularly in medicine. Technically, the problem of explainability is as old as AI itself, and classic AI approaches were comprehensible and retraceable. Their weakness, however, lay in dealing with the uncertainties of the real world. With the introduction of probabilistic learning, applications became increasingly successful, but also increasingly opaque. Explainable AI deals with implementing transparency and traceability for statistical black-box machine learning methods, particularly deep learning (DL). In our recent paper we argue that there is a need to go beyond explainable AI: to reach a level of explainable medicine, we need causability. In the same way that usability encompasses measurements for the quality of use, causability encompasses measurements for the quality of explanations. The article provides the definitions necessary to discriminate between explainability and causability, together with a use case in histopathology that contrasts deep learning interpretation with human explanation. Its main contribution is the notion of causability, which differs from explainability in that causability is a property of a person, while explainability is a property of a system.

The article is categorized under Fundamental Concepts of Data and Knowledge > Human Centricity and User Interaction and is available online (full open access) here: https://onlinelibrary.wiley.com/doi/full/10.1002/widm.1312

Andreas Holzinger, Georg Langs, Helmut Denk, Kurt Zatloukal & Heimo Müller 2019. Causability and Explainability of Artificial Intelligence in Medicine. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, 9, (4), e1312, doi:10.1002/widm.1312

The paper has been among the most downloaded articles at Wiley. Building on it, we have also developed the System Causability Scale (SCS); see: https://link.springer.com/article/10.1007/s13218-020-00636-z

Andreas Holzinger, Andre Carrington & Heimo Müller 2020. Measuring the Quality of Explanations: The System Causability Scale (SCS). Comparing Human and Machine Explanations. KI – Künstliche Intelligenz (German Journal of Artificial Intelligence), Special Issue on Interactive Machine Learning, edited by Kristian Kersting, TU Darmstadt, 34, (2), doi:10.1007/s13218-020-00636-z
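For readers who want a concrete picture of how such a Likert-based instrument can be scored, here is a minimal Python sketch. It is an illustration under stated assumptions, not the exact instrument from the paper: we assume ten statements rated on a 1-5 Likert scale and normalize the summed ratings by the maximum attainable sum; the function name scs_score and the example ratings are hypothetical.

```python
# Minimal sketch of scoring a ten-item, 5-point Likert questionnaire in the
# spirit of the System Causability Scale (SCS). The normalization used here
# (sum of ratings divided by the maximum attainable sum) is an assumption
# for illustration; consult the SCS paper for the exact items and scoring.

LIKERT_MIN, LIKERT_MAX = 1, 5
NUM_ITEMS = 10

def scs_score(ratings: list[int]) -> float:
    """Aggregate ten 1-5 Likert ratings into a normalized score in (0, 1]."""
    if len(ratings) != NUM_ITEMS:
        raise ValueError(f"expected {NUM_ITEMS} ratings, got {len(ratings)}")
    if any(not (LIKERT_MIN <= r <= LIKERT_MAX) for r in ratings):
        raise ValueError("each rating must be on the 1-5 Likert scale")
    return sum(ratings) / (NUM_ITEMS * LIKERT_MAX)

# Example: one participant's ratings of an explanation interface (hypothetical).
ratings = [4, 5, 3, 4, 4, 5, 3, 4, 4, 5]
print(f"normalized causability score: {scs_score(ratings):.2f}")  # 0.82
```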

All of this builds on our earlier human-in-the-loop approach:
https://link.springer.com/article/10.1007/s40708-016-0042-6