Explainability vs. Causability of Artificial Intelligence in Medicine

In our recent paper, we define the notion of causability, which differs from explainability in that causability is a property of a person, while explainability is a property of a system!

The need for deep understanding of algorithms

There are many different machine learning algorithms for any given problem, but which one should we choose for solving a practical problem? Comparing learning algorithms is very difficult and depends strongly on the quality of the data!
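As a small, purely illustrative sketch of such a comparison (assuming scikit-learn and one of its toy datasets; the three candidate models are my own choices, not a recommendation), one can score several algorithms on the same data via cross-validation, keeping in mind that the ranking can flip completely on different data:

```python
# A minimal sketch: compare candidate algorithms on the *same* data
# via 5-fold cross-validation (scikit-learn assumed to be installed).
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)

candidates = {
    "logistic regression": make_pipeline(StandardScaler(), LogisticRegression()),
    "random forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "SVM (RBF kernel)": make_pipeline(StandardScaler(), SVC()),
}

for name, model in candidates.items():
    scores = cross_val_score(model, X, y, cv=5)
    # Note: this ranking can change completely on a smaller, noisier or
    # differently distributed dataset -- the comparison depends on the data.
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```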

AI, explain yourself!

“It’s time for AI to move out its adolescent, game-playing phase and take seriously the notions of quality and reliability.”

There is an interesting commentary with interviews by Don MONROE in the recent Communications of the ACM, November 2018, Volume 61, Number 11, Pages 11-13:

Artificial Intelligence (AI) systems are taking over a vast array of tasks that previously depended on human expertise and judgment (only). Often, however, the “reasoning” behind their actions is unclear, and can produce surprising errors or reinforce biased processes. One way to address this issue is to make AI “explainable” to humans—for example, designers who can improve it or let users better know when to trust it. Although the best styles of explanation for different purposes are still being studied, they will profoundly shape how future AI is used.

Some explainable AI, or XAI, has long been familiar, as part of online recommender systems: book purchasers or movie viewers see suggestions for additional selections described as having certain similar attributes, or being chosen by similar users. The stakes are low, however, and occasional misfires are easily ignored, with or without these explanations.
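As a toy illustration of this style of explanation (the catalogue, the attributes and the function below are invented for this sketch, not taken from any real recommender system), an item can be recommended together with the shared attributes that justify the recommendation:

```python
# A toy "explainable" recommender: items are suggested because they share
# attributes with an item the user already liked, and the shared attributes
# double as the explanation shown to the user.
catalogue = {
    "The Martian":         {"science fiction", "survival", "space"},
    "Gravity":             {"survival", "space", "thriller"},
    "Interstellar":        {"science fiction", "space", "drama"},
    "Pride and Prejudice": {"romance", "classic", "drama"},
}

def recommend(liked_title, top_n=2):
    liked_attrs = catalogue[liked_title]
    scored = []
    for title, attrs in catalogue.items():
        if title == liked_title:
            continue
        shared = liked_attrs & attrs          # overlap in attributes
        scored.append((len(shared), title, shared))
    scored.sort(reverse=True)
    for _, title, shared in scored[:top_n]:
        # The explanation is simply the set of shared attributes.
        print(f"Recommended: {title} -- because it shares {sorted(shared)} "
              f"with {liked_title}")

recommend("The Martian")
```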

“Considering the internal complexity of modern AI, it may seem unreasonable to hope for a human-scale explanation of its decision-making rationale”.

Read the full article here:
https://cacm.acm.org/magazines/2018/11/232193-ai-explain-yourself/fulltext


What if the AI answers are wrong?

Cartoon no. 1838 from the xkcd [1] Web comic by Randall MUNROE [2] describes in a brilliantly sarcastic way the state of the art in AI/machine learning today and points directly at the current main problem. Of course you will always get results from one of your machine learning models: just pour in your data and you will get results – any results. That’s easy. The main question remains open: “What if the results are wrong?” The central problem is knowing at all whether the results are wrong, and to what degree. Do you know your error? Or do you just believe what you get? This can be ignored in some areas and tolerated in others, but in a safety-critical domain, e.g. in medicine, it is crucial [3]. Here the interactive machine learning approach can also help to compensate for, or lower, the generalization error through human intuition [4].
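To make the question “do you know your error?” slightly more concrete, here is a minimal sketch (assuming scikit-learn; the dataset, the model and the normal-approximation interval are illustrative choices of mine) that reports a held-out error estimate together with a rough confidence interval instead of a bare accuracy number:

```python
# Estimate the generalization error on held-out data and attach a simple
# binomial (normal-approximation) 95% confidence interval to it.
import math
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

n = len(y_test)
errors = (model.predict(X_test) != y_test).sum()
err_rate = errors / n

half_width = 1.96 * math.sqrt(err_rate * (1 - err_rate) / n)
print(f"estimated error: {err_rate:.3f} "
      f"(95% CI roughly [{max(0.0, err_rate - half_width):.3f}, "
      f"{err_rate + half_width:.3f}])")
# In a safety-critical domain this interval -- and how the test data were
# collected -- matters at least as much as the point estimate itself.
```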


[1] https://xkcd.com

[2] https://en.wikipedia.org/wiki/Randall_Munroe

[3] Andreas Holzinger, Chris Biemann, Constantinos S. Pattichis & Douglas B. Kell (2017). What do we need to build explainable AI systems for the medical domain? arXiv:1712.09923. Available online: https://arxiv.org/abs/1712.09923v1

[4] Andreas Holzinger (2016). Interactive Machine Learning for Health Informatics: When do we need the human-in-the-loop? Brain Informatics, 3(2), 119-131, doi:10.1007/s40708-016-0042-6. Available online:
https://hci-kdd.org/2018/01/29/iml-human-loop-mentioned-among-10-coolest-applications-machine-learning

There is also a discussion of the comic above:

https://www.explainxkcd.com/wiki/index.php/1838:_Machine_Learning


Project FeatureCloud – Pre-Project Meeting and Workshop successful

On October 21-22, 2018, the project partners of the EU RIA 826078 FeatureCloud project (EUR 4,646,000) met at the Technische Universität München, Campus Weihenstephan. Starting on January 1, 2019, the project partners will work jointly for 60 months on awesome topics around federated machine learning and explainability. The project’s ground-breaking novel cloud-AI infrastructure will only exchange learned representations (the feature parameters θ, hence the name “feature cloud”), which are anonymous by default. This approach is privacy by design, or more precisely: privacy by architecture. The highly interdisciplinary consortium, ranging from AI and machine learning experts to medical professionals, covers all aspects of the value chain: assessment of cyber risks, legal considerations and international policies, and development of state-of-the-art federated machine learning technology coupled to blockchaining, encompassing social issues and AI ethics.
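For readers wondering what “only exchanging the learned parameters θ” could look like in its simplest form, here is a deliberately simplified sketch with NumPy. The linear model, the three simulated “hospitals” and the plain weighted averaging are illustrative assumptions of mine, not the actual FeatureCloud architecture:

```python
# Each site trains locally; only the learned parameter vector theta leaves
# the site, never the raw (medical) data. A coordinator then aggregates the
# parameters, here by a weighted average over the local sample sizes.
import numpy as np

rng = np.random.default_rng(0)

def local_fit(X, y):
    """Ordinary least squares on local data; returns only theta."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Three hospitals with private data drawn from the same underlying model.
true_theta = np.array([2.0, -1.0, 0.5])
sites = []
for _ in range(3):
    X = rng.normal(size=(200, 3))
    y = X @ true_theta + rng.normal(scale=0.1, size=200)
    sites.append((X, y))

# Each site sends its locally learned parameters; the coordinator averages
# them without ever seeing X or y.
thetas = [local_fit(X, y) for X, y in sites]
weights = [len(y) for _, y in sites]
federated_theta = np.average(thetas, axis=0, weights=weights)

print("federated theta:", np.round(federated_theta, 3))
```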

How different are Cats vs. Cells in Histopathology?

An awesome question stated in an article by Michael BEREKET and Thao NGUYEN (February 7, 2018) gets straight to the point: Deep learning has revolutionized the field of computer vision. So why are pathologists still spending their time looking at cells through microscopes?

The most famous machine learning experiments have been done on recognizing cats (see the video by Peter Norvig) – and the question is relevant: how different are these cats from the cells in histopathology?

Machine learning, and in particular deep learning, has reached human-level performance in certain tasks, particularly in image classification. Interestingly, in the field of pathology these methods are currently not so ubiquitously used. A valid question indeed is: why do human pathologists spend so much time on visual inspection? Of course, we restrict this debate to routine tasks!
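For readers who have not seen such a pipeline, here is a minimal sketch (assuming Keras/TensorFlow; the patch size, the two classes and the network depth are placeholders, not a validated pathology model) of the kind of image classifier used for both cat photos and cell patches:

```python
# A small convolutional network that maps an image patch to a class label.
# Real pathology pipelines use far larger images, pretrained backbones and
# careful clinical validation.
import tensorflow as tf
from tensorflow.keras import layers

num_classes = 2  # e.g. "tumour" vs "normal" patch -- purely illustrative

model = tf.keras.Sequential([
    layers.Input(shape=(64, 64, 3)),           # RGB patch
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# Training would then be: model.fit(patches, labels, ...), where `patches`
# are labelled image patches extracted from whole-slide images.
```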

This excellent article is worth a read:
Stanford AI for healthcare: How different are cats from cells

Source of the animated gif above:
https://giphy.com/gifs/microscope-fluorescence-mitosis-2G5llPaffwvio

Yoshua Bengio emphasizes: Deep Learning needs Deep Understanding!

Yoshua BENGIO from the Canadian Institute for Advanced Research (CIFAR) emphasized during his workshop talk entitled “Towards disentangling underlying explanatory factors” (cool title) at ICML 2018 in Stockholm that the key to success in AI/machine learning is to understand the explanatory/causal factors and mechanisms. This means generalizing beyond independent and identically distributed (i.i.d.) data; current machine learning theory depends strongly on this i.i.d. assumption, but real-world applications (we see this in the medical domain!) often require learning and generalizing in areas simply not seen during training. Humans, interestingly, are able to cope in such situations, even in situations they have never seen before. See Yoshua BENGIO’s awesome talk here:
http://www.iro.umontreal.ca/~bengioy/talks/ICMLW-limitedlabels-13july2018.pptx.pdf

and here is a longer talk (1:17:04) at Microsoft Research Redmond on January 22, 2018 – awesome – enjoy the talk, I cordially recommend it to all my students!
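To make the i.i.d. point above concrete, here is a small synthetic sketch (the data, the model and the particular shift are all invented for illustration, assuming scikit-learn and NumPy): a classifier that does well on test data from the training distribution can fail badly on data it has never seen during training:

```python
# Train on one data distribution, then evaluate on (a) an i.i.d. test set and
# (b) a test set from a shifted distribution not covered by the training data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_data(center, n=500):
    """Two Gaussian classes located around `center` and `center + 2`."""
    X0 = rng.normal(loc=center, scale=1.0, size=(n, 2))
    X1 = rng.normal(loc=center + 2.0, scale=1.0, size=(n, 2))
    X = np.vstack([X0, X1])
    y = np.array([0] * n + [1] * n)
    return X, y

X_train, y_train = make_data(center=0.0)   # training distribution
X_iid, y_iid = make_data(center=0.0)       # i.i.d. test set
X_shift, y_shift = make_data(center=4.0)   # shifted, unseen region

clf = LogisticRegression().fit(X_train, y_train)
print("accuracy on i.i.d. test data:   ", clf.score(X_iid, y_iid))
print("accuracy on shifted test data:  ", clf.score(X_shift, y_shift))
```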

Federated Machine Learning – Privacy by Design EU project won

Federated machine learning – privacy by design EU-project granted!

Good news from Brussels: our EU RIA project application 826078 FeatureCloud, with a total volume of EUR 4,646,000, has just been granted. The project was submitted to the H2020-SC1-FA-DTS-2018-2020 call “Trusted digital solutions and Cybersecurity in Health and Care”. The project is led by TU Munich, and we are excited to work in a super cool project consortium together with our partners for the next 60 months. The project’s ground-breaking novel cloud-AI infrastructure only exchanges learned representations (the feature parameters θ, hence the name “feature cloud”), which are anonymous by default (no hassle with “real medical data” – no ethical issues). Collectively, our highly interdisciplinary consortium, from AI and machine learning to medicine, covers all aspects of the value chain: assessment of cyber risks, legal considerations and international policies, and development of state-of-the-art federated machine learning technology coupled to blockchaining and encompassing AI-ethics research. FeatureCloud’s goals are challenging and bold, obviously, but achievable, paving the way for a socially agreeable big data era for the benefit of future medicine. Congratulations to the great project consortium!

Judea Pearl on explainable-AI: teach machines cause and effect

To build truly intelligent machines, teach them cause and effect, emphasizes Judea PEARL in a recent Quanta Magazine article (May 15, 2018) by Kevin HARTNETT. Judea Pearl won the 2011 Turing Award (“the Nobel Prize of Computer Science”) and has just published his newest book, “The Book of Why: The New Science of Cause and Effect”, wherein Pearl argues that AI has been handicapped by an incomplete understanding of what intelligence really is. Causal reasoning is a cornerstone of explainable AI!
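As a tiny numerical illustration of Pearl’s distinction between seeing and doing (the structural model and all probabilities below are invented for this sketch), one can simulate a confounded system and compare the observational quantity P(Y=1 | X=1) with the interventional quantity P(Y=1 | do(X=1)):

```python
# A confounder Z influences both X and Y, so simply conditioning on X = 1
# mixes in the effect of Z, while intervening (do(X=1)) sets X for everyone
# and keeps only the causal mechanism from X to Y.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

Z = rng.binomial(1, 0.5, n)                       # confounder
X = rng.binomial(1, 0.2 + 0.6 * Z)                # X depends on Z
Y = rng.binomial(1, 0.1 + 0.3 * X + 0.4 * Z)      # Y depends on X and Z

# Observational: condition on X = 1 (Z is *not* held fixed).
p_obs = Y[X == 1].mean()

# Interventional: set X = 1 for everyone, keeping the mechanism for Y.
Y_do = rng.binomial(1, 0.1 + 0.3 * 1 + 0.4 * Z)
p_do = Y_do.mean()

print(f"P(Y=1 | X=1)     ~ {p_obs:.3f}")   # inflated by the confounder
print(f"P(Y=1 | do(X=1)) ~ {p_do:.3f}")    # the causal effect of setting X
```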

Read the interesting article here:
https://www.quantamagazine.org/to-build-truly-intelligent-machines-teach-them-cause-and-effect-20180515

The book is also announced by the UCLA Newsroom, along with a nice interview; see:
http://newsroom.ucla.edu/stories/artificial-intelligence-pioneers-new-book-examines-the-science-of-cause-and-effect