Machine Learning and Knowledge Extraction in Digital Pathology

ABSTRACT

Together with the Diagnostic and Research Institute of Pathology (Prof. Kurt ZATLOUKAL, head of the Diagnostic and Research Center for Molecular Biomedicine, and the data management and machine learning group of Dr. Heimo MUELLER) we are working on the whole machine learning pipeline, from preprocessing to visualization of the results, with a particular emphasis on making the results retraceable and hence interpretable to a human expert, working towards explainable AI and causability. We base our work on our ICT-2011.9.5 – FET Flagship Initiative Preparatory Action “IT Future of Medicine”, in a joint effort with BBMRI.at and the ADOPT project.

The work of pathologists is interesting for fundamental research in Artificial Intelligence (AI) and Machine Learning (ML) for several reasons: 1) digital pathology is not just the transformation of the classical microscopic analysis of histological slides by pathologists into a digital visualization; it is an innovation that will dramatically change medical workflows in the coming years; 2) much information is hidden in arbitrarily high-dimensional spaces of heterogeneous data sources (images, patient records, *omics data) and is not accessible to a human, so we need AI/ML for information fusion, thereby generating new information that was not yet available and is not exploited in current diagnostics; 3) pathologists are able to learn from very few examples and to transfer previously learned knowledge quickly to new tasks.

Insights into the latter support AI research theoretically and ML research practically, and may contribute to answering a grand question: how can machine learning algorithms perform a task by exploiting knowledge extracted while solving previous tasks? Contributions to this problem of transfer learning would have a major impact on Artificial Intelligence generally and on Machine Learning specifically; it implies software that can learn from experience and adapt to context, similarly to how we humans do (see the first sketch below).

A major aspect in the medical domain is to foster transparency, explainability and traceability: to explain why a machine decision has been made and to understand the underlying explanatory factors and their context. Here the iML glass-box approach with a human in the loop can be beneficial (see the second sketch below): it can foster the trust of medical experts in AI generally and in ML specifically, it emphasizes the importance of the human expert and ensures her/his role, and it frees her/him from routine work, enabling what neither a machine nor a human can achieve on their own.
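To make the transfer-learning question concrete, here is a minimal sketch in PyTorch (one possible toolchain; the project does not prescribe one): an ImageNet-pretrained network is reused for a hypothetical tumour-vs-normal patch classifier. The folder layout `patches/train`, the two-class head and all hyperparameters are illustrative assumptions, not project code.

```python
# Minimal transfer-learning sketch (PyTorch assumed): reuse ImageNet
# features for a hypothetical tumour-vs-normal patch classifier.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# A ResNet-18 pretrained on ImageNet: its convolutional layers carry
# knowledge extracted while solving a previous task.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pretrained backbone so only the new head is trained.
for param in model.parameters():
    param.requires_grad = False

# Replace the final layer with a fresh 2-class head for the new task.
model.fc = nn.Linear(model.fc.in_features, 2)

preprocess = transforms.Compose([
    transforms.Resize(224),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],   # ImageNet statistics
                         std=[0.229, 0.224, 0.225]),
])

# Hypothetical folder of labelled histology patches (tumour/, normal/).
train_set = datasets.ImageFolder("patches/train", transform=preprocess)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:        # one epoch is enough for a sketch
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```

Freezing the backbone and training only the new head is the simplest way of exploiting knowledge extracted on a previous task; unfreezing deeper layers for fine-tuning is a common next step.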

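The iML glass-box idea can likewise be sketched as a small uncertainty-sampling loop, here with a scikit-learn classifier on synthetic data; `ask_expert` is a hypothetical stand-in for the pathologist's annotation interface, and none of the names below come from the project itself.

```python
# Minimal human-in-the-loop (iML) sketch: uncertainty sampling with a
# glass-box linear model; the data and the expert are simulated.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_pool = rng.normal(size=(500, 16))        # unlabelled feature vectors
y_true = (X_pool[:, 0] > 0).astype(int)    # hidden ground truth

def ask_expert(idx):
    # Hypothetical stand-in: a real system would display the case to the
    # pathologist and record her/his label.
    return int(y_true[idx])

# Seed set with both classes present, "labelled by the expert".
seed = list(np.where(y_true == 1)[0][:5]) + list(np.where(y_true == 0)[0][:5])
labelled = [int(i) for i in seed]
labels = [ask_expert(i) for i in labelled]

model = LogisticRegression().fit(X_pool[labelled], labels)

for _ in range(5):
    # Query the case the model is least sure about (probability near 0.5).
    proba = model.predict_proba(X_pool)[:, 1]
    uncertainty = np.abs(proba - 0.5)
    uncertainty[labelled] = np.inf         # skip already-labelled cases
    query = int(np.argmin(uncertainty))
    labelled.append(query)
    labels.append(ask_expert(query))       # expert answer feeds retraining
    model = LogisticRegression().fit(X_pool[labelled], labels)
```

The point of the loop is that the expert labels only the cases the model is least sure about, so routine work stays with the machine while the human handles exactly the hard cases.
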
  • Publications

    [3] Regitnig, P., Mueller, H. & Holzinger, A. 2020. Expectations of Artificial Intelligence in Pathology. Springer Lecture Notes in Artificial Intelligence LNAI 12090. Cham: Springer, pp. 1-15. doi: 10.1007/978-3-030-50402-1_1

    [2] Holzinger, A., Malle, B., Kieseberg, P., Roth, P.M., Müller, H., Reihs, R. & Zatloukal, K. 2017. Towards the Augmented Pathologist: Challenges of Explainable-AI in Digital Pathology. arXiv:1712.06657.

    [1] Holzinger, A., Malle, B., Kieseberg, P., Roth, P. M., Müller, H., Reihs, R. & Zatloukal, K. 2017. Machine Learning and Knowledge Extraction in Digital Pathology needs an integrative approach. Springer Lecture Notes in Artificial Intelligence LNAI 10344. Cham: Springer, pp. 13-50. doi: 10.1007/978-3-319-69775-8_2

  • Technical Area

    Deep Learning, interactive Machine Learning, geometrical approaches

  • Application Area

    Digital Pathology