Intelligent User Interfaces – Human-Computer Interaction meets Artificial Intelligence
Summer Term 2019 > Start Monday, March 4, 2019, 14:00, seminar room IDEG 134, Inffeldgasse 16c – Graz
Intelligent User Interfaces: Explanation Interfaces
See here a short (9 min) video intro: https://goo.gl/dB9heh
GOAL: In this research-based teaching course you will learn to build explanation user-interface frameworks that connect explainable models to human end users in the real world.
BACKGROUND: Artificial Intelligence (AI) and Machine Learning (ML) demonstrate impressive success. Particularly deep learning (DL) approaches hold great promise (see differences between AI/ML/DL here). Unfortunately, the best performing methods turn out to be “black boxes”. Of course this is not quite true, but even if we understand the underlying mathematical principles, such models have no explicit declarative knowledge representation and therefore have difficulty generating the required explanatory and contextual structures. This lack of transparency considerably limits the achievement of their full potential in certain application domains. Consequently, in safety-critical systems and domains (e.g. in health) we may raise the questions: “Can we trust these results?” and “Can we explain how and why a result was achieved?”. This is crucial for user acceptance, because the ultimate responsibility, e.g. in medicine, remains with the human. Therefore, systems must enable transparency and re-traceability, at least on demand.
Generally, there is growing industrial demand for machine learning approaches that are not only well performing, but also transparent, interpretable and trustworthy, e.g. in medicine, production, robotics, autonomous driving, etc.
However, methods and models that re-enact the machine decision-making process, and that reproduce and make comprehensible the learning and knowledge-extraction process, need effective user interfaces. For decision support it is necessary to understand the causality of learned representations. If human intelligence is complemented by machine learning, and in at least some cases even overruled, humans must still be able to understand, and above all to interactively influence, the machine decision process. This requires context awareness and sensemaking to close the gap between human thinking and machine “thinking”.
All this needs not only explainable models but also novel intelligent explanation user interfaces that support the end user in understanding why the machine came up with a particular decision. This requires interaction with the algorithm and visualization of the underlying explanatory factors for sensemaking.
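As a minimal illustration of the kind of explanatory factor such an interface could visualize, the sketch below estimates per-feature importance for a single prediction of a black-box model by perturbing each input feature and measuring how strongly the output reacts. All names here (`perturbation_importance`, the toy linear `predict`) are hypothetical illustrations; real systems would typically use dedicated explanation methods such as LIME or SHAP.

```python
import numpy as np

def perturbation_importance(predict, x, n_samples=200, noise=0.1, seed=0):
    """Estimate per-feature importance for one prediction by measuring how
    much the model output changes when each feature is perturbed."""
    rng = np.random.default_rng(seed)
    base = predict(x)
    importance = np.zeros(len(x))
    for i in range(len(x)):
        for s in rng.normal(0.0, noise, size=n_samples):
            xp = x.copy()
            xp[i] += s                       # perturb only feature i
            importance[i] += abs(predict(xp) - base)
        importance[i] /= n_samples           # mean absolute output change
    return importance

# Toy "black box": a linear model whose weights the explainer does not see.
weights = np.array([2.0, 0.0, -1.0])
predict = lambda x: float(weights @ x)

x = np.array([1.0, 1.0, 1.0])
scores = perturbation_importance(predict, x)
```

An explanation interface would then render `scores` to the end user, e.g. as a bar chart over feature names, so that the dominant factors behind the decision become visible.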
In this course we will work on mini-projects drawn from diverse real-world projects, e.g. our digital pathology project.
Last updated by A.Holzinger 28.10.2018, 16:00 CET
Human-Computer Interaction meets Artificial Intelligence
[AK HCI, 16S, 706.046, 3 VU, 4,5 ECTS > TUG-Online]
Intelligent User Interfaces (IUI) is where Human-Computer Interaction (HCI) meets Artificial Intelligence (AI), often defined as the design of intelligent agents – the core essence of Machine Learning (ML). In interactive Machine Learning (iML) these agents can also be humans:
Holzinger, A. 2016. Interactive Machine Learning for Health Informatics: When do we need the human-in-the-loop? Springer Brain Informatics (BRIN), 3, (2), 119-131, doi:10.1007/s40708-016-0042-6.
Holzinger, A. 2016. Interactive Machine Learning (iML). Informatik Spektrum, 39, (1), 64-68, doi:10.1007/s00287-015-0941-6.
Holzinger, A., Plass, M., Holzinger, K., Crisan, G.C., Pintea, C.-M. & Palade, V. 2017. A glass-box interactive machine learning approach for solving NP-hard problems with the human-in-the-loop. arXiv:1708.01104.
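The human-in-the-loop idea behind iML can be sketched as a small uncertainty-driven loop: the model repeatedly queries a human oracle about its least-confident example, receives a label, and retrains. This is an illustrative sketch only, not the method of the cited papers: the nearest-centroid model is a deliberately simple stand-in, and `human_oracle` simulates the human, who in a real iML system would answer through an interactive interface.

```python
import numpy as np

def train_centroids(X, y):
    # Nearest-centroid "model": one mean vector per class.
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def confidence_margin(centroids, x):
    # Distance gap between the two nearest class centroids;
    # a small gap means the model is unsure about x.
    d = sorted(np.linalg.norm(x - m) for m in centroids.values())
    return d[1] - d[0]

def human_oracle(x):
    # Hypothetical stand-in for the human-in-the-loop.
    return int(x[0] > 0.5)

rng = np.random.default_rng(1)
pool = list(rng.random((40, 2)))             # unlabeled examples
X_lab = np.array([[0.1, 0.5], [0.9, 0.5]])   # two labeled seed examples
y = np.array([0, 1])

for _ in range(5):
    model = train_centroids(X_lab, y)
    margins = [confidence_margin(model, x) for x in pool]
    i = int(np.argmin(margins))              # least-confident pool example
    x_query = pool.pop(i)                    # ask the human about it
    X_lab = np.vstack([X_lab, x_query])
    y = np.append(y, human_oracle(x_query))
```

The design point is that the human is not a passive data source: by choosing which examples reach the oracle (here by smallest confidence margin), the loop directs human effort to exactly the cases where the machine is uncertain.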
In this practically oriented course, Software Engineering is seen as a dynamic, interactive and cooperative process that facilitates an optimal mixture of standardization and tailor-made solutions. Here you have the chance to work on real-world problems.
Previous knowledge expected
Interest in experimental Software Engineering in the sense of:
Science is to test crazy ideas – Engineering is to put these ideas into Business.
Interest in cross-disciplinary work, particularly in the HCI-KDD approach: Many novel discoveries and insights are found at the intersection of two domains, see: A. Holzinger, “Human–Computer Interaction and Knowledge Discovery (HCI-KDD): What is the benefit of bringing those two fields to work together?“, in Multidisciplinary Research and Practice for Information Systems, Springer Lecture Notes in Computer Science LNCS 8127, A. Cuzzocrea, C. Kittl, D. E. Simos, E. Weippl, and L. Xu, Eds., Heidelberg, Berlin, New York: Springer, 2013, pp. 319-328. [DOI] [Download pdf]
After successful completion of this course:
- Students are autonomously able to apply a selection of the most important scientific HCI methods and practical methods of Usability Engineering (UE)
- Students understand the most essential problems that end users face in our modern, complex and dynamic environment
- Students are able to apply the most important experimental designs
- Students learn to deal with the problems in modern user interface design
- Students are able to conduct elementary research experiments and carry out solid evaluations in HCI research
1. One scientific paper per group (50 %)
2. Project presentations during the semester – EVERY student of a group has to present one part of the work! (50%)
To counter arguments against paper writing, please have a look at this from a Harvard Master's (!) course:
Basically this VU is a very practice-led course, and therefore the majority of the work will take place at home or in the field (field work with end-users). The room is reserved from 14:00 to 18:00, but that does not mean that we always need the full time! (Room IDEG134, Inffeldgasse 16c). Please make sure you are on time that day, as we will be presenting the projects (first come, first served!).
|Mo 04.03.2019||14:00 –||IDEG134||01 Introduction and presentation of cool mini-projects by the tutors|
|Mo 11.03.2019||14:00 –||IDEG134||02 Presenting the mini-project goals by the groups – to ensure mutual understanding|
|Now working individually with your tutors in the context of our real-world project, e.g. digital pathology||individual mini-project||real-world||Alone, in pairs or in groups of at most three colleagues you work on your individual mini-project with the course tutors|
|Mo 16.04.2018||14:00 –||IDEG134||03 Progress Meeting – presenting the mini project status|
|Mo 14.05.2018||14:00 –||IDEG134||04 Progress Report presentation – mid term review|
|Mo 11.06.2018||14:00 –||IDEG134||05 Mini Conference – final presentation|
General guidelines for the scientific paper
Holzinger, A. (2010). Process Guide for Students for Interdisciplinary Work in Computer Science/Informatics. Second Edition. Norderstedt: BoD (128 pages, ISBN 978-3-8423-2457-2)
also available at Fachbibliothek Inffeldgasse.
Scientific paper templates
Please use the following templates for your scientific paper:
(new) A general LaTeX template can be found on Overleaf > https://www.overleaf.com/4525628ngbpmv
Further information and templates available at: Springer Lecture Notes in Computer Science (LNCS)
Paper review template
Power-Point Template for the final presentation:
- 2019-clean-slides (pptx, 4,010 kB)
Some pointers to interesting sources in intelligent HCI:
- Visual Turing Test, see: Lake, B. M., Salakhutdinov, R. & Tenenbaum, J. B. 2015. Human-level concept learning through probabilistic program induction. Science, 350, (6266), 1332-1338. [http://web.mit.edu/cocosci/Papers/Science-2015-Lake-1332-8.pdf]
You can try out some online experiments (“visual Turing tests”) to see whether you can tell the difference between human and computer behavior. The code and images for running these experiments are available on GitHub.
- The Human Kernel, see: Wilson, A. G., Dann, C., Lucas, C. & Xing, E. P. 2015. The Human Kernel. Advances in Neural Information Processing Systems, 2836-2844. [papers.nips.cc/paper/5765-the-human-kernel.pdf]
You can try out some online experiments for the Human Kernel here:
- Hernández-Orallo, J. 2016. The measure of all minds: evaluating natural and artificial intelligence, Cambridge University Press.
- Trust building with explanation interfaces, see: https://hci.epfl.ch/members/pearl/index.html
Pearl Pu & Li Chen 2006. Trust building with explanation interfaces. Proceedings of the 11th international conference on Intelligent user interfaces. Sydney, Australia: ACM. 93-100, doi:10.1145/1111449.1111475.