MAKE-Explainable AI (MAKE – eXAI) Workshop

Canterbury, Kent (UK), August 26-29, 2019

CD-MAKE 2019 Workshop on explainable Artificial Intelligence, supported by IFIP and Springer/Nature,
in the context of the CD-MAKE conference and the
14th International Conference on Availability, Reliability and Security (ARES 2019)

(this page is current as of 12.12.2018 09:00 CET)

HISTORY:

After the success of our 1st international workshop on explainable AI, see:
https://2018.cd-make.net/special-sessions/make-explainable-ai/index.html

and see our output in Springer Lecture Notes in Computer Science LNCS 11015
https://link.springer.com/book/10.1007/978-3-319-99740-7

we organize our 2nd international workshop on explainable AI at IFIP CD-MAKE 2019 in Canterbury, Kent (UK),
from August 26 to August 29, 2019.

GOAL:

In this cross-disciplinary workshop we aim to bring together international cross-domain experts interested in artificial intelligence/machine learning to stimulate research, engineering and evaluation in and for explainable AI – towards making machine decisions transparent, re-enactable, comprehensible, interpretable, and thus explainable, re-traceable and reproducible – and towards causality research, one of the cornerstones of scientific research per se.

SUBMISSION:

All submissions will be peer-reviewed by three members of our international scientific committee. Accepted papers will be presented at the workshop orally or as a poster and published in the IFIP CD-MAKE volume of Springer Lecture Notes in Computer Science (LNCS); see LNCS 11015 as an example.

There is also the opportunity to submit to our thematic collection “Explainable AI in Medical Informatics and Decision Making” in Springer/Nature BMC Medical Informatics and Decision Making (MIDM), SCI impact factor 2.134 (see link below).

INSTRUCTIONS:

See our main conference page: https://cd-make.net

or the special issue page respectively:
https://hci-kdd.org/special-issue-explainable-ai-medical-informatics-decision-making

BACKGROUND:

Explainable AI is not a new field. Actually, the problem of explainability is as old as AI itself, and perhaps a consequence of it. While early expert systems consisted of handcrafted knowledge that enabled reasoning over at least a narrowly well-defined domain, such systems had no learning capabilities and were poor at handling uncertainty when (trying to) solve real-world problems. The big success of current AI solutions and ML algorithms is due to the practical applicability of statistical learning approaches in arbitrarily high-dimensional spaces. Despite their huge successes, their effectiveness is still limited by their inability to “explain” their decisions in a human-understandable and re-traceable way. Even if we understand the underlying mathematical theories, it is complicated and often impossible to get insight into the internal workings of the models, algorithms and tools, and to explain how and why a result was achieved. Future AI needs contextual adaptation, i.e. systems that help to construct explanatory models for solving real-world problems. Here it would be beneficial not to exclude human expertise, but to augment human intelligence with artificial intelligence.
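To make the difficulty described above concrete, the following minimal sketch (purely our illustration; the model, dataset and all names are chosen for demonstration, not part of the workshop programme) distills a black-box ensemble into a shallow decision tree whose if-then rules a human can read – a common post-hoc surrogate technique:

```python
# Illustrative post-hoc surrogate: approximate a black-box model
# with a small, human-readable decision tree.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
X, y = data.data, data.target

# 1) The "black box": accurate, but its internal reasoning is opaque.
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# 2) The surrogate: a depth-limited tree fitted to the black box's
#    *predictions*, so its rules approximate the black box's behaviour.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# 3) Print the surrogate as explicit if-then rules that a domain
#    expert can inspect, at the price of approximation error.
print(export_text(surrogate, feature_names=list(data.feature_names)))
fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
print(f"Surrogate fidelity to black box: {fidelity:.2%}")
```

The printed fidelity score makes the central trade-off explicit: the simpler and more readable the surrogate, the less faithfully it mirrors the black box it is supposed to explain.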

TOPICS:

In line with the general theme of the CD-MAKE conference – augmenting human intelligence with artificial intelligence, where science is to test crazy ideas and engineering is to bring these ideas into business – we foster cross-disciplinary and interdisciplinary work, including but not limited to:

  • Novel methods, algorithms, tools, procedures for supporting explainability in AI/ML
  • Proof-of-concepts and demonstrators of how to integrate explainable AI into workflows and industrial processes
  • Frameworks, architectures, algorithms and tools to support post-hoc and ante-hoc explainability
  • Work on causality machine learning
  • Theoretical approaches of explainability (“What makes a good explanation?”)
  • Philosophical approaches of explainability (“When is it enough – do we have a degree of saturation?”)
  • Towards argumentation theories of explanation and issues of cognition
  • Comparison of human intelligence vs. artificial intelligence (HCI-KDD)
  • Interactive machine learning with human(s)-in-the-loop (crowd intelligence)
  • Explanatory User Interfaces and Human-Computer Interaction (HCI) for explainable AI
  • Novel Intelligent User Interfaces and affective computing approaches
  • Fairness, accountability and trust
  • Ethical aspects and law, legal issues and social responsibility
  • Business aspects of explainable AI
  • Self-explanatory agents and decision support systems
  • Explanation agents and recommender systems
  • Combination of statistical learning approaches with large knowledge repositories (ontologies)

MOTIVATION:

The grand goal of future explainable AI is to make results understandable and transparent and to answer questions of how and why a result was achieved. In fact: “Can we explain how and why a specific result was achieved by an algorithm?” In the future it will be essential not only to answer the question “Which of these animals is a cat?”, but to answer “Why is it a cat?” [YouTube video] – “What are the underlying explanatory facts on which the machine learning algorithm based this decision?” This highly relevant emerging area is important for all application areas, ranging from health informatics [1] to cyber defense [2], [3]. A particular focus is on novel Human-Computer Interaction and intelligent user interfaces for interactive machine learning [4].

[1] Andreas Holzinger, Chris Biemann, Constantinos S. Pattichis & Douglas B. Kell (2017). What do we need to build explainable AI systems for the medical domain? arXiv:1712.09923.
[2] David Gunning (2016). DARPA program on explainable artificial intelligence.
[3] Katharina Holzinger, Klaus Mak, Peter Kieseberg & Andreas Holzinger (2018). Can we trust machine learning results? Artificial intelligence in safety-critical decision support. ERCIM News, 112, 42-43.
[4] Todd Kulesza, Margaret Burnett, Weng-Keen Wong & Simone Stumpf (2015). Principles of explanatory debugging to personalize interactive machine learning. Proceedings of the 20th International Conference on Intelligent User Interfaces (IUI 2015), Atlanta. ACM, 126-137, doi:10.1145/2678025.2701399.
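To ground the “Why is it a cat?” question in runnable form, here is a small, self-contained sketch (again entirely our own illustration; on images one would perturb pixels or superpixels rather than tabular features) that asks a black-box classifier which features drive one specific decision, by neutralising each feature in turn and observing how the predicted probability shifts – the same perturbation idea that underlies model-agnostic explainers such as LIME:

```python
# Illustrative occlusion-style local explanation: which features,
# when neutralised, change the model's confidence on ONE instance?
from sklearn.datasets import load_iris
from sklearn.ensemble import GradientBoostingClassifier

data = load_iris()
X, y = data.data, data.target
model = GradientBoostingClassifier(random_state=0).fit(X, y)

instance = X[0:1]                       # the single decision to explain
predicted = model.predict(instance)[0]
baseline = model.predict_proba(instance)[0, predicted]

# Replace one feature at a time with its dataset mean and measure the
# drop in the predicted class's probability: a larger drop means the
# feature mattered more for THIS particular decision.
means = X.mean(axis=0)
for i, name in enumerate(data.feature_names):
    perturbed = instance.copy()
    perturbed[0, i] = means[i]
    p = model.predict_proba(perturbed)[0, predicted]
    print(f"{name:20s} contribution ~ {baseline - p:+.3f}")
```

The per-feature deltas are exactly the kind of “underlying explanatory facts” asked for above, computed for a single decision rather than for the model as a whole.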

Example: One motivation is the new European General Data Protection Regulation (GDPR, together with ISO/IEC 27001), which entered into force on May 25, 2018, and affects practically all machine learning and artificial intelligence applied to business. For example, it will be difficult to apply black-box approaches for professional use in certain business applications, because they are not re-traceable and are rarely able to explain on demand why a decision has been made.

Note: The GDPR replaces the Data Protection Directive 95/46/EC of 1995. The regulation was adopted on 27 April 2016 and became enforceable on 25 May 2018 after a two-year transition period. Unlike a directive, it does not require national governments to pass any enabling legislation and is thus directly binding – which affects practically all data-driven businesses, and particularly machine learning and AI technology.

WORKSHOP ORGANIZERS:

Randy GOEBEL, University of Alberta, Edmonton, CA
Yoichi HAYASHI, Meiji University, Kawasaki, JP
Katharina HOLZINGER, Secure Business Austria, SBA-Research Vienna, AT
Freddy LECUE, Accenture Technology Labs, Dublin, IE and INRIA Sophia Antipolis, FR
Peter KIESEBERG, Secure Business Austria, SBA-Research Vienna, AT
Andreas HOLZINGER, Medical University Graz, AT

Please send inquiries directly to a.holzinger AT hci-kdd.org

SCIENTIFIC COMMITTEE:

in progress, see also the main scientific committee:
https://cd-make.net/committees

Jose Maria ALONSO, CiTiUS, University of Santiago de Compostela, ES
Christian BAUCKHAGE, Fraunhofer Institute Intelligent Analysis, IAIS, Sankt Augustin, and University of Bonn, DE
Vaishak BELLE, Belle Lab, Centre for Intelligent Systems and their Applications, School of Informatics, University of Edinburgh, UK
Benoit FRENAY, Université de Namur, BE
Enrico BERTINI, New York University, Tandon School of Engineering, US
Tarek R. BESOLD, Cognitive Aspects and Theory of AI, City, University of London, UK
Federico CABITZA, Università degli Studi di Milano-Bicocca, DISCO, Milano, IT
Aldo FAISAL, Department of Computing, Brain and Behaviour Lab, Imperial College London, UK
Bryce GOODMAN, Oxford Internet Institute and San Francisco Bay Area, CA, US
Barbara HAMMER, Machine Learning Group, Bielefeld University, DE
Pim HASELAGER, Donders Institute for Brain, Cognition and Behaviour, Radboud University, NL
Brian Y. LIM, National University of Singapore, SG
Luca LONGO, Knowledge & Data Engineering Group, Trinity College, Dublin, IE
Huamin QU, Human-Computer Interaction Group HKUST VIS, Hong Kong University of Science and Technology, CN
Daniele MAGAZZENI, Trusted Autonomous Systems Hub, King’s College London, UK
Marco Tulio RIBEIRO, Guestrin Group, University of Washington, Seattle, WA, US
Brian RUTTENBERG, Charles River Analytics, Cambridge, MA, US
Gerhard SCHURZ, Düsseldorf Center for Logic and Philosophy of Science, University Düsseldorf, DE
Sameer SINGH, University of California, Irvine (UCI), CA, US
Alison SMITH, University of Maryland, MD, US
Mohan SRIDHARAN, University of Auckland, NZ
Simone STUMPF, City, University of London, UK
Ramya MALUR SRINIVASAN, Fujitsu Labs of America, Sunnyvale, CA, US
Janusz WOJTUSIAK, Machine Learning and Inference Lab, George Mason University, Fairfax, US

NEWS:

2018-10-20 Background image with friendly permission of Michael D. Beckwith, see the marvellous original here:

Canterbury Cathedral was founded in 597 and completely rebuilt between 1070 and 1077.

2018-10-14 Starting to confirm previous experts and inviting new experts to the scientific committee

2018-10-10 Official go from the Springer/Nature BMC journal office for the special issue

2018-08-30 Thank you all for your participation and support;
we hope to see you all again at the end of August 2019 in Canterbury, Kent, UK

HISTORIC NEWS:

2018-08-30 The introduction is available here (preprint, pdf, 835kB):
[GOEBEL et al (2018) Explainable-AI-the-new-42]
The official paper is available via SpringerLink: Randy Goebel, Ajay Chander, Katharina Holzinger, Freddy Lecue, Zeynep Akata, Simone Stumpf, Peter Kieseberg & Andreas Holzinger (2018). Explainable AI: the new 42? Springer Lecture Notes in Computer Science LNCS 11015, Cham: Springer, 295-303, doi:10.1007/978-3-319-99740-7_21.

2018-08-30 The Slides of Randy GOEBEL are available here (with friendly permission of Randy):
https://hci-kdd.org/2018/08/30/explainable-ai-session-keynote-randy-goebel/

2018-08-27 Springer Lecture Notes are available, see:
https://hci-kdd.org/2018/08/27/machine-learning-and-knowledge-extraction-springer-volume-2/

2018-05-30 Our Program is available via:
https://www.ares-conference.eu/agenda/

2018-05-10 Randy GOEBEL from the Alberta Machine Intelligence Institute has agreed to be our session keynote speaker, see:
https://cd-make.net/keynote-speaker-randy-goebel/

2018-05-02 Due to International Workers’ Day, we extend the official deadline to May 7, 2018, to enable a stress-free submission – please enjoy your holidays!

2018-03-21 Please note the deadline for submissions is April 30, 2018; see the authors area:
https://cd-make.net/authors-area/important-dates
(In case you cannot meet the April 30, 2018 deadline and need a few more days, please submit your draft (and indicate that it is a draft) by April 30, 2018, so that we have an overview and can pre-assign reviewers; you will then still have sufficient time to complete your paper.)

2018-02-07 Web site live – starting to invite additional experts for the scientific program committee

RELATED EVENTS:

(additional suggestions welcome – we are dedicated to supporting the international community)

in progress

Workshop on Explainable Smart Systems (EXSS) at ACM IUI, Tokyo, March 11, 2018

Advances on Explainable Artificial Intelligence, part of the 17th International Conference on Information Processing and Management of Uncertainty in Knowledge-Based Systems (IPMU 2018), Cadiz, Spain, June 11-15, 2018

ODD v5.0 – Outlier Detection De-constructed, a full-day workshop organized in conjunction with ACM SIGKDD at KDD 2018 in London, August 20, 2018

Human-Level AI – multi-conference on human-level artificial intelligence, Prague, August 22-25, 2018