Special Issue
Springer/Nature BMC
Medical Informatics and Decision Making

This page is current as of 16.11.2018 08:00 CEST

Call for Papers – Explainable AI
in Medical Informatics & Decision Making

https://www.biomedcentral.com/collections/explainableai

INTRODUCTION:

Based on our successful 1st international workshop on explainable AI at our IFIP CD-MAKE 2018 conference (see Springer Lecture Notes in Computer Science, volume LNCS 11015), we are launching this call for papers for a special issue on explainable AI in Springer/Nature BMC Medical Informatics and Decision Making (MIDM), with the possibility to present accepted papers at our next session on explainable AI during the CD-MAKE 2019 conference in Canterbury/Kent, UK, from August 26 to August 29, 2019.

We want to inspire cross-domain experts interested in artificial intelligence/machine learning to stimulate research, engineering and evaluation in, around and for explainable AI – towards making machine decisions transparent, re-enactable, comprehensible, interpretable and thus explainable, as well as re-traceable and reproducible; the latter is the cornerstone of scientific research per se!

SCHEDULE:

Paper submission is open on a rolling basis from now until March 30, 2019 at the latest

Notification of acceptance approximately four weeks after submission, by April 30, 2019 at the latest

Final version due eight weeks after acceptance, by June 30, 2019 at the latest

BACKGROUND:

Explainable AI is NOT a new field. Actually, the problem of explainability is as old as AI and may even be a result of AI itself. While early expert systems consisted of handcrafted knowledge that enabled reasoning over at least a narrowly well-defined domain, such systems had no learning capabilities and were poor at handling uncertainty when (trying to) solve real-world problems. The big success of current AI solutions and ML algorithms is due to the practical applicability of statistical learning approaches in arbitrarily high-dimensional spaces. Despite their huge successes, their effectiveness is still limited by their inability to "explain" their decisions in a human-understandable and re-traceable way. Even if we understand the underlying mathematical theories, it is complicated and often impossible to gain insight into the internal workings of the models, algorithms and tools and to explain how and why a result was achieved. Future AI needs contextual adaptation, i.e. systems that help to construct explanatory models for solving real-world problems. Here it would be beneficial not to exclude human expertise, but to augment human intelligence with artificial intelligence.

TOPICS:

We foster cross-disciplinary and interdisciplinary work, including but not limited to:

  • Novel methods, algorithms, tools for supporting explainable AI
  • Proof-of-concepts and demonstrators of how to integrate explainable AI into workflows
  • Frameworks, architectures, algorithms and tools to support post-hoc and ante-hoc explainability and causal machine learning
  • Theoretical approaches of explainability (“What is a good explanation?”)
  • Towards argumentation theories of explanation and issues of cognition
  • Comparison of human intelligence vs. artificial intelligence (HCI-KDD)
  • Interactive machine learning with human(s)-in-the-loop (crowd intelligence)
  • Explanation User Interfaces and Human-Computer Interaction (HCI) for explainable AI
  • Novel Intelligent User Interfaces and affective computing approaches
  • Fairness, accountability and trust
  • Ethical aspects, law and social responsibility
  • Business aspects of explainable AI
  • Self-explanatory agents and decision support systems
  • Explanation agents and recommender systems
  • Combination of statistical learning approaches with large knowledge repositories (ontologies)

MOTIVATION:

The grand goal of future explainable AI is to make results understandable and transparent and to answer questions of how and why a result was achieved. In short: “Can we explain how and why a specific result was achieved by an algorithm?” Here is a short video introducing the topic: https://goo.gl/dB9heh

Example: One motivation is the new European General Data Protection Regulation (GDPR, together with ISO/IEC 27001), in effect since May 25, 2018, which affects practically all machine learning and artificial intelligence applied to business. For example, it will be difficult to apply black-box approaches for professional use in certain business applications, because they are not re-traceable and are rarely able to explain on demand why a decision has been made.

Note: The GDPR replaces the Data Protection Directive 95/46/EC of 1995. The regulation was adopted on 27 April 2016 and became enforceable on 25 May 2018 after a two-year transition period; unlike a directive, it does not require national governments to pass any enabling legislation and is thus directly binding – which affects practically all data-driven businesses and particularly machine learning and AI technology.

EDITORS of the special issue:

Andreas HOLZINGER, Holzinger Group HCI-KDD, Institute for Medical Informatics/Statistics, Medical University Graz, AT

Randy GOEBEL, Alberta Machine Intelligence Institute (amii), University of Alberta, Edmonton, CA

Yoichi HAYASHI, Artificial Intelligence (AI) Lab,  Meiji University, Kawasaki, JP

Freddy LECUE, Accenture Technology Labs, Dublin, IE and INRIA Sophia Antipolis, FR

INQUIRIES:

Please direct inquiries to   a.holzinger AT hci-kdd.org

SCIENTIFIC PROGRAMME COMMITTEE:

tba.

see the workshop page:
https://hci-kdd.org/make-explainable-artificial-intelligence-2019

INSTRUCTIONS FOR REVIEWERS:

Each paper will be assigned a minimum of two reviewers to ensure the highest possible quality. Reviewers are asked to provide detailed and constructive comments that not only help the editors with decision making, but also help the authors improve their manuscript, aiming at a clear benefit for the readers of the journal. Reviewers are encouraged to provide references to substantiate their comments. Here are the instructions for reviewers > Guide for BMC Bioinformatics reviewers

INSTRUCTIONS FOR AUTHORS:

All submissions must follow the > Instructions for authors – BMC Appendix B. We encourage authors to submit original research following the guidelines and requirements of BMC. All articles submitted to this special issue must be based on original research and must comply with the BMC policy on duplicate publication: http://www.biomedcentral.com/about/duplicatepublication

https://bmcmedinformdecismak.biomedcentral.com/submission-guidelines/preparing-your-manuscript#preparing+main+manuscript+text

Authors are encouraged to use LaTeX; an Overleaf template is available here:

ACCEPTANCE RATE:

We target an overall acceptance rate of approximately 20 %, aiming for around 8 papers plus 1 editorial and 1 tutorial, i.e. approximately 10 papers in total. However, we will focus on quality rather than quantity, so the final number may be lower or higher, depending on the quality of the submissions received.

 

 
