Methods of explainable-AI (ex-AI)

Explainable-AI Pythia – the high priestess of the temple of Apollo at Delphi

Welcome Students to the course LV 706.315 !

3 ECTS, summer term 2018

This page is valid as of June 17, 2018, 12:00 CEST
(for the 2019 course go to: https://human-centered.ai/seminar-explainable-ai-2019/)

INTRODUCTION: 6-minute YouTube video on explainable AI

MOTIVATION for this course:

This course “Methods of explainable AI” is a natural offspring of the interactive Machine Learning (iML) courses and the decision-making courses held over the years before. A strong motivation for continuing to study our iML concept [1] – with a human in the loop [2] (see our project page) – is that modern AI/machine learning models (see the difference AI-ML) are often considered to be “black-boxes” [3]: it is difficult to re-enact them and to answer the question of why a certain machine decision has been reached. A serious general drawback is that such models have no explicit declarative knowledge representation and hence have difficulty in generating the required explanatory structures – the context – which considerably limits the achievement of their full potential [4]. Above all, a human expert decision maker (e.g. a medical doctor) is not interested in the internals of a model; she/he wants to retrace a result on demand in a human-understandable way. This calls not only for explainable models, but also for explanation interfaces (see the AK HCI course). AI usability is experiencing a new renaissance in engineering. Interestingly, early AI systems (rule-based systems) were explainable to a certain extent within a well-defined problem space. Therefore this course will also provide background on decision support systems from the early 1970s (e.g. MYCIN, or GAMUTS of Radiology). Last but not least, a saying attributed to Richard FEYNMAN: “If you do not understand it – try to explain it” – and indeed, “explainability” is the core of science.

GOAL of this course

This graduate course follows a research-based teaching (RBT) approach and provides an overview of selected current state-of-the-art methods for making AI transparent, re-traceable, re-enactable, understandable and, consequently, explainable. Note: We speak Python.

BACKGROUND:

Explainability is motivated by the lack of transparency of so-called black-box approaches, which do not foster trust [6] and acceptance of AI in general and of ML in particular. Rising legal and privacy concerns, e.g. the new European General Data Protection Regulation (GDPR, in effect since May 2018), will make black-box approaches difficult to use in business, because they are often not able to explain why a machine decision has been made (see explainable AI).
Consequently, the field of explainable AI has recently been gaining international awareness and interest (see the news blog), because rising legal, ethical, and social aspects make it mandatory to enable a human – on request – to understand and to explain why a machine decision has been made [see Wikipedia on Explainable Artificial Intelligence]. Note: this does not mean that it is always necessary to explain everything – but to be able to explain it if necessary, e.g. for general understanding, for teaching, for learning, for research, in court, or even on demand by a citizen (right to explanation).

TARGET GROUP:

Research students of Computer Science who are interested in knowledge discovery/data mining and in following the idea of iML approaches, i.e. human-in-the-loop learning systems. This is a cross-disciplinary computer science topic and is highly relevant for applications in complex domains such as health, biomedicine, paleontology and biology, and in safety-critical domains, e.g. cyberdefense.

HINT:

If you need a statistics/probability refresher go to the Mini-Course MAKE-Decisions and review the statistics/probability primer:
https://human-centered.ai/mini-course-make-decision-support/

Module 00 – Primer on Probability and Information Science (optional)

Keywords: probability, data, information, entropy measures

Topic 00: Mathematical Notations
Topic 01: Probability Distribution and Probability Density
Topic 02: Expectation and Expected Utility Theory
Topic 03: Joint Probability and Conditional Probability
Topic 04: Independent and Identically Distributed Data IIDD
Topic 05: Bayes and Laplace
Topic 06: Measuring Information: Kullback-Leibler Divergence and Entropy
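
To make Topic 06 concrete, here is a minimal Python sketch (the course language) computing Shannon entropy and the Kullback-Leibler divergence for two small discrete distributions; the distributions are invented for illustration and SciPy is assumed to be available.

```python
# Minimal sketch: Shannon entropy and Kullback-Leibler divergence
# for two discrete distributions (values are illustrative only).
import numpy as np
from scipy.stats import entropy

p = np.array([0.5, 0.3, 0.2])   # "true" distribution P
q = np.array([0.4, 0.4, 0.2])   # approximating distribution Q

# Shannon entropy H(P) in bits
h_p = entropy(p, base=2)

# KL divergence D_KL(P || Q) in bits -- note that it is not symmetric
kl_pq = entropy(p, q, base=2)
kl_qp = entropy(q, p, base=2)

print(f"H(P)         = {h_p:.4f} bits")
print(f"D_KL(P || Q) = {kl_pq:.4f} bits")
print(f"D_KL(Q || P) = {kl_qp:.4f} bits")
```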

Lecture slides 2×2 (10,300 kB): contact lecturer for slide set

Recommended Reading for students:
[1] David J.C. MacKay (2003). Information Theory, Inference, and Learning Algorithms. Cambridge: Cambridge University Press.
Online available: https://www.inference.org.uk/itprnn/book.html
Slides online available: https://www.inference.org.uk/itprnn/Slides.shtml

Module 01 – Introduction

Keywords: HCI-KDD approach > integrative AI/ML, complexity, automatic ML vs. interactive ML

Topic 00: Reflection – follow up from Module 0 – dealing with probability and information
Topic 01: The HCI-KDD approach: Towards an integrative AI/ML ecosystem
Topic 02: The complexity of the application area health informatics
Topic 03: Probabilistic information
Topic 04: Automatic ML
Topic 05: Interactive ML
Topic 06: From interactive ML to explainable AI

Lecture slides 2×2 (26,755 kB): contact lecturer for slide set

Module 02 – Decision Making and Decision Support

Keywords: information, decision, action

Topic 00: Reflection – follow up from Module 1 – introduction
Topic 01: Medical action = Decision making
Topic 02: The underlying principles of intelligence and cognition
Topic 03: Human vs. Computer
Topic 04: Human Information Processing
Topic 05: Probabilistic decision theory
Topic 06: The problem of understanding context
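
As a small illustration of Topic 05 (probabilistic decision theory), the following sketch picks the action that minimises the expected loss under a posterior distribution; all probabilities and loss values below are invented purely for illustration.

```python
# Minimal sketch of probabilistic decision theory: choose the action
# that minimises expected loss under the posterior.
# All probabilities and losses are invented for illustration.
import numpy as np

# Posterior over states of the world, e.g. P(disease present), P(disease absent)
posterior = np.array([0.15, 0.85])

# Loss matrix: rows = actions (treat, do not treat), columns = states
loss = np.array([
    [1.0, 5.0],    # treat:        small cost if present, over-treatment cost if absent
    [50.0, 0.0],   # do not treat: large cost if present, none if absent
])

expected_loss = loss @ posterior          # expected loss per action
best_action = int(np.argmin(expected_loss))

actions = ["treat", "do not treat"]
print("expected loss per action:", dict(zip(actions, expected_loss.round(2))))
print("Bayes-optimal action:", actions[best_action])
```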

Lecture slides 2×2 (31,120 kB): contact lecturer for slide set

Module 03 – From Expert Systems to Explainable AI

Topic 00: Reflection – follow up from Module 02
Topic 01: Decision Support Systems (DSS)
Topic 02: Can computers help to make better decisions?
Topic 03: History of DSS = History of AI
Topic 04: Example: Towards Precision Medicine
Topic 05: Example: Case based Reasoning (CBR)
Topic 06: A few principles of causality
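
To connect with the MYCIN-era rule-based systems mentioned in the motivation above, here is a toy forward-chaining rule engine whose "explanation" is simply the trace of rules that fired; the rules and facts are invented and greatly simplified.

```python
# Toy forward-chaining rule engine in the spirit of early rule-based DSS:
# the "explanation" is the trace of rules that fired.
# Rules and facts are invented for illustration.

RULES = [
    ("R1", {"fever", "cough"}, "suspected_infection"),
    ("R2", {"suspected_infection", "gram_negative"}, "consider_antibiotic_A"),
    ("R3", {"suspected_infection", "penicillin_allergy"}, "avoid_penicillin"),
]

def infer(facts):
    """Forward-chain until no rule adds a new fact; return facts and trace."""
    facts = set(facts)
    trace = []
    changed = True
    while changed:
        changed = False
        for name, conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                trace.append(f"{name}: {sorted(conditions)} -> {conclusion}")
                changed = True
    return facts, trace

facts, explanation = infer({"fever", "cough", "gram_negative"})
print("derived facts:", sorted(facts))
print("explanation (rules fired):")
for step in explanation:
    print("  ", step)
```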

Lecture slides 2×2 (27,177 kB): contact lecturer for slide set

Module 04 – Overview of Explanation Methods and Transparent Machine Learning Algorithms

Keywords: Explainability, Ante-hoc vs. Post-hoc interpretability

Topic 00: Reflection – follow up from Module 03
Topic 01: Global vs. local explainability
Topic 02: Ante-hoc vs. Post-hoc interpretability
Topic 03: Ante-hoc: GAM, S-AOG, Hybrid models, iML
Topic 04: Post-hoc: LIME, BETA, LRP
Topic 05: Making neural networks transparent
Topic 06: Explanation Interfaces
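
A minimal sketch of ante-hoc interpretability (Topics 02–03), assuming scikit-learn is available: a shallow decision tree is interpretable by design, so its learned rule structure can be printed directly – the model itself is the explanation.

```python
# Minimal sketch of ante-hoc interpretability: a shallow decision tree
# is interpretable by design -- its learned rules can be printed as text.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(iris.data, iris.target)

# The model IS the explanation: a human-readable rule set.
print(export_text(tree, feature_names=list(iris.feature_names)))
```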

Lecture slides 2×2 (33,887 kB): contact lecturer for slide set

Module 05 – Selected Methods of explainable-AI Part I

Keywords: LIME, BETA, LRP, Deep Taylor Decomposition, Prediction Difference Analysis

Topic 00: Reflection – follow up from Module 04
Topic 01: LIME (Local Interpretable Model Agnostic Explanations) – Ribeiro et al. (2016) [1]
Topic 02: BETA (Black Box Explanation through Transparent Approximation) – Lakkaraju et al. (2017) [2]
Topic 03: LRP (Layer-wise Relevance Propagation) – Bach et al. (2015) [3]
Topic 04: Deep Taylor Decomposition – Montavon et al. (2017) [4]
Topic 05: Prediction Difference Analysis – Zintgraf et al. (2017) [5]
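
The core idea behind LIME (Topic 01) can be sketched in a few lines: perturb the instance, weight the perturbed samples by their proximity to it, and fit a weighted linear surrogate whose coefficients serve as the local explanation. The following is only a conceptual sketch with NumPy and scikit-learn, not the reference implementation by Ribeiro et al.

```python
# Conceptual sketch of a LIME-style local surrogate (not the reference
# implementation): perturb an instance, weight samples by proximity,
# and fit a weighted linear model as the local explanation.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

X, y = load_breast_cancer(return_X_y=True)
black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

def explain_locally(x, predict_proba, n_samples=2000, kernel_width=2.0):
    """Return per-feature weights of a local linear surrogate around x."""
    rng = np.random.default_rng(0)
    scale = X.std(axis=0)                          # perturbation scale per feature
    Z = x + rng.normal(0.0, scale, size=(n_samples, x.shape[0]))
    target = predict_proba(Z)[:, 1]                # black-box output to mimic
    dist = np.linalg.norm((Z - x) / scale, axis=1)
    weights = np.exp(-(dist ** 2) / (kernel_width ** 2))   # proximity kernel
    surrogate = Ridge(alpha=1.0).fit(Z, target, sample_weight=weights)
    return surrogate.coef_

coefs = explain_locally(X[0], black_box.predict_proba)
top = np.argsort(np.abs(coefs))[::-1][:5]
print("locally most influential features:", top, coefs[top].round(4))
```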

Lecture slides 2×2 (15,521 kB): contact lecturer for slide set

Reading for Students:

[1] Marco Tulio Ribeiro, Sameer Singh & Carlos Guestrin 2016. Model-Agnostic Interpretability of Machine Learning. arXiv:1606.05386.

[2] Himabindu Lakkaraju, Ece Kamar, Rich Caruana & Jure Leskovec 2017. Interpretable and Explorable Approximations of Black Box Models. arXiv:1707.01154.

[3] Sebastian Bach, Alexander Binder, Grégoire Montavon, Frederick Klauschen, Klaus-Robert Müller & Wojciech Samek 2015. On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation. PloS one, 10, (7), e0130140, doi:10.1371/journal.pone.0130140. NOTE: Sebastian BACH is now Sebastian LAPUSCHKIN

[4] Grégoire Montavon, Wojciech Samek & Klaus-Robert Müller 2017. Methods for interpreting and understanding deep neural networks. arXiv:1706.07979.

[5] Luisa M. Zintgraf, Taco S. Cohen, Tameem Adel & Max Welling 2017. Visualizing deep neural network decisions: Prediction difference analysis. arXiv:1702.04595.

Module 06 – Selected Methods of explainable-AI Part II

Topic 00: Reflection – follow up from Module 05
Topic 01: Visualizing Convolutional Neural Nets with Deconvolution – Zeiler & Fergus (2014) [1]
Topic 02: Inverting Convolutional Neural Networks – Mahendran & Vedaldi (2015) [2]
Topic 03: Guided Backpropagation – Springenberg et al. (2015) [3]
Topic 04: Deep Generator Networks – Nguyen et al. (2016) [4]
Topic 05: Testing with Concept Activation Vectors (TCAV) – Kim et al. (2018) [5]
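
The linear-probe step of TCAV (Topic 05) in miniature: a linear classifier separating "concept" activations from random activations yields the concept activation vector (CAV), and the TCAV score is the fraction of inputs whose class gradient points along the CAV. The activations and gradients below are synthetic stand-ins, used purely to show the mechanics.

```python
# Miniature TCAV-style sketch with synthetic activations/gradients:
# 1) fit a linear probe separating concept vs. random activations -> CAV
# 2) TCAV score = fraction of class gradients pointing along the CAV.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
d = 64                                                # activation dimensionality

concept_acts = rng.normal(0.5, 1.0, size=(200, d))    # stand-in: "striped" images
random_acts  = rng.normal(0.0, 1.0, size=(200, d))    # stand-in: random images

probe = LogisticRegression(max_iter=1000).fit(
    np.vstack([concept_acts, random_acts]),
    np.array([1] * 200 + [0] * 200),
)
cav = probe.coef_.ravel()
cav /= np.linalg.norm(cav)                            # concept activation vector

# Stand-in for d(logit of class "zebra") / d(activations) over test inputs.
grads = rng.normal(0.2, 1.0, size=(500, d))

tcav_score = float(np.mean(grads @ cav > 0))          # fraction of positive directional derivatives
print(f"TCAV score (synthetic): {tcav_score:.2f}")
```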

Lecture slides 2×2 (11,944 kB): contact lecturer for slide set

[1] Matthew D. Zeiler & Rob Fergus 2014. Visualizing and understanding convolutional networks. In: Fleet, D., Pajdla, T., Schiele, B. & Tuytelaars, T. (eds.) ECCV, Lecture Notes in Computer Science LNCS 8689. Cham: Springer, pp. 818-833, doi:10.1007/978-3-319-10590-1_53.

[2] Aravindh Mahendran & Andrea Vedaldi. Understanding deep image representations by inverting them. Proceedings of the IEEE conference on computer vision and pattern recognition, 2015. 5188-5196, doi:10.1109/CVPR.2015.7299155.

[3] Jost Tobias Springenberg, Alexey Dosovitskiy, Thomas Brox & Martin Riedmiller 2014. Striving for simplicity: The all convolutional net. arXiv:1412.6806.

[4] Anh Nguyen, Alexey Dosovitskiy, Jason Yosinski, Thomas Brox & Jeff Clune. Synthesizing the preferred inputs for neurons in neural networks via deep generator networks. Advances in Neural Information Processing Systems (NIPS 2016), 2016 Barcelona. 3387-3395.  Read the reviews: https://media.nips.cc/nipsbooks/nipspapers/paper_files/nips29/reviews/1685.html

[5] Been Kim, Martin Wattenberg, Justin Gilmer, Carrie Cai, James Wexler & Fernanda Viegas. Interpretability beyond feature attribution: Quantitative testing with concept activation vectors (tcav). International Conference on Machine Learning, ICML 2018. 2673-2682, Stockholm.

Module 07 – Selected Methods of explainable-AI  Part III

Topic 00: Reflection – follow up from Module 06
Topic 01: Understanding the Model: Feature Visualization – Erhan et al. (2009) [1]
Topic 02: Understanding the Model: Deep Visualization – Yosinski et al. (2015) [2]
Topic 03: Recurrent Neural Networks: cell state analysis – Karpathy et al. (2015) [3]
Topic 04: Fitted Additive Models – Caruana et al. (2015) [4]
Topic 05: Interactive Machine Learning with the human-in-the-loop – Holzinger et al. (2018) [5]
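
A minimal activation-maximisation sketch for feature visualisation (Topic 01), assuming PyTorch: gradient ascent on the input so that a chosen unit responds strongly. The network here is a tiny untrained toy CNN, used only to show the mechanics; real feature visualisation adds priors and regularisers on top of this.

```python
# Minimal activation-maximisation sketch (feature visualisation):
# gradient ascent on the input image to maximise one unit's activation.
# The CNN is a tiny untrained toy network, used only to show the mechanics.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
)

img = (0.1 * torch.randn(1, 3, 64, 64)).requires_grad_()   # start from faint noise
optimizer = torch.optim.Adam([img], lr=0.1)
unit = 5                                                    # channel to maximise

for step in range(200):
    optimizer.zero_grad()
    activation = model(img)[0, unit].mean()                 # mean activation of the chosen channel
    loss = -activation + 1e-4 * img.norm()                  # maximise activation, mild L2 regularisation
    loss.backward()
    optimizer.step()

print("final mean activation of unit", unit, ":", float(model(img)[0, unit].mean()))
```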

Lecture slides 2×2 (16,111 kB): contact lecturer for slide set

Reading for Students:

[1] Dumitru Erhan, Yoshua Bengio, Aaron Courville & Pascal Vincent 2009. Visualizing higher-layer features of a deep network. Technical Report 1341, Departement d’Informatique et Recherche Operationnelle, University of Montreal. [pdf available here]

[2] Jason Yosinski, Jeff Clune, Anh Nguyen, Thomas Fuchs & Hod Lipson 2015. Understanding neural networks through deep visualization. arXiv:1506.06579. Please watch this video: https://www.youtube.com/watch?v=AgkfIQ4IGaM
You can find the code here: https://yosinski.com/deepvis (cool stuff!)

[3] Andrej Karpathy, Justin Johnson & Li Fei-Fei 2015. Visualizing and understanding recurrent networks. arXiv:1506.02078. Code available here: https://github.com/karpathy/char-rnn (awesome!)

[4] Rich Caruana, Yin Lou, Johannes Gehrke, Paul Koch, Marc Sturm & Noemie Elhadad. Intelligible models for healthcare: Predicting pneumonia risk and hospital 30-day readmission. 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD ’15), 2015 Sydney. ACM, 1721-1730, doi:10.1145/2783258.2788613.

[5]  Andreas Holzinger, et al. 2018. Interactive machine learning: experimental evidence for the human in the algorithmic loop. Applied Intelligence, doi:10.1007/s10489-018-1361-5.

Module 08 – Selected Methods of explainable-AI  Part IV

Topic 00: Reflection – follow up from Module 07
Topic 01: Sensitivity Analysis I  – Simonyan et al.  (2013) [1]
Topic 02: Sensitivity Analysis II – Baehrens et al. (2010) [2]
Topic 03: Gradients I: General overview and usefulness for explaining
Topic 04: Gradients II: DeepLIFT – Shrikumar et al. (2017) [4]
Topic 05: Gradients III: Grad-CAM – Selvaraju et al. (2017) [5]
Topic 06: Gradients IV: Integrated Gradients – Sundararajan et al. (2017) [6]
and please read the interesting paper on “Gradient vs. Decomposition” by Montavon et al. (2018) [7]
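
The simplest member of the gradient family (Topic 01, Simonyan-style sensitivity analysis) can be written in a few PyTorch lines: the saliency map is the absolute gradient of a class score with respect to the input pixels. Model and input below are toy stand-ins, only to show the mechanics.

```python
# Minimal gradient-saliency sketch (sensitivity analysis):
# saliency = |d score_of_class / d input pixels|.
# Untrained toy classifier and random input, only to show the mechanics.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(8, 10),
)

x = torch.randn(1, 3, 32, 32, requires_grad=True)
scores = model(x)
target_class = scores.argmax(dim=1).item()

scores[0, target_class].backward()                 # gradient of the chosen class score
saliency = x.grad.abs().max(dim=1).values[0]       # max over colour channels -> 32x32 map

print("saliency map shape:", tuple(saliency.shape))
print("most influential pixel (row, col):", divmod(int(saliency.argmax()), saliency.shape[1]))
```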

Lecture slides 2×2 (11,789 kB): contact lecturer for slide set

Reading for Students:

[1] Karen Simonyan, Andrea Vedaldi & Andrew Zisserman 2013. Deep inside convolutional networks: Visualising image classification models and saliency maps. arXiv:1312.6034.

[2] David Baehrens, Timon Schroeter, Stefan Harmeling, Motoaki Kawanabe, Katja Hansen & Klaus-Robert Müller 2010. How to explain individual classification decisions. Journal of Machine Learning Research, 11, (Jun), 1803-1831.
https://www.jmlr.org/papers/v11/baehrens10a.html

[3]

[4] Avanti Shrikumar, Peyton Greenside & Anshul Kundaje 2017. Learning important features through propagating activation differences. arXiv:1704.02685.
https://github.com/kundajelab/deeplift
Youtube Intro: https://www.youtube.com/watch?v=v8cxYjNZAXc&list=PLJLjQOkqSRTP3cLB2cOOi_bQFw6KPGKML

[5] Ramprasaath R. Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh & Dhruv Batra. Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization. ICCV, 2017. 618-626.

Module CA – Causality learning

Keywords: Causality, Graphical Causal Models, Bayesian Networks, Directed Acyclic Graphs

Topic 01: Making inferences from observed and unobserved variables and reasoning under uncertainty [1]
Topic 02: Factuals, Counterfactuals [2], Counterfactual Machine Learning and Causal Models [3]
Topic 03: Probabilistic Causality Examples
Topic 04: Causality in time series (Granger Causality)
Topic 05: Psychology of causation
Topic 06: Causal Inference in Machine Learning
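
A tiny simulation to make the correlation-vs-causation distinction tangible: a confounder Z drives both X and Y, so X and Y are correlated observationally, although intervening on X (setting it independently of Z) shows no effect on Y. The structure and the numbers are invented for illustration.

```python
# Tiny simulation: confounding vs. intervention (do-operator intuition).
# Z -> X and Z -> Y, but X has no causal effect on Y. Numbers are invented.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Observational world: a common cause Z drives both X and Y
z = rng.normal(size=n)
x_obs = 2.0 * z + rng.normal(size=n)
y = 3.0 * z + rng.normal(size=n)          # note: y does not depend on x at all

# Interventional world: do(X) -- X is set independently of Z
x_do = rng.normal(size=n)
y_do = 3.0 * z + rng.normal(size=n)

print("observational corr(X, Y):     ", round(float(np.corrcoef(x_obs, y)[0, 1]), 3))   # clearly non-zero
print("interventional corr(do(X), Y):", round(float(np.corrcoef(x_do, y_do)[0, 1]), 3)) # approximately zero
```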

Lecture slides 2×2 (15,544 kB): contact lecturer for slide deck

Reading for students:

[1] Judea Pearl 1988. Evidential reasoning under uncertainty. In: Shrobe, Howard E. (ed.) Exploring artificial intelligence. San Mateo (CA): Morgan Kaufmann, pp. 381-418.

[2] Matt J. Kusner, Joshua Loftus, Chris Russell & Ricardo Silva. Counterfactual fairness. In: Guyon, Isabelle, Luxburg, Ulrike Von, Bengio, Samy, Wallach, Hanna, Fergus, Rob & Vishwanathan, S.V.N., eds. Advances in Neural Information Processing Systems 30 (NIPS 2017), 2017. 4066-4076.

[3] Judea Pearl 2009. Causality: Models, Reasoning, and Inference (2nd Edition), Cambridge, Cambridge University Press.

[4]  Judea Pearl 2018. Theoretical Impediments to Machine Learning With Seven Sparks from the Causal Revolution. arXiv:1801.04016.

Module TE – Testing and Evaluation of Machine Learning Algorithms

Keywords: performance, metrics, error, accuracy

Topic 01: Test data and training data quality
Topic 02: Performance measures (confusion matrix, ROC, AUC)
Topic 03: Hypothesis testing and estimating
Topic 04: Comparison of machine learning algorithms
Topic 05: The “no-free-lunch” theorem
Topic 06: Measuring beyond accuracy (simplicity, scalability, interpretability, learnability, …)
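
A short scikit-learn sketch for Topic 02 (performance measures): confusion matrix and ROC-AUC for a simple classifier on a synthetic binary classification dataset.

```python
# Minimal sketch for performance measures: confusion matrix and ROC-AUC
# on a synthetic binary classification problem.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix, roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

y_pred = clf.predict(X_test)
y_score = clf.predict_proba(X_test)[:, 1]

print("confusion matrix:\n", confusion_matrix(y_test, y_pred))
print("ROC-AUC:", round(roc_auc_score(y_test, y_score), 3))
```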

Lecture slides 2×2 (12,756 kB): contact lecturer for slide set

Reading for students:

Module HC – Methods for measuring Human Intelligence

Keywords: performance, metrics, error, accuracy

Topic 01: Fundamentals to measure and evaluate human intelligence [1]
Topic 02: Low cost biometric technologies: 2D/3D cameras, eye-tracking, heart sensors
Topic 03: Advanced biometric technologies: EMG/ECG/EOG/PPG/GSR
Topic 04: Thinking-Aloud Technique
Topic 05: Microphone/Infrared sensor arrays
Topic 06: Affective computing: measuring emotion and stress [3]

Lecture slides 2×2 (17,111 kB): contact lecturer for slide set

Reading for students:

[1] José Hernández-Orallo 2017. The measure of all minds: evaluating natural and artificial intelligence, Cambridge University Press, doi:10.1017/9781316594179. Book Website: https://allminds.org

[2] Andrew T. Duchowski 2017. Eye tracking methodology: Theory and practice. Third Edition, Cham, Springer, doi:10.1007/978-3-319-57883-5.

[3] Christian Stickel, Martin Ebner, Silke Steinbach-Nordmann, Gig Searle & Andreas Holzinger 2009. Emotion Detection: Application of the Valence Arousal Space for Rapid Biological Usability Testing to Enhance Universal Access. In: Stephanidis, Constantine (ed.) Universal Access in Human-Computer Interaction. Addressing Diversity, Lecture Notes in Computer Science, LNCS 5614. Berlin, Heidelberg: Springer, pp. 615-624, doi:10.1007/978-3-642-02707-9_70.

Bonus Module  – Vision

Keywords: human vision, visual system, seeing, perceiving, visual cognition

Topic 01: Visual attention
Topic 02: Visual Psychophysics
Topic 03: Visual Search
Topic 04: Attentive User Interfaces and Usability
Topic 05: Visual Analytics

Lecture slides 2×2 (8,776 kB): contact lecturer for slide set

Reading for students:

Module TA – Trust and Acceptance

Keywords: trust, acceptance,

Topic 01:
Topic 02:
Topic 03:
Topic 04:
Topic 05:

Lecture slides 2×2 (4,678 kB): contact lecturer for slide set

Reading for students:

Module TA – The Theory of Explanations

Keywords: Explanation

Topic 01: What is a good explanation?
Topic 02: Explaining Explanations
Topic 03: The limits of explainability [2]
Topic 04: How to measure the value of an explanation
Topic 05: Practical Examples from the medical domain

Lecture slides 2×2 (5,914 kB): contact lecturer for slide set

Reading for students:

[1] Zachary C. Lipton 2016. The mythos of model interpretability. arXiv:1606.03490.
[2] https://www.media.mit.edu/articles/the-limits-of-explainability/

Module ET – Ethical, Legal and Social Issues of Explainable AI

Keywords: Law, Ethics, Society, Governance, Compliance, Fairness, Accountability, Transparency

Topic 01: Definitions Automatic-Automated-Autonomous:
Human-out-of-loop, Human-still-in-control, Human-in-the-Loop, Computer-in-the-Loop
Topic 02: Legal accountability and Moral dilemmas
Topic 03: Ethical Algorithms and the proof of explanations (truth vs. trust)
Topic 04: Responsible AI [2]
Topic 05: Explaining Explanations and the GDPR

Lecture slides 2×2 (7,689 kB): contact lecturer for slide deck

Reading for students:

[1] A very valuable resource can be found at the Future of Privacy Forum:
https://fpf.org/artificial-intelligence-and-machine-learning-ethics-governance-and-compliance-resources/

[2] Ronald Stamper 1988. Pathologies of AI: Responsible use of artificial intelligence in professional work. AI & society, 2, (1), 3-16, doi: 10.1007/BF01891439.

References from own work (references to related work will be given within the course):

[1]          Andreas Holzinger, Chris Biemann, Constantinos S. Pattichis & Douglas B. Kell (2017). What do we need to build explainable AI systems for the medical domain? arXiv:1712.09923. https://arxiv.org/abs/1712.09923

[2]         Andreas Holzinger, Bernd Malle, Peter Kieseberg, Peter M. Roth, Heimo Müller, Robert Reihs & Kurt Zatloukal (2017). Towards the Augmented Pathologist: Challenges of Explainable-AI in Digital Pathology. arXiv:1712.06657. https://arxiv.org/abs/1712.06657

[3]         Andreas Holzinger, Markus Plass, Katharina Holzinger, Gloria Cerasela Crisan, Camelia-M. Pintea & Vasile Palade (2017). A glass-box interactive machine learning approach for solving NP-hard problems with the human-in-the-loop. arXiv:1708.01104.

[4]          Andreas Holzinger (2016). Interactive Machine Learning for Health Informatics: When do we need the human-in-the-loop? Brain Informatics, 3, (2), 119-131, doi:10.1007/s40708-016-0042-6.

[5]          Andreas Holzinger (2018). Explainable AI (ex-AI). Informatik-Spektrum, 41, (2), 138-143, doi:10.1007/s00287-018-1102-5.
https://link.springer.com/article/10.1007/s00287-018-1102-5

[6]          Katharina Holzinger, Klaus Mak, Peter Kieseberg & Andreas Holzinger 2018. Can we trust Machine Learning Results? Artificial Intelligence in Safety-Critical Decision Support. ERCIM News, 112, (1), 42-43.
https://ercim-news.ercim.eu/en112/r-i/can-we-trust-machine-learning-results-artificial-intelligence-in-safety-critical-decision-support

[7]  Andreas Holzinger, et al. 2018. Interactive machine learning: experimental evidence for the human in the algorithmic loop. Applied Intelligence, doi:10.1007/s10489-018-1361-5.

Mini Glossary:

Ante-hoc Explainability (AHE) := such models are interpretable by design, e.g. glass-box approaches; typical examples include linear regression, decision trees/lists, random forests, Naive Bayes and fuzzy inference systems, as well as GAMs, Stochastic AOGs, and deep symbolic networks; they have a long tradition, can be designed from expert knowledge or from data, and are useful as a framework for the interaction between human knowledge and hidden knowledge in the data.

BETA := Black Box Explanation through Transparent Approximation, developed by Lakkaraju, Kamar, Caruana & Leskovec (2017); it learns two-level decision sets, in which each rule explains part of the model behaviour.

Explainability := motivated by the opacity of so-called “black-box” approaches, it is the ability to provide an explanation of why a machine decision has been reached (e.g. why the deep network recognized a cat). Finding an appropriate explanation is difficult, because this requires understanding the context and providing a description of the causality and consequences of a given fact. (German: Erklärbarkeit; siehe auch: Verstehbarkeit, Nachvollziehbarkeit, Zurückverfolgbarkeit, Transparenz)

Explanation := a set of statements that describes a given set of facts and clarifies the causality, context and consequences thereof; it is a core topic of knowledge discovery, involving “why” questions (“Why is this a cat?”). (German: Erklärung, Begründung)

Explanatory power := the ability of a set of hypotheses to effectively explain the subject matter it pertains to (opposite: explanatory impotence).

European General Data Protection Regulation (EU GDPR) := Regulation (EU) 2016/679 – see EUR-Lex 32016R0679; it will make black-box approaches difficult to use, because they are often not able to explain why a decision has been made (see explainable AI).

Interactive Machine Learning (iML) := machine learning algorithms which can interact with agents – some of them human – and can optimize their learning behaviour through this interaction. Holzinger, A. 2016. Interactive Machine Learning for Health Informatics: When do we need the human-in-the-loop? Brain Informatics (BRIN), 3, (2), 119-131.

Inverse Probability := an older term for the probability distribution of an unobserved variable; it was described by De Morgan (1837), in reference to Laplace’s (1774) method of probability.

Post-hoc Explainability (PHE) := such methods are designed to interpret black-box models; they provide local explanations for a specific decision and can re-enact it on request; typical examples include LIME, BETA, LRP, Local Gradient Explanation Vectors, prediction decomposition, or simply feature selection.

Preference learning (PL) := concerns problems in learning to rank, i.e. learning a predictive preference model from observed preference information, e.g. with label ranking, instance ranking, or object ranking.  Fürnkranz, J., Hüllermeier, E., Cheng, W. & Park, S.-H. 2012. Preference-based reinforcement learning: a formal framework and a policy iteration algorithm. Machine Learning, 89, (1-2), 123-156.

Multi-Agent Systems (MAS) := collections of several independent agents, which can also be a mixture of computer agents and human agents. An excellent pointer to the latter is: Jennings, N. R., Moreau, L., Nicholson, D., Ramchurn, S. D., Roberts, S., Rodden, T. & Rogers, A. 2014. On human-agent collectives. Communications of the ACM, 80-88.

Transfer Learning (TL) := the ability of an algorithm to recognize and apply knowledge and skills learned in previous tasks to novel tasks or new domains which share some commonality. Central question: given a target task, how do we identify the commonality between this task and previous tasks, and transfer the knowledge from the previous tasks to the target one? Pan, S. J. & Yang, Q. 2010. A Survey on Transfer Learning. IEEE Transactions on Knowledge and Data Engineering, 22, (10), 1345-1359, doi:10.1109/tkde.2009.191.

Pointers:

Time Line of relevant events for interactive Machine Learning (iML):

According to John Launchbury of DARPA (watch his excellent video on YouTube), three waves of AI can be distinguished:

Wave 1: Handcrafted Knowledge – enables reasoning, explainability and re-traceability over narrowly defined problems; this is what we call classic AI, and it consists mostly of rule-based systems.
Wave 2: Statistical Learning – black-box models with no contextual capability and minimal reasoning ability; this wave needs big data, but its probabilistic learning models can cope with stochastic and non-deterministic problems.
Wave 3: Contextual Adaptation – systems may construct contextual explanatory models for classes of real-world phenomena, and glass-box models allow us to re-enact decisions, i.e. to answer the question of why a machine decision has been reached. This is what humans can do very well – and thus, if we want to excel in this area, we have to understand the underlying principles of intelligence.

Terms (incomplete)

Dimension = n attributes which jointly describe a property.

Features = any measurements, attributes or traits representing the data. Features are key for learning and understanding. Andrew Ng emphasizes that machine learning is mostly feature engineering.

Reals = numbers expressible as finite/infinite decimals.

Regression = predicting the value of a random variable y from a measurement x.

Reinforcement learning = adaptive control, i.e. to learn how to (re-)act in a given environment, given delayed/ non-deterministic rewards.  Human learning is mostly reinforcement learning.

Historic People (incomplete)

Bayes, Thomas (1702-1761) gave a straightforward definition of probability [Wikipedia]

Laplace, Pierre-Simon, Marquis de (1749-1827) developed the Bayesian interpretation of probability [Wikipedia]

Price, Richard (1723-1791) edited and commented on the work of Thomas Bayes in 1763 [Wikipedia]

Tukey, John Wilder (1915-2000) suggested in 1962, together with Frederick Mosteller, the name “data analysis” for the computational statistical sciences, which much later became known as data science [Wikipedia]

Antonyms (incomplete)

big data sets < > small data sets

certain <> uncertain

correlation < > causality

comprehensible < > incomprehensible

confident <> doubtful

discriminative < > generative

explainable <> obscure

Frequentist < > Bayesian

Independent and identically distributed data (IID) <> non-independent and identically distributed data (non-IID)

intelligible <> unintelligible

legitimate <> illegitimate

low dimensional < > high dimensional

underfitting < > overfitting

parametric < > non-parametric

realistic <> unrealistic

reliable <> unreliable

supervised < > unsupervised

sure <> unsure

transparent < > opaque

trustworthy <> untrustworthy

truthful <> untruthful

 

List of Abbreviations:

AHE := Ante-hoc Explainability (interpretable by design)

BETA := Black Box Explanation through Transparent Approximation, developed by Lakkaraju, Kamar, Caruana & Leskovec (2017); it learns two-level decision sets, in which each rule explains part of the model behaviour.

ex-AI := Explainable AI

EU GDPR := European General Data Protection Regulation (EU GDPR),  Regulation EU 2016/679

iML := Interactive Machine Learning (iML) according to Holzinger (2016)

PHE := Post-hoc Explainability (for interpreting black-box models)

PL := Preference learning

MAS := Multi-Agent Systems

TL := Transfer Learning