LV 706.315 Mini Course Methods of explainable-AI

Summer term 2018 (date and venue tba.)

Motivation for this lecture:

A major motivation for continuing to study our interactive Machine Learning (iML) concept [1] with a human in the loop [2] (see our project page) is that modern AI/ML models (see the difference AI-ML) are often considered to be “black boxes” [3], which is not quite true. A serious drawback, however, is that such models have no explicit declarative knowledge representation and hence have difficulty generating the required explanatory structures, i.e. the context, which considerably limits the achievement of their full potential [4].

Goal of this lecture:

This graduate course follows a research-based teaching (RBT) approach and provides an overview of current state-of-the-art methods for making machine learning models explainable, transparent and re-enactable.

Background:

Explainability is motivated by the lack of transparency of so-called black-box approaches, which do not foster trust in and acceptance of AI generally and ML specifically. Rising legal and privacy concerns, e.g. the new European General Data Protection Regulation (which comes into effect in May 2018), will make black-box approaches difficult to use in business, because they are often unable to explain why a decision has been made (see explainable AI).
Consequently, the field of explainable AI is emerging, because rising legal, ethical, and social aspects make it mandatory to enable a human, on request, to understand why a machine decision has been made, i.e. to make machine decisions re-traceable [see Wikipedia on Explainable Artificial Intelligence]. (Note: this does not mean that it is always necessary to explain everything, but to be able to explain it if necessary, e.g. for general understanding, for teaching, for learning, for research, in court, or even on demand by a citizen.)

Target Group:

Research students of Computer Science who are interested in knowledge discovery/data mining following the idea of iML approaches, i.e. human-in-the-loop learning systems. This is a cross-disciplinary computer science topic and highly relevant for applications in complex domains such as health, biomedicine, paleontology, biology, etc.

Keywords:

Interactive Machine Learning, explainable-AI

[1]          Holzinger, A. 2016. Interactive Machine Learning for Health Informatics: When do we need the human-in-the-loop? Brain Informatics, 3, (2), 119-131, doi:10.1007/s40708-016-0042-6.

[2]          Holzinger, A., Plass, M., Holzinger, K., Crisan, G. C., Pintea, C.-M. & Palade, V. 2017. A glass-box interactive machine learning approach for solving NP-hard problems with the human-in-the-loop. arXiv:1708.01104.

[3]          Lipton, Z. C. 2016. The mythos of model interpretability. arXiv:1606.03490.

[4]          Bologna, G. & Hayashi, Y. 2017. Characterization of Symbolic Rules Embedded in Deep DIMLP Networks: A Challenge to Transparency of Deep Learning. Journal of Artificial Intelligence and Soft Computing Research, 7, (4), 265-286, doi:10.1515/jaiscr-2017-0019.

Some Quick Explanations:

Active Learning (AL) := selecting training samples so as to minimize loss on future cases; a learner must take actions to gain information and has to decide which actions will provide the information that optimally minimizes future loss. The basic idea goes back to Fedorov, V. (1972). Theory of Optimal Experiments. New York: Academic Press. According to Sanjoy Dasgupta the frontier of active learning is mostly unexplored, and except for a few specific cases we do not have a clear sense of how much active learning can reduce label complexity: whether by just a constant factor, or polynomially, or exponentially. The fundamental statistical and algorithmic challenges involved, along with huge practical application possibilities, make AL a very important area for future research.
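
To make the idea concrete, here is a minimal sketch of pool-based active learning with uncertainty sampling; the dataset, model choice, and all parameters are illustrative assumptions (Python with NumPy and scikit-learn), not part of the course material:

    # Pool-based active learning with uncertainty sampling (illustrative sketch).
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression

    X, y = make_classification(n_samples=500, random_state=0)
    labeled = list(range(10))                  # small initial labeled set
    pool = [i for i in range(500) if i not in labeled]

    model = LogisticRegression()
    for _ in range(20):                        # 20 query rounds
        model.fit(X[labeled], y[labeled])
        proba = model.predict_proba(X[pool])
        # Query the pool sample the model is least certain about,
        # i.e. the smallest margin between the two class probabilities.
        margins = np.abs(proba[:, 0] - proba[:, 1])
        query = pool[int(np.argmin(margins))]
        labeled.append(query)                  # here an oracle supplies y[query]
        pool.remove(query)

The point of the sketch is the query-selection step: instead of labeling data at random, the learner actively asks for the labels it expects to be most informative.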

Interactive Machine Learning (iML) := machine learning algorithms that can interact with agents, some of which may be human, and can optimize their learning behaviour through this interaction. Holzinger, A. 2016. Interactive Machine Learning for Health Informatics: When do we need the human-in-the-loop? Brain Informatics (BRIN), 3, (2), 119-131.
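
In an iML setting, the oracle in an active-learning loop such as the one sketched above can be a human agent. A toy stand-in (the function name and console interface are purely illustrative):

    # Hypothetical human oracle for a human-in-the-loop labeling step;
    # a real iML system would use a proper UI instead of console input.
    def human_oracle(sample_id):
        return int(input(f"Please label sample {sample_id} (0 or 1): "))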

Preference Learning (PL) := concerns problems in learning to rank, i.e. learning a predictive preference model from observed preference information, e.g. with label ranking, instance ranking, or object ranking. Fürnkranz, J., Hüllermeier, E., Cheng, W. & Park, S.-H. 2012. Preference-based reinforcement learning: a formal framework and a policy iteration algorithm. Machine Learning, 89, (1-2), 123-156.
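
A common reduction, sketched here under the assumption of an object-ranking task with a latent linear utility (all data and names are illustrative), turns pairwise preferences into a binary classification problem on feature differences:

    # Pairwise preference learning: object ranking via classification
    # of feature differences (illustrative sketch).
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 5))              # 100 objects, 5 features
    w_true = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
    utility = X @ w_true                       # latent "true" preferences

    # Training pairs: x_i - x_j, labeled 1 iff object i is preferred to j.
    pairs, labels = [], []
    for _ in range(500):
        i, j = rng.integers(0, 100, size=2)
        if i != j:
            pairs.append(X[i] - X[j])
            labels.append(int(utility[i] > utility[j]))

    clf = LogisticRegression().fit(np.array(pairs), labels)
    # clf.decision_function now scores objects consistently with the
    # observed preferences, so sorting by it yields a predicted ranking.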

Reinforcement Learning (RL) := the study of how an agent can learn from a series of reinforcements (success/rewards or failure/punishments). A must-read is Kaelbling, L. P., Littman, M. L. & Moore, A. W. 1996. Reinforcement learning: A survey. Journal of Artificial Intelligence Research, 4, 237-285.
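
A minimal illustration of the idea is tabular Q-learning on a toy chain environment; the environment, rewards, and constants below are illustrative assumptions, not taken from the survey:

    # Tabular Q-learning on a 5-state chain (illustrative sketch).
    import numpy as np

    n_states, n_actions = 5, 2            # actions: 0 = left, 1 = right
    Q = np.zeros((n_states, n_actions))
    alpha, gamma, eps = 0.1, 0.9, 0.1     # learning rate, discount, exploration
    rng = np.random.default_rng(0)

    for episode in range(200):
        s = 0
        while s != n_states - 1:          # rightmost state is terminal
            # epsilon-greedy action selection, with random tie-breaking
            if rng.random() < eps or Q[s, 0] == Q[s, 1]:
                a = int(rng.integers(n_actions))
            else:
                a = int(np.argmax(Q[s]))
            s_next = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
            r = 1.0 if s_next == n_states - 1 else 0.0  # reward only at the goal
            # standard Q-learning update from the received reinforcement
            Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
            s = s_next

After training, acting greedily with respect to Q moves the agent straight to the rewarded goal state; the agent has learned purely from delayed reinforcements.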

Multi-Agent Systems (MAS) := collections of several independent agents, possibly a mixture of computer agents and human agents. An excellent pointer to the latter is: Jennings, N. R., Moreau, L., Nicholson, D., Ramchurn, S. D., Roberts, S., Rodden, T. & Rogers, A. 2014. On human-agent collectives. Communications of the ACM, 57, (12), 80-88.

Transfer Learning (TL) := the ability of an algorithm to recognize and apply knowledge and skills learned in previous tasks to novel tasks or new domains which share some commonality. Central question: given a target task, how do we identify the commonality between the task and previous tasks, and transfer the knowledge from the previous tasks to the target one? Pan, S. J. & Yang, Q. 2010. A Survey on Transfer Learning. IEEE Transactions on Knowledge and Data Engineering, 22, (10), 1345-1359, doi:10.1109/tkde.2009.191.
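
One simple transfer pattern, sketched here purely to illustrate the idea (the two synthetic tasks below share only their feature space, and PCA stands in for any transferable representation), is to learn a representation on a data-rich source task and reuse it on a target task with few labels:

    # Feature-reuse transfer learning (illustrative sketch).
    from sklearn.datasets import make_classification
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LogisticRegression

    # Source task: plenty of data to learn a shared representation.
    Xs, ys = make_classification(n_samples=1000, n_features=50, random_state=0)
    rep = PCA(n_components=10).fit(Xs)     # the transferable component

    # Target task: only a few labeled samples in the same feature space.
    Xt, yt = make_classification(n_samples=30, n_features=50, random_state=1)
    clf = LogisticRegression().fit(rep.transform(Xt), yt)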

Pointers:

NIPS 2017 Symposium “Interpretable Machine Learning” (December 7, 2017)
https://arxiv.org/html/1711.09889
organized by
Andrew G. WILSON, Cornell University
Jason YOSINSKI, Uber AI Labs
Patrice SIMARD, Microsoft Research
Rich CARUANA, Microsoft Research
William HERLANDS, Carnegie Mellon University

Time Line of relevant events for interactive Machine Learning (iML):

1950 Reinforcement Learning: Alan Turing (1912-1954) discusses RL in his paper “Computing Machinery and Intelligence”, Mind, Volume 59, Issue 236, October 1950, pp. 433-460, doi:10.1093/mind/LIX.236.433

2000 Utility Theory:

Glossary (incomplete)

Dimension = n attributes which jointly describe a property.

Features = any measurements, attributes or traits representing the data. Features are key for learning and understanding.

Reals = numbers expressible as finite/infinite decimals

Regression = predicting the value of a random variable y from a measurement x.

Reinforcement learning = adaptive control, i.e. learning how to (re-)act in a given environment, given delayed/nondeterministic rewards. Much of human learning can be regarded as reinforcement learning.

Historic People (incomplete)

Bayes, Thomas (1702-1761) gave a straightforward definition of probability [Wikipedia]

Laplace, Pierre-Simon, Marquis de (1749-1827) developed the Bayesian interpretation of probability [Wikipedia]

Price, Richard (1723-1791) edited and commented on the work of Thomas Bayes in 1763 [Wikipedia]

Tukey, John Wilder (1915-2000) suggested in 1962, together with Frederick Mosteller, the name “data analysis” for computational statistical sciences, which much later became known as data science [Wikipedia]

Antonyms (incomplete)

big data sets < > small data sets

correlation < > causality

discriminative < > generative

Frequentist < > Bayesian

low dimensional < > high dimensional

underfitting < > overfitting

parametric < > non-parametric

supervised < > unsupervised