In machine learning, deep convolutional networks (deep learning) are very successful at solving particular problems – at least when many training samples are available. Great successes have been achieved recently, e.g. in automatic game playing by AlphaGo (see the Nature news here). As fantastic as these approaches are, it should be mentioned that deep learning still has serious limitations: these are black-box approaches, where it is currently difficult to explain how and why a result was achieved (see our recent work on a glass-box approach), and which consequently lack transparency and trust – issues that will become increasingly important in our data-centric world. They demand huge computational resources and enormous amounts of training data (often thousands, sometimes even millions of training samples), and standard approaches are poor at representing uncertainties, which calls for Bayesian deep learning approaches. Most of all, deep learning approaches are affected by an effect called “catastrophic forgetting”.
What is catastrophic forgetting? One of the critical steps towards general artificial intelligence (human-level AI) is the ability to learn continually – similarly to how we humans do: being able to learn a new task B without forgetting how to perform an old task A. This seemingly trivial characteristic is not trivial for machine learning in general and deep learning in particular: McCloskey & Cohen showed already in 1989 that neural networks have difficulties with this kind of sequential learning and coined the term catastrophic forgetting. Transfer learning – the ability to apply knowledge gained from one task to another – is one attempt to overcome it. Humans can do this very well, even very little children (refer to the work of Alison Gopnik, e.g. , and the videos at the bottom of this post).

Synaptic consolidation in human brains may enable continual learning by reducing the plasticity of synapses that are vital to previously learned tasks (see also recent work on intelligent synapses for multi-task and transfer learning). Based on these ideas, the Google DeepMind group around Demis Hassabis implemented an algorithm that performs a similar operation in artificial neural networks by constraining important parameters to stay close to their old values; see their work on overcoming catastrophic forgetting in neural networks (arXiv:1612.00796). As we know, a deep neural network consists of multiple layers of linear projections followed by element-wise non-linearities. Learning a task basically consists of adjusting the set of weights and biases θ of the linear projections; consequently, many configurations of θ will result in the same performance. This over-parametrization is what makes the so-called elastic weight consolidation (EWC) work: it makes it likely that there is a solution for task B, θ*_B, that is close to the previously found solution for task A, θ*_A.
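The effect itself is easy to reproduce even in a deliberately tiny, hypothetical setting – a single scalar weight trained by plain gradient descent, first on task A and then on task B (the targets below are made up purely for illustration; a real demonstration would of course use a deep network):

```python
import numpy as np

def train(w, x, y, lr=0.1, steps=200):
    """Gradient descent on the squared error (w*x - y)^2."""
    for _ in range(steps):
        grad = 2 * x * (w * x - y)  # d/dw of (w*x - y)^2
        w -= lr * grad
    return w

def loss(w, x, y):
    return (w * x - y) ** 2

w = 0.0
w = train(w, x=1.0, y=2.0)           # task A: fit target y = 2
loss_A_before = loss(w, 1.0, 2.0)    # essentially zero: task A is learned
w = train(w, x=1.0, y=-1.0)          # now train only on task B: fit y = -1
loss_A_after = loss(w, 1.0, 2.0)     # task-A error has collapsed to ~9
```

Nothing in the plain gradient-descent objective anchors the weight to the task-A solution, so training on task B simply overwrites it – exactly the behaviour that continual-learning methods such as EWC try to prevent.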
While learning task B, EWC therefore protects the performance on task A by constraining the parameters to stay in a region of low error for task A centered around θ*_A. This constraint is implemented as a quadratic penalty, and can therefore be imagined as a mechanical spring anchoring the parameters to the previous solution, hence the name elastic. In order to justify this choice of constraint, and to define which weights are most important for a task, it is useful to consider neural network training from a probabilistic perspective. From this point of view, optimizing the parameters is tantamount to finding their most probable values given some data D. The posterior probability p(θ|D) can be computed from the prior probability of the parameters p(θ) and the likelihood of the data p(D|θ) via Bayes' rule: log p(θ|D) = log p(D|θ) + log p(θ) − log p(D).
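The quadratic "spring" can be sketched in a few lines – a minimal, hypothetical illustration assuming a diagonal Fisher information matrix as the per-parameter importance weight (function and variable names are my own, not from the paper):

```python
import numpy as np

def ewc_penalty(theta, theta_star_A, fisher, lam=1.0):
    """Quadratic EWC penalty: anchors each parameter to its task-A value,
    weighted by its (diagonal) Fisher information, i.e. its importance."""
    return (lam / 2.0) * np.sum(fisher * (theta - theta_star_A) ** 2)

theta_star_A = np.array([1.0, 2.0])  # solution found for task A
fisher = np.array([3.0, 0.0])        # parameter 0 important for A, parameter 1 not

# At the task-A solution the penalty vanishes:
ewc_penalty(theta_star_A, theta_star_A, fisher)          # 0.0
# Moving the unimportant parameter costs nothing:
ewc_penalty(np.array([1.0, 5.0]), theta_star_A, fisher)  # 0.0
# Moving the important parameter is penalized:
ewc_penalty(np.array([2.0, 2.0]), theta_star_A, fisher, lam=2.0)  # 3.0
```

During training on task B, this term would simply be added to the task-B loss, so that unimportant weights remain free to adapt while important ones stay near θ*_A.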
Here, many open and important research avenues exist and constantly emerge, challenging the international machine learning community (see e.g. ). The most interesting is what we do not know yet – the breakthrough machine learning approaches that have not yet been invented.
At the NIPS 2016 conference in Barcelona, Andrew Y. Ng held a tutorial in which he emphasized the importance of transfer learning research, stating that “transfer learning will be the next driver of machine learning success” …
There is a wonderful post by Sebastian Ruder, see: http://knowledgeofficer.com/knowledge/46-transfer-learning-machine-learning-s-next-frontier
 Yann LeCun, Yoshua Bengio & Geoffrey Hinton 2015. Deep learning. Nature, 521, (7553), 436-444, doi:10.1038/nature14539.
 Andreas Holzinger, Markus Plass, Katharina Holzinger, Gloria Cerasela Crisan, Camelia-M. Pintea & Vasile Palade 2017. A glass-box interactive machine learning approach for solving NP-hard problems with the human-in-the-loop. arXiv:1708.01104.
 Alex Kendall & Yarin Gal 2017. What Uncertainties Do We Need in Bayesian Deep Learning for Computer Vision? arXiv:1703.04977.
 Michael McCloskey & Neal J. Cohen 1989. Catastrophic interference in connectionist networks: The sequential learning problem. In: Bower, G. H. (ed.) The Psychology of Learning and Motivation, Volume 24. San Diego (CA): Academic Press, pp. 109-164.
 Alison Gopnik, Clark Glymour, David M Sobel, Laura E Schulz, Tamar Kushnir & David Danks 2004. A theory of causal learning in children: causal maps and Bayes nets. Psychological review, 111, (1), 3.
 Stefano Fusi, Patrick J Drew & Larry F Abbott 2005. Cascade models of synaptically stored memories. Neuron, 45, (4), 599-611.
 Friedemann Zenke, Ben Poole & Surya Ganguli. Continual Learning Through Synaptic Intelligence. International Conference on Machine Learning, 2017. 3987-3995.
 James Kirkpatrick, Razvan Pascanu, Neil Rabinowitz, Joel Veness, Guillaume Desjardins, Andrei A. Rusu, Kieran Milan, John Quan, Tiago Ramalho, Agnieszka Grabska-Barwinska, Demis Hassabis, Claudia Clopath, Dharshan Kumaran & Raia Hadsell 2017. Overcoming catastrophic forgetting in neural networks. Proceedings of the National Academy of Sciences, 114, (13), 3521-3526, doi:10.1073/pnas.1611835114.
 Ian J. Goodfellow, Mehdi Mirza, Da Xiao, Aaron Courville & Yoshua Bengio 2015. An empirical investigation of catastrophic forgetting in gradient-based neural networks. arXiv:1312.6211v3.
Machine learning researchers should watch the videos by Alison Gopnik, e.g.: