Our crazy iML concept has been accepted at the CiML 2016 workshop (organized by Isabelle Guyon, Evelyne Viegas, Sergio Escalera, Ben Hamner & Balázs Kégl) at NIPS 2016 (December 5-10, 2016) in Barcelona:
How artificial intelligence will affect jobs
In a discussion with Barack OBAMA on how artificial intelligence will affect jobs, he emphasized how important human-in-the-loop machine learning will become in the future. Trust, transparency and explainability will be THE driving factors of future AI solutions! The interview was conducted by Wired editor Scott DADICH and MIT Media Lab director Joi ITO. I recommend my students to watch the full video. Barack Obama demonstrates a good understanding of the field and indirectly indicates the importance of our research on the human-in-the-loop approach, despite all progress towards fully automatic approaches and autonomous systems.
For more information see:
Barack Obama was the 44th President of the United States of America and was in office from January 20, 2009 to January 20, 2017. He was born on August 4, 1961 in Honolulu, Hawaii.
Wired is a monthly tech magazine which has reported since 1993 on how emerging technologies may affect culture, politics and economics. It is interesting to note that Wired is known for coining the popular terms “long tail” and “crowdsourcing”. https://www.wired.com
The MIT Media Lab is an interdisciplinary research lab at the Massachusetts Institute of Technology in Cambridge (MA), which is part of the Boston metropolitan area, just across the Charles River – not far away from the Harvard campus.
 Holzinger, A., Plass, M., Holzinger, K., Crisan, G.C., Pintea, C.-M. & Palade, V. 2017. A glass-box interactive machine learning approach for solving NP-hard problems with the human-in-the-loop. arXiv:1708.01104
Google researchers spend a lot of time thinking about how computer systems can read and understand human language in order to process it in intelligent ways. On May 12, 2016, Slav Petrov, based in New York and leading the machine learning for natural language group, announced the release of SyntaxNet, an open-source neural network framework implemented in TensorFlow that provides a new foundation for Natural Language Understanding (NLU). The release includes all code needed to train new SyntaxNet models on your own data, as well as Parsey McParseface, an English parser that the Googlers have trained and that can be used to analyze English text. Parsey McParseface is built on powerful machine learning algorithms that learn to analyze the linguistic structure of language and can explain the functional role of each word in a given sentence.
Andor, D., Alberti, C., Weiss, D., Severyn, A., Presta, A., Ganchev, K., Petrov, S. & Collins, M. 2016. Globally normalized transition-based neural networks. arXiv preprint arXiv:1603.06042.
Petrov, S., Mcdonald, R. & Hall, K. 2016. Multi-source transfer of delexicalized dependency parsers. US Patent 9,305,544.
Weiss, D., Alberti, C., Collins, M. & Petrov, S. 2015. Structured Training for Neural Network Transition-Based Parsing. arXiv:1506.06158.
Vinyals, O., Kaiser, Ł., Koo, T., Petrov, S., Sutskever, I. & Hinton, G. 2015. Grammar as a foreign language. Advances in Neural Information Processing Systems, 2755-2763.
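To illustrate the kind of transition-based dependency parsing that underlies Parsey McParseface, here is a minimal sketch of the arc-standard transition system in plain Python. The hard-coded action sequence below stands in for the neural network, which in SyntaxNet predicts each transition; everything here is an illustration, not SyntaxNet's actual code.

```python
# Arc-standard transitions: SHIFT moves a word from the buffer to the stack;
# LEFT/RIGHT attach one of the two topmost stack words to the other.
def parse(words, actions):
    """Apply SHIFT / LEFT / RIGHT transitions to build dependency arcs."""
    stack, buffer, arcs = [], list(words), []
    for action in actions:
        if action == "SHIFT":
            stack.append(buffer.pop(0))
        elif action == "LEFT":             # second-top word depends on top
            dep = stack.pop(-2)
            arcs.append((stack[-1], dep))  # (head, dependent)
        elif action == "RIGHT":            # top word depends on second-top
            dep = stack.pop()
            arcs.append((stack[-1], dep))
    return arcs

# "Alice saw Bob": both 'Alice' and 'Bob' attach to the verb 'saw'.
arcs = parse(["Alice", "saw", "Bob"],
             ["SHIFT", "SHIFT", "LEFT", "SHIFT", "RIGHT"])
print(arcs)  # -> [('saw', 'Alice'), ('saw', 'Bob')]
```

In the real parser, a learned classifier scores the possible transitions at each step; the papers above describe how that classifier is trained globally rather than per transition.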
TensorFlow – part of the Google Brain project – has recently open-sourced on GitHub a nice playground for testing and learning the behaviour of deep learning networks, which can also be used under the Apache License:
Background: TensorFlow is an open source software library for machine learning. There is a nice video “Large Scale Deep Learning” by Jeffrey Dean. TensorFlow is an interface for expressing machine learning algorithms along with an implementation for executing such algorithms on a variety of heterogeneous systems, ranging from smartphones to high-end computer clusters and grids of thousands of computational devices (e.g. GPUs). The system has been used for research in various areas of computer science (e.g. speech recognition, computer vision, robotics, information retrieval, natural language processing, geographic information extraction, computational drug discovery). The TensorFlow API and a reference implementation were released as an open-source package under the Apache 2.0 license on 9 November 2015 and are available at www.tensorflow.org
Abadi, M., Agarwal, A., Barham, P., Brevdo, E., Chen, Z., Citro, C., Corrado, G. S., Davis, A., Dean, J. & Devin, M. 2016. TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems. arXiv preprint arXiv:1603.04467.
It is also discussed in episode 24 of Talking Machines.
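The core idea behind TensorFlow is the dataflow graph: computations are first recorded as graph nodes and only evaluated when the graph is run, which lets the system place operations on different devices. The following is a minimal sketch of that idea in plain Python (an illustration only, not TensorFlow's actual API):

```python
# Deferred execution: building y = w * x + b records graph nodes;
# nothing is computed until run() is called (cf. Session.run in TensorFlow).
class Node:
    def __init__(self, op, inputs=(), value=None):
        self.op, self.inputs, self.value = op, inputs, value
    def __add__(self, other):
        return Node("add", (self, other))
    def __mul__(self, other):
        return Node("mul", (self, other))

def run(node):
    """Evaluate a node by recursively evaluating its inputs."""
    if node.op == "const":
        return node.value
    args = [run(n) for n in node.inputs]
    return args[0] + args[1] if node.op == "add" else args[0] * args[1]

w, x, b = Node("const", value=3.0), Node("const", value=2.0), Node("const", value=1.0)
y = w * x + b          # graph construction only
print(run(y))          # -> 7.0 (graph execution)
```

Separating graph construction from execution is what allows TensorFlow to distribute the same computation across the heterogeneous systems mentioned above.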
Interactive machine learning for health informatics: when do we need the human-in-the-loop?
Machine learning (ML) is the fastest growing field in computer science, and health informatics is among the greatest challenges. The goal of ML is to develop algorithms which can learn and improve over time and can be used for predictions. Most ML researchers concentrate on automatic machine learning (aML), where great advances have been made, for example, in speech recognition, recommender systems, or autonomous vehicles. Automatic approaches greatly benefit from big data with many training sets. However, in the health domain, sometimes we are confronted with a small number of data sets or rare events, where aML-approaches suffer from insufficient training samples. Here interactive machine learning (iML) may be of help, having its roots in reinforcement learning, preference learning, and active learning. The term iML is not yet well used, so we define it as “algorithms that can interact with agents and can optimize their learning behavior through these interactions, where the agents can also be human.” This “human-in-the-loop” can be beneficial in solving computationally hard problems, e.g., subspace clustering, protein folding, or k-anonymization of health data, where human expertise can help to reduce an exponential search space through heuristic selection of samples. Therefore, what would otherwise be an NP-hard problem reduces greatly in complexity through the input and the assistance of a human agent involved in the learning phase.
We define iML-approaches as algorithms that can interact with both computational agents and human agents *) and can optimize their learning behavior through these interactions.
*) In active learning such agents are referred to as “oracles”.
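The idea of a human agent pruning an exponential search space can be illustrated with a toy example. Below, a stub `human_oracle` function stands in for the human expert; the tour problem and the oracle's rule are hypothetical illustrations chosen here, not taken from the article.

```python
# Toy human-in-the-loop search: an oracle (stand-in for a human expert)
# filters candidate tours before the exhaustive evaluation, shrinking
# the search space by a constant factor per applied heuristic.
from itertools import permutations

CITIES = {"A": (0, 0), "B": (1, 0), "C": (1, 1), "D": (0, 1)}

def tour_length(tour):
    """Total Euclidean length of a closed tour over the named cities."""
    pts = [CITIES[c] for c in tour] + [CITIES[tour[0]]]
    return sum(((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
               for (x1, y1), (x2, y2) in zip(pts, pts[1:]))

def human_oracle(tour):
    """Stub for human expertise: only consider tours starting at depot 'A'."""
    return tour[0] == "A"

# The oracle cuts the candidate set from n! to (n-1)! permutations.
candidates = [t for t in permutations(CITIES) if human_oracle(t)]
best = min(candidates, key=tour_length)
print(best, tour_length(best))  # -> ('A', 'B', 'C', 'D') 4.0
```

Replacing the stub with interactive input from a real expert is exactly where the evaluation questions discussed below (robustness, subjectivity, replicability) arise.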
From black-box to glass-box: where is the human-in-the-loop?
The first question we have to answer is: “What is the difference between the iML approach and the aML approach, i.e., unsupervised, supervised, or semi-supervised learning?”
Scenario D – see slide below – shows the iML-approach, where the human expert is seen as an agent directly involved in the actual learning phase, step-by-step influencing measures such as distance, cost functions, etc.
Obvious concerns emerge immediately, and one can argue: what about the robustness of this approach, the subjectivity, the transferability across (human) agents? Many questions remain open and are subject to future research, particularly regarding evaluation, replicability and robustness.
Read full article here:
We are organizing a special session on Privacy Aware Machine Learning for Health Data Science at the 11th International Conference on Availability, Reliability and Security (ARES and CD-ARES), Salzburg, Austria, August 29 – September 2, 2016.
Machine learning is the fastest growing field in computer science [Jordan, M. I. & Mitchell, T. M. 2015. Machine learning: Trends, perspectives, and prospects. Science, 349, (6245), 255-260], and it is well accepted that health informatics is amongst the greatest challenges [LeCun, Y., Bengio, Y. & Hinton, G. 2015. Deep learning. Nature, 521, (7553), 436-444], e.g. large-scale aggregate analyses of anonymized data can yield valuable insights addressing public health challenges and provide new avenues for scientific discovery [Horvitz, E. & Mulligan, D. 2015. Data, privacy, and the greater good. Science, 349, (6245), 253-255]. Privacy is becoming a major concern for machine learning tasks, which often operate on personal and sensitive data. Consequently, privacy, data protection, safety, information security and fair use of data are of utmost importance for health data science.
The amount of patient-related data produced in today’s clinical setting poses many challenges with respect to collection, storage and responsible use. For example, in research and public health care analysis, data must be anonymized before transfer, for which the k-anonymity measure was introduced and successively enhanced by further criteria. As k-anonymity is an NP-hard problem which cannot be solved by automatic machine learning (aML) approaches, we must often make use of approximation and heuristics. As data security is not guaranteed given a certain k-anonymity degree, additional measures have been introduced in order to refine results (l-diversity, t-closeness, delta-presence). This motivates methods, methodologies and algorithmic machine learning approaches to tackle the problem. As the resulting data set will be a tradeoff between utility, usability and individual privacy and security, we need to optimize those measures to individual (subjective) standards. Moreover, the efficacy of an algorithm strongly depends on the background knowledge of a potential attacker as well as the underlying problem domain. One possible solution is to make use of interactive machine learning (iML) approaches and put a human in the loop, where the central question remains open: “could human intelligence lead to general heuristics we can use to improve heuristics?”
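For readers unfamiliar with k-anonymity: a table is k-anonymous if every combination of quasi-identifier values (e.g. generalized ZIP code and age range) occurs in at least k records. Checking the property is straightforward, as the minimal sketch below shows; it is finding an optimal generalization that is NP-hard. The patient records and attribute names are made-up examples.

```python
# Minimal k-anonymity check: group records by their quasi-identifier
# tuple and require every group to contain at least k records.
from collections import Counter

def is_k_anonymous(records, quasi_ids, k):
    groups = Counter(tuple(r[q] for q in quasi_ids) for r in records)
    return all(count >= k for count in groups.values())

patients = [
    {"zip": "80**", "age": "30-40", "diagnosis": "flu"},
    {"zip": "80**", "age": "30-40", "diagnosis": "asthma"},
    {"zip": "81**", "age": "40-50", "diagnosis": "flu"},   # unique -> re-identifiable
]

print(is_k_anonymous(patients, ["zip", "age"], 2))  # -> False
```

The hard part, which the heuristics mentioned above address, is choosing how far to generalize each quasi-identifier so that the check passes while the data remain useful.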
Research topics covered by this special session include, but are not limited to, the following:
– Production of Open Data Sets
– Synthetic data sets for learning algorithm testing
– Privacy preserving machine learning, data mining and knowledge discovery
– Data leak detection
– Data citation
– Differential privacy
– Anonymization and pseudonymization
– Securing expert-in-the-loop machine learning systems
– Evaluation and benchmarking
This special session will bring together scientists with diverse backgrounds, interested in both the underlying theoretical principles and the application of such methods for practical use in the biomedical, life sciences and health care domains. The cross-domain integration and appraisal of different fields will provide an atmosphere that fosters different perspectives and opinions; it will offer a platform for novel, crazy ideas and a fresh look at the methodologies to put these ideas into business.
Accepted papers will be published in a Springer Lecture Notes in Computer Science (LNCS) volume.
I) Deadline for submissions: April 30, 2016
Paper submission via:
II) Camera-ready deadline: July 4, 2016
The International Scientific Committee – consisting of experts from the international expert network HCI-KDD dealing with area (7), privacy, data protection, safety and security, and additionally invited international experts – will ensure the highest possible scientific quality. Each paper will be reviewed by at least three reviewers (the paper acceptance rate of the last special session was 35%).
In January 2016, Yahoo announced the public release of the largest-ever machine learning data set to the international research community. The data set stands at a massive ~110B events (13.5 TB uncompressed) of anonymized user-news item interaction data, collected by recording the user-news item interactions of about 20M users from February 2015 to May 2015.
Mastering the game of Go with deep neural networks and tree search – a very recent paper in Nature:
Silver, D., Huang, A., Maddison, C. J., Guez, A., Sifre, L., Van Den Driessche, G., Schrittwieser, J., Antonoglou, I., Panneershelvam, V., Lanctot, M., Dieleman, S., Grewe, D., Nham, J., Kalchbrenner, N., Sutskever, I., Lillicrap, T., Leach, M., Kavukcuoglu, K., Graepel, T. & Hassabis, D. 2016. Mastering the game of Go with deep neural networks and tree search. Nature, 529, (7587), 484-489.
Go (in Chinese: 圍棋, in Japanese: 囲碁) is a board strategy game (EXPTIME-complete, resp. PSPACE-complete) for two players aiming to surround more territory than the opponent; the number of possible games is enormous (approximately 10^761 on a 19 x 19 board, compared to approximately 10^120 in chess on an 8 x 8 board) – despite simple rules.
According to the new article by Silver et al (2016), Go has long been viewed as the most challenging of classic games for artificial intelligence owing to its enormous search space and the difficulty of evaluating board positions and moves. The authors introduce a new approach to computer Go that uses ‘value networks’ to evaluate board positions and ‘policy networks’ to select moves. These deep neural networks are trained by a novel combination of supervised learning from human expert games, and reinforcement learning from games of self-play. Without any lookahead search, the neural networks play Go at the level of state-of-the-art Monte Carlo tree search programs that simulate thousands of random games of self-play. The authors introduce a new search algorithm that combines Monte Carlo simulation with value and policy networks. Using this search algorithm, the program AlphaGo (see: http://deepmind.com/alpha-go.html) achieved a 99.8% winning rate against other Go programs, and defeated the human European Go champion by 5 games to 0. This is the first time that a computer program has defeated a human professional player in the full-sized game of Go, a feat previously thought to be at least a decade away.
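The interplay of policy networks and tree search can be illustrated by the move-selection rule used in Monte Carlo tree search with a policy prior. The sketch below uses a PUCT-style formula similar in spirit to AlphaGo's selection step; the constant c and the toy statistics are illustrative, not values from the paper.

```python
# PUCT-style selection: trade off the value estimate from simulations
# (exploitation) against an exploration bonus weighted by the policy
# network's prior probability for each move.
import math

def puct_score(value, visits, parent_visits, prior, c=1.0):
    return value + c * prior * math.sqrt(parent_visits) / (1 + visits)

# Candidate moves: (mean simulation value, visit count, policy prior).
moves = {"a": (0.5, 10, 0.2), "b": (0.4, 2, 0.6), "c": (0.3, 1, 0.2)}
parent_visits = sum(v for _, v, _ in moves.values())

best = max(moves, key=lambda m: puct_score(*moves[m][:2], parent_visits, moves[m][2]))
print(best)  # -> 'b': a high prior and few visits outweigh a slightly lower value
```

Note how move "b" is selected despite a lower mean value: the policy prior steers the search towards promising but under-explored branches, which is exactly how the policy network narrows Go's enormous search space.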
There is also a news report on BBC:
Congrats to the Google Deepmind people!
Date: Tuesday, 26th January 2016, Start: 10:00, End: 17:00; Venue: Graz University of Technology,
Institute of Computer Graphics and Knowledge Visualization CGV, hosted by Prof. Tobias SCHRECK
Address: Inffeldgasse 16c, A-8010 Graz <maps and directions>
Machine learning is the fastest growing field in computer science [Jordan, M. I. & Mitchell, T. M. 2015. Machine learning: Trends, perspectives, and prospects. Science, 349, (6245), 255-260], and it is well accepted that health informatics is amongst the greatest challenges [LeCun, Y., Bengio, Y. & Hinton, G. 2015. Deep learning. Nature, 521, (7553), 436-444].
Successful machine learning for health informatics requires a comprehensive understanding of the data ecosystem and a multi-disciplinary skill set from seven specializations: 1) data science, 2) algorithms, 3) network science, 4) graphs/topology, 5) time/entropy, 6) data visualization and visual analytics, and 7) privacy, data protection, safety and security – as supported by the international expert network HCI-KDD.