The need for deep understanding of algorithms

There are many different machine learning algorithms for any given problem, but which one should be chosen to solve a practical problem? Comparing learning algorithms is very difficult and depends heavily on the quality of the data!
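As a rough illustration of what such a comparison looks like in practice, here is a minimal sketch (assuming scikit-learn is available; the dataset and the candidate models are arbitrary illustrative choices, not recommendations) that cross-validates several algorithms on the same data:

```python
# Minimal sketch (assumes scikit-learn is installed): cross-validate a few
# candidate classifiers on the same dataset and compare their mean scores.
# The dataset and the candidate models are arbitrary illustrative choices.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)

candidates = {
    "logistic regression": LogisticRegression(max_iter=5000),
    "random forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "SVM (RBF kernel)": SVC(),
}

for name, model in candidates.items():
    scores = cross_val_score(model, X, y, cv=5)   # 5-fold cross-validation
    print(f"{name}: mean accuracy = {scores.mean():.3f} (std = {scores.std():.3f})")
```

Rerunning the same comparison on a noisier or smaller dataset can easily change the ranking, which is exactly why no single algorithm is best independently of the data.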

Miniconf Thursday, 20th December 2018: Raphaël Marée

Raphaël MARÉE from the Montefiore Institute, University of Liège, will visit us in week 51 and give a lecture on

Open and Collaborative Digital Pathology using Cytomine

When: Thursday, 20th December, 2018, at 10:00
Where: BBMRI Conference Room (joint invitation of BBMRI, ADOPT and HCI-KDD)
Address: Neue Stiftingtalstrasse 2/B/6, A-8010 Graz, Austria

Abstract:

In this talk Raphaël Marée will present the past, present, and future of Cytomine.
Cytomine [1], [2] is open-source software that has been continuously developed since 2010. It is based on modern web and distributed software development methodologies and on machine learning, including deep learning. It provides remote and collaborative features so that users can readily and securely share their large-scale imaging data worldwide. It relies on data models that allow users to easily organize and semantically annotate imaging datasets in a standardized way (e.g. to build pathology atlases for training courses or ground-truth datasets for machine learning). It efficiently supports digital slides produced by most scanner vendors. It provides mechanisms to proofread and share image quantifications produced by machine/deep learning-based algorithms. Cytomine can be used free of charge and is distributed under a permissive license. It has been installed at various institutes worldwide and is used by thousands of users in research and educational settings.
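To make the idea of such a data model concrete, here is a deliberately simplified Python sketch; these classes are purely illustrative and are not Cytomine's actual data model or API:

```python
# Purely illustrative sketch -- these classes are NOT Cytomine's actual data
# model or API; they only illustrate organizing and semantically annotating
# an imaging dataset in a standardized way.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Annotation:
    term: str                            # ontology term, e.g. "tumor region"
    polygon: List[Tuple[float, float]]   # vertices of the annotated region
    author: str                          # who drew or proofread the annotation

@dataclass
class ImageInstance:
    filename: str                        # e.g. a whole-slide image from any scanner vendor
    annotations: List[Annotation] = field(default_factory=list)

@dataclass
class Project:
    name: str                            # e.g. a pathology atlas for a training course
    ontology: List[str]                  # controlled vocabulary shared by all annotators
    images: List[ImageInstance] = field(default_factory=list)

# Building a tiny ground-truth dataset for machine learning:
project = Project(name="Demo atlas", ontology=["tumor region", "stroma"])
slide = ImageInstance(filename="slide_001.svs")
slide.annotations.append(Annotation(term="tumor region",
                                    polygon=[(10, 10), (120, 15), (110, 90)],
                                    author="pathologist_1"))
project.images.append(slide)
print(f"{project.name}: {len(project.images)} image(s), {len(slide.annotations)} annotation(s)")
```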

Recent research and developments will be presented, such as our new web user interfaces, new modules for multimodal and multispectral data (Proteomics Clin Appl, 2019), object recognition in histology and cytology using deep transfer learning (CVMI 2018), user behavior analytics in educational settings (ECDP 2018), and our new reproducible architecture for benchmarking bioimage analysis workflows.

Short Bio:

Raphaël Marée received the PhD degree in computer science in 2005 from the University of Liège, Belgium, where he is now working at the Montefiore EE&CS Institute (https://www.montefiore.ulg.ac.be/~maree/). In 2010 he initiated the CYTOMINE research project (https://uliege.cytomine.org/), and since 2017 he has also been a co-founder of the not-for-profit Cytomine cooperative (https://cytomine.coop). His research interests are in the broad area of machine learning, computer vision techniques, and web-based software development, with a specific focus on their application to big imaging data, such as in digital pathology and life science research, while following open science principles.

[1] Raphaël Marée, Loïc Rollus, Benjamin Stévens, Renaud Hoyoux, Gilles Louppe, Rémy Vandaele, Jean-Michel Begon, Philipp Kainz, Pierre Geurts & Louis Wehenkel (2016). Collaborative analysis of multi-gigapixel imaging data using Cytomine. Bioinformatics, 32(9), 1395-1401, doi:10.1093/bioinformatics/btw013.

[2] https://www.cytomine.org 

Google Scholar Profile of Raphael Maree:
https://scholar.google.com/citations?user=qG66mF8AAAAJ&hl=en

Homepage of Raphael Maree:
https://www.montefiore.ulg.ac.be/~maree/

Interactive Machine Learning: Experimental Evidence for the human-in-the-loop

Recent advances in automatic machine learning (aML) allow problems to be solved without any human intervention, which is excellent in certain domains, e.g. autonomous cars, where we want to exclude the human from the loop and have fully automatic learning. Sometimes, however, a human-in-the-loop can be beneficial, particularly for solving computationally hard problems. We provide new experimental insights [1] on how computational intelligence can be improved by complementing it with human intelligence in an interactive machine learning (iML) approach. For this purpose we used an Ant Colony Optimization (ACO) framework, because it fosters multi-agent approaches with human agents in the loop, and because ACO is one of the best-performing algorithms in many applied intelligence problems. The goal is to unite human intelligence and interaction skills with the computational power of an artificial system. The framework is applied in a case study on the Traveling Salesman Problem (TSP), chosen because of its many practical implications, e.g. in the medical domain.

For the evaluation we used gamification: we implemented a snake-like game called Traveling Snakesman with the MAX-MIN Ant System (MMAS) running in the background. We extended the MMAS algorithm so that the human can directly interact with and influence the ants by "traveling" with the snake across the graph. Each time the human travels over an ant, the current pheromone value of that edge is multiplied by 5. This manipulation changes the ants' behavior, because the probability that this edge is chosen by the ants increases.

The results show that humans performing a single tour through the graph have a significant impact on the shortest path found by the MMAS. Our experiment therefore demonstrates that, in this setting, human intelligence can positively influence machine intelligence. To the best of our knowledge this is the first study of this kind, and it provides a wonderful experimental platform for explainable AI.
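As a toy illustration of this interaction rule (a hedged sketch, not the authors' actual Traveling Snakesman implementation), the following Python snippet builds a small random TSP instance, lets an ant construct tours probabilistically from a pheromone matrix, and multiplies the pheromone of an edge by 5 when a "human boost" is applied:

```python
# Toy sketch of the interaction rule described above, NOT the actual game:
# an MMAS-style pheromone matrix on a random TSP instance, where a human
# interaction multiplies the pheromone on one edge by 5, raising the
# probability that subsequent ants pick that edge.
import random

N = 10                               # number of cities (toy size, assumed)
coords = [(random.random(), random.random()) for _ in range(N)]
dist = [[((coords[i][0] - coords[j][0]) ** 2 + (coords[i][1] - coords[j][1]) ** 2) ** 0.5
         for j in range(N)] for i in range(N)]

TAU_MAX = 10.0                       # MMAS-style upper pheromone bound (assumed value)
pheromone = [[1.0] * N for _ in range(N)]
ALPHA, BETA = 1.0, 2.0               # weight of pheromone vs. heuristic (1/distance)

def human_boost(i, j, factor=5.0):
    """Human 'travels over' an ant on edge (i, j): multiply its pheromone by 5,
    clipped to the MMAS upper bound."""
    boosted = min(TAU_MAX, pheromone[i][j] * factor)
    pheromone[i][j] = pheromone[j][i] = boosted

def next_city(current, unvisited):
    """Probabilistic edge choice: more pheromone -> higher selection probability."""
    weights = [(pheromone[current][j] ** ALPHA) * ((1.0 / dist[current][j]) ** BETA)
               for j in unvisited]
    return random.choices(unvisited, weights=weights, k=1)[0]

def construct_tour(start=0):
    """One ant builds a complete tour from the current pheromone matrix."""
    tour, unvisited = [start], [c for c in range(N) if c != start]
    while unvisited:
        nxt = next_city(tour[-1], unvisited)
        unvisited.remove(nxt)
        tour.append(nxt)
    return tour

# One ant tour before and after a (simulated) human interaction on edge (0, 1):
print("tour without boost:", construct_tour())
human_boost(0, 1)
print("tour after boosting edge (0, 1):", construct_tour())
```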

[1] Holzinger, A. et al. (2018). Interactive machine learning: experimental evidence for the human in the algorithmic loop. Springer/Nature: Applied Intelligence, doi:10.1007/s10489-018-1361-5.

Read the full article here:
https://link.springer.com/article/10.1007/s10489-018-1361-5

AI, explain yourself!

“It’s time for AI to move out its adolescent, game-playing phase and take seriously the notions of quality and reliability.”

There is an interesting commentary with interviews by Don MONROE in the recent Communications of the ACM, November 2018, Volume 61, Number 11, Pages 11-13.

Artificial Intelligence (AI) systems are taking over a vast array of tasks that previously depended on human expertise and judgment (only). Often, however, the “reasoning” behind their actions is unclear, and can produce surprising errors or reinforce biased processes. One way to address this issue is to make AI “explainable” to humans—for example, designers who can improve it or let users better know when to trust it. Although the best styles of explanation for different purposes are still being studied, they will profoundly shape how future AI is used.

Some explainable AI, or XAI, has long been familiar, as part of online recommender systems: book purchasers or movie viewers see suggestions for additional selections described as having certain similar attributes, or being chosen by similar users. The stakes are low, however, and occasional misfires are easily ignored, with or without these explanations.
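As a toy illustration (not taken from the article), such an explanation can be as simple as reporting the attributes a recommended item shares with one the user already liked:

```python
# Toy sketch (not from the article): a content-based recommendation together
# with a human-readable explanation of why the item was suggested.
books = {
    "Book A": {"genre": "sci-fi", "topic": "AI"},
    "Book B": {"genre": "sci-fi", "topic": "AI"},
    "Book C": {"genre": "history", "topic": "medicine"},
}

def recommend_with_explanation(liked, catalogue):
    """Pick the item sharing the most attributes with 'liked' and explain why."""
    liked_attrs = set(catalogue[liked].items())
    best, shared = None, set()
    for title, attrs in catalogue.items():
        if title == liked:
            continue
        overlap = liked_attrs & set(attrs.items())
        if len(overlap) > len(shared):
            best, shared = title, overlap
    reason = ", ".join(f"{k}: {v}" for k, v in sorted(shared))
    return best, f"recommended because it shares {reason} with {liked}"

title, explanation = recommend_with_explanation("Book A", books)
print(title, "--", explanation)
```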

“Considering the internal complexity of modern AI, it may seem unreasonable to hope for a human-scale explanation of its decision-making rationale”.

Read the full article here:
https://cacm.acm.org/magazines/2018/11/232193-ai-explain-yourself/fulltext