Posts

Fairness in Artificial Intelligence Survey

Please take part in our study on Fairness in Artificial Intelligence to help overcome bias in machine learning

Please take part in our “Causabilometer” Survey

The Human-Centered AI Lab invites you to take part in a causability measurement study to test the new Causabilometer

“Measuring the Quality of Explanations” just exceeded 5k downloads

In this paper we introduce our System Causability Scale to measure the quality of explanations. It is based on our notion of Causability (Holzinger et al. in Wiley Interdiscip Rev Data Min Knowl Discov 9(4), 2019) combined with concepts adapted from a widely-accepted usability scale.
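The post notes that the scale adapts concepts from a widely accepted usability scale. As a rough illustration of how such questionnaire scales are typically scored, here is a minimal sketch of the classic SUS-style computation (ten 1–5 Likert ratings, alternating positive/negative items, scaled to 0–100); the actual item wording and scoring of the System Causability Scale may differ, so treat this purely as the general scoring pattern:

```python
def sus_style_score(ratings):
    """Compute a SUS-style score from ten 1-5 Likert ratings.

    SUS convention: odd-numbered items are positively worded
    (score = rating - 1), even-numbered items negatively worded
    (score = 5 - rating); the summed scores are scaled to 0-100.
    """
    if len(ratings) != 10 or not all(1 <= r <= 5 for r in ratings):
        raise ValueError("expected ten Likert ratings in the range 1..5")
    total = sum(r - 1 if i % 2 == 0 else 5 - r
                for i, r in enumerate(ratings))
    return total * 2.5

# Example: a fairly positive set of responses
print(sus_style_score([4, 2, 5, 1, 4, 2, 4, 2, 5, 1]))  # 85.0
```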

Kandinsky Challenge: IQ-Test for Machines is online!

The Human-Centered AI Lab (HCAI) invites the international machine learning community to a challenge on explainable AI, working towards IQ tests for machines

The Problem with explainable-AI

A very nice and interesting article by Rudina SESERI in the recent TechCrunch blog (read the original blog entry below): at first Rudina points out that the main problem lies in the data; and yes, indeed, data should always be the first consideration. We consider it a big problem that successful ML approaches (e.g. the mentioned deep learning; our PhD students can tell you a thing or two about it 😉) greatly benefit from big data (the bigger the better) with many training sets. However, in certain domains, e.g. the health domain, we are sometimes confronted with a small number of data sets or rare events, where we suffer from insufficient training samples [1]. This calls for more research towards how we can learn from little data (zero-shot learning), similarly to how we humans do: Rudina does not need to show her children 10 million samples of a dog and a cat so that her children can safely discriminate a dog from a cat. However, what I miss in this article is something different: the word trust. Can we trust our machine learning results? [2] While, for sure, we do not need to explain everything all the time, we do need possibilities to make machine decisions transparent on demand and to check whether something is plausible. Consequently, Explainable AI can be very important to foster trust in machine learning specifically and artificial intelligence generally.

[1] https://link.springer.com/article/10.1007/s40708-016-0042-6

[2] https://ercim-news.ercim.eu/en112/r-i/can-we-trust-machine-learning-results-artificial-intelligence-in-safety-critical-decision-support

MIT emphasizes the importance of HCI for explainable AI

In a joint project “The car can explain” with the TOYOTA Research Institute, the MIT Computer Science & Artificial Intelligence Lab is working on explainable AI and emphasizes the increasing importance of the field of HCI (Human-Computer Interaction) in this regard. Particularly, the group led by Lalana KAGAL is working on monitors for reasoning and explaining: a methodological tool to interpret and detect inconsistent machine behavior by imposing constraints of reasonableness. “Reasonable monitors” are implemented as two types of interfaces around their complex AI/ML frameworks. Local monitors check the behavior of a specific subsystem, while non-local reasonableness monitors watch the behavior of multiple subsystems working together: neighborhoods of interconnected subsystems that share a common task. This enormously interesting monitoring consistently checks that the neighborhoods of subsystems are cooperating as expected. Insights from this project could also be valuable for the health informatics domain:

https://toyota.csail.mit.edu/node/21
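The local/non-local monitor idea described above can be sketched in a few lines of Python. All names, constraints, and the driving example here are hypothetical illustrations of the pattern, not the MIT/Toyota implementation:

```python
# Sketch of "reasonableness monitors": a local monitor wraps one subsystem
# and checks each output against a constraint; a non-local monitor runs a
# neighborhood of monitored subsystems on a shared task and checks that
# their outputs are mutually consistent.

from typing import Any, Callable


class LocalMonitor:
    """Wraps a single subsystem and checks each output for reasonableness."""

    def __init__(self, name: str, subsystem: Callable[[Any], Any],
                 constraint: Callable[[Any, Any], bool]):
        self.name = name
        self.subsystem = subsystem
        self.constraint = constraint

    def __call__(self, x):
        y = self.subsystem(x)
        if not self.constraint(x, y):
            raise ValueError(f"{self.name}: unreasonable output {y!r} for input {x!r}")
        return y


def nonlocal_monitor(monitors, consistent, x):
    """Run a neighborhood of monitored subsystems on the same input and
    check that their outputs are mutually consistent."""
    outputs = {m.name: m(x) for m in monitors}
    if not consistent(outputs):
        raise ValueError(f"inconsistent neighborhood behavior: {outputs}")
    return outputs


# Hypothetical driving example: two speed estimators must each stay in a
# plausible range (local check) and agree to within 5 km/h (non-local check).
radar = LocalMonitor("radar", lambda x: x + 1.0, lambda x, y: 0 <= y <= 200)
vision = LocalMonitor("vision", lambda x: x - 2.0, lambda x, y: 0 <= y <= 200)

speeds = nonlocal_monitor(
    [radar, vision],
    lambda out: max(out.values()) - min(out.values()) < 5.0,
    x=50.0,
)
```

The design choice mirrors the description in the post: the local constraint knows nothing about other subsystems, while the neighborhood check only sees the collected outputs, so either kind of monitor can be added around an existing pipeline without modifying the subsystems themselves.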
