The use of Artificial Intelligence (AI) in domains that impact human life (agriculture, climate, forestry, health, …) has led to an increased demand for trustworthy AI. The Human-Centered AI Lab at BOKU works on generic methods to promote robustness and explainability and thereby foster trusted AI solutions. It advocates a synergistic Human-Centered AI approach that provides human control over AI technologies and aligns AI with human values, ethical principles, and legal requirements to ensure security and safety.

The HCAI Lab's main focus is on explainable AI and interpretable machine learning, particularly on interactive machine learning (iML) with a human-in-the-loop. We work together with the international research community on innovative solutions so that a human expert is able to retrace, understand, and interpret the underlying explanatory factors of data-driven AI results – towards multi-modal causality. This answers the question of why an AI decision has been made and enables ethically responsible, trustworthy AI and transparent, verifiable machine learning solutions. This is relevant for all domains that impact human life.

The concept of “explainable artificial intelligence” promises a solution, as it aims to make AI decisions more transparent. Intensive international research is currently being carried out in this area. One possibility is to implement the ability to ask “if-then” questions. This requires so-called counterfactual statements, i.e., fictitious “what-if” assumptions intended to challenge decision-making processes. Such interventions of course require interactive user interfaces, for which one must have an idea of how good the explanations of the decision processes are.
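To make the counterfactual idea concrete, here is a minimal sketch in Python (assuming scikit-learn and the Iris data purely for illustration): it brute-forces the smallest single-feature change that would flip a classifier's decision. Real counterfactual methods are more sophisticated; this only illustrates the "what-if" question.

```python
# Minimal counterfactual ("what-if") sketch, assuming a scikit-learn
# classifier; the dataset, model, and search range are illustrative only.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

x = X[0].copy()                       # a factual instance
original = int(clf.predict([x])[0])   # the decision to be challenged

def smallest_flip(feature, max_step=3.0, step=0.05):
    """Smallest single-feature change that flips the prediction, if any."""
    for magnitude in np.arange(step, max_step + step, step):
        for delta in (magnitude, -magnitude):
            x_cf = x.copy()
            x_cf[feature] += delta
            if clf.predict([x_cf])[0] != original:
                return delta
    return None

# ask the "what-if" question for every feature and keep the cheapest change
candidates = {i: smallest_flip(i) for i in range(X.shape[1])}
candidates = {i: d for i, d in candidates.items() if d is not None}
if candidates:
    feature, delta = min(candidates.items(), key=lambda kv: abs(kv[1]))
    print(f"What-if: changing feature {feature} by {delta:+.2f} "
          f"would flip the decision away from class {original}.")
```

In an interactive user interface, such a result would be presented to the expert as a statement of the form "if this value had been slightly different, the decision would have changed", which is exactly the kind of intervention described above.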

Ultimately, to reach a level of usable computational intelligence, we need

  1. to learn from little prior data of the real world,
  2. to extract relevant (!) knowledge,
  3. to generalize, i.e., to guess where probability mass/density concentrates,
  4. to fight the curse of dimensionality,
  5. to disentangle the independent underlying explanatory factors of the data, and
  6. to ensure a causal understanding, i.e., sensemaking in the context of the application domain.

Consequently, interactive machine learning (iML) with a human-in-the-loop, which makes use of human cognitive abilities, is of particular interest for problems where learning algorithms suffer from insufficient training samples, complex data, and/or rare events, or for computationally hard problems, e.g. subspace clustering, protein folding, or k-anonymization. Here human experience and knowledge can help to reduce an exponential search space through the heuristic selection of samples: what would otherwise be an NP-hard problem is greatly reduced in complexity through the input and assistance of a human agent involved in the learning phase. This builds on a synergistic combination of methods, techniques, and approaches which offer ideal conditions to support human intelligence with computational intelligence: Human–Computer Interaction (HCI) and Knowledge Discovery & Data Mining (KDD).
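One simple form of such a human-in-the-loop is uncertainty sampling, sketched below in Python (assuming scikit-learn; the `ask_human` function is a hypothetical placeholder for an interactive user interface). The model repeatedly asks the expert to label only the sample it is least certain about, so far fewer labels are needed than labelling the whole pool would require. This illustrates the general idea only and is not the lab's iML method.

```python
# Minimal human-in-the-loop sketch via uncertainty sampling, assuming
# scikit-learn. `ask_human` is a hypothetical stand-in for an interactive
# user interface; here the stored labels play the role of the expert.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True)

def ask_human(index):
    # hypothetical: a real iML system would present the sample in a UI
    # and wait for the expert's judgement; here we look up the known label
    return int(y[index])

# seed set: one labelled example per class, the rest stays in the pool
seed = [int(np.where(y == c)[0][0]) for c in np.unique(y)]
pool = [i for i in range(len(X)) if i not in seed]
known = {i: ask_human(i) for i in seed}          # labels obtained so far

model = RandomForestClassifier(n_estimators=100, random_state=0)
for _ in range(30):                              # 30 interaction rounds
    idx = list(known)
    model.fit(X[idx], [known[i] for i in idx])
    proba = model.predict_proba(X[pool])
    uncertainty = 1.0 - proba.max(axis=1)        # least-confident-first
    query = pool[int(np.argmax(uncertainty))]    # sample shown to the human
    known[query] = ask_human(query)              # expert provides the label
    pool.remove(query)

# refit on everything the expert has labelled and report a rough score
idx = list(known)
model.fit(X[idx], [known[i] for i in idx])
print(f"{len(known)} expert labels, accuracy on the full set:",
      round(model.score(X, y), 3))
```

The design choice is that the human is consulted only where the model's confidence is lowest, which is one concrete way in which human knowledge prunes the search over training samples.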

In our work we always try to address these three Sustainable Development Goals (SDGs):

1) Occupational safety = SDG 3 Good Health and Well-being

2) Economic viability = SDG 12 Responsible Consumption and Production: ensure sustainable consumption and production patterns

3) Ecology = SDG 15 Life on Land: protect, restore and promote sustainable use of terrestrial ecosystems, sustainably manage forests, combat desertification, and halt and reverse land degradation and halt biodiversity loss

Contact

Univ. Prof. Dr. Andreas Holzinger
Head of Human-Centered AI Lab
University of Natural Resources and Life Sciences Vienna, Austria
Postal Address: Peter Jordan Strasse 82, 1190 Wien
Lab Address: IFA-Tulln, HCAI-Lab, Konrad-Lorenz-Str. 20, A-3430 Tulln/Donau
e-Mail: andreas.holzinger AT human-centered.ai
Group Home:  https://human-centered.ai
Personal Home: https://www.aholzinger.at
Conference Home: https://cd-make.net