Human Understandable Explanations

The current state of the art in eXplainable AI (XAI) in most cases only marks salient features in data space. That is, local XAI methods highlight, per data point, the features and feature dimensions most dominantly involved in the model's decision. Global XAI approaches, conversely, aim to identify and visualize the features in the data to which the model as a whole is most sensitive. Both approaches work well on, e.g., photographic images, where no specialized domain expertise is required to understand and interpret features. However, when the required domain expertise is lacking, when the data itself is hard for humans to understand, or when the principle of the model's information processing is not interpretable, both local and global XAI cease to be informative.
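As a concrete illustration of the local/global distinction, the minimal sketch below computes a local explanation as a plain gradient-based saliency map for individual inputs, and a naive global summary by averaging local saliency over a dataset. The model (an untrained torchvision ResNet-18), the random stand-in data, and the use of raw gradients instead of more elaborate attribution methods such as Layer-wise Relevance Propagation are illustrative assumptions only.

    import torch
    import torchvision.models as models

    model = models.resnet18(weights=None).eval()  # untrained stand-in model

    def local_saliency(x, target):
        """Local XAI: per-pixel relevance for the inputs in x ([B, 3, H, W])."""
        x = x.clone().requires_grad_(True)
        logits = model(x)
        logits[:, target].sum().backward()     # gradient of the target-class logit
        return x.grad.abs().sum(dim=1)         # aggregate colour channels -> [B, H, W]

    def global_sensitivity(loader, target):
        """Global XAI (naive): average local saliency over an entire dataset."""
        total, n = 0.0, 0
        for x, _ in loader:
            total = total + local_saliency(x, target).sum(dim=0)
            n += x.shape[0]
        return total / n                       # dataset-level sensitivity map [H, W]

    x = torch.randn(4, 3, 224, 224)            # random data standing in for real images
    heatmaps = local_saliency(x, target=0)     # one heatmap per input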

The XAI group is actively developing solutions and tools that combine data-level XAI, e.g., explanatory heatmaps over pixels, with model-level XAI identifying the role and function of internal model components during inference. We are bringing local and global XAI closer to human-understandable and semantically enriched explanations that require (significantly) less expert knowledge about the data domain and the model's modus operandi, which in turn will be a game-changer in human-AI interaction for retrospective and prospective decision analysis.
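A rough sketch of what such a model-level inspection can look like is given below: it attributes an input heatmap to a single channel of an internal layer, i.e. it asks which input pixels drive one particular internal feature. The hook-based setup and the use of plain gradients are assumptions for illustration; this is not the group's Concept Relevance Propagation method.

    import torch
    import torchvision.models as models

    model = models.resnet18(weights=None).eval()

    def channel_heatmap(x, layer, channel):
        """Heatmap over input pixels for a single channel of an internal layer."""
        acts = {}
        handle = layer.register_forward_hook(lambda m, inp, out: acts.update(out=out))
        x = x.clone().requires_grad_(True)
        model(x)                                   # forward pass; hook stores the layer output
        handle.remove()
        acts["out"][:, channel].sum().backward()   # backpropagate only this channel's activation
        return x.grad.abs().sum(dim=1)             # per-pixel heatmap, shape [B, H, W]

    x = torch.randn(1, 3, 224, 224)
    hm = channel_heatmap(x, model.layer3, channel=5)   # "what in the input excites channel 5?"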

Publications

  1. Sebastian Lapuschkin, Stephan Wäldchen, Alexander Binder, Grégoire Montavon, Wojciech Samek, Klaus-Robert Müller:
    Unmasking Clever Hans Predictors and Assessing What Machines Really Learn,
    Nature Communications, vol. 10, article no. 1096, Nature Research, London, UK, DOI: 10.1038/s41467-019-08987-4, March 2019
  2. Vignesh Srinivasan, Sebastian Lapuschkin, Cornelius Hellge, Klaus-Robert Müller, Wojciech Samek:
    Interpretable human action recognition in compressed domain,
    Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2017), New Orleans, LA, USA, March 2017
  3. Reduan Achtibat, Maximilian Dreyer, Ilona Eisenbraun, Sebastian Bosse, Thomas Wiegand, Wojciech Samek, Sebastian Lapuschkin:
    From "Where" to "What": Towards Human-Understandable Explanations through Concept Relevance Propagation,
    arXiv e-prints, arXiv:2206.03208, June 2022