Recent publications

June 2023

Explainable Sequence-to-Sequence GRU Neural Network for Pollution Forecasting

Sara Mirzavand Borujeni, Wojciech Samek, Leila Arras, Vignesh Srinivasan

The goal of pollution forecasting models is to enable the prediction and control of air quality. While such deep learning models were long regarded as black boxes, recent advances in eXplainable AI (XAI) make it possible to look through the...


June 2023

Fooling State-of-the-Art Deepfake Detection with High-Quality Deepfakes

Arian Beckmann, Peter Eisert, Anna Hilsmann

Due to the rising threat of deepfakes to security and privacy, it is crucial to develop robust and reliable detectors. In this paper, we examine the need for high-quality samples in the training datasets of such detectors. Accordingly, we...


June 2023

Assessing the Value of Multimodal Interfaces: A Study on Human–Machine Interaction in Weld Inspection Workstations

Paul Chojecki, Peter Eisert, Sebastian Bosse, Detlef Runde, David Przewozny, Niklas Gard, Niklas Hoerner, Dominykas Strazdas, Ayoub Al-Hamadi

Multimodal user interfaces promise natural and intuitive human–machine interactions. However, is the extra effort for the development of a complex multisensor system justified, or can users be equally satisfied with only one input modality? This...


June 2023

Shortcomings of Top-Down Randomization-Based Sanity Checks for Evaluations of Deep Neural Network Explanations

Alexander Binder, Klaus-Robert Müller, Wojciech Samek, Grégoire Montavon, Sebastian Lapuschkin, Leander Weber

While the evaluation of explanations is an important step towards trustworthy models, it needs to be done carefully, and the employed metrics need to be well understood. Specifically, model randomization testing is often overestimated and regarded...


June 2023

Revealing Hidden Context Bias in Segmentation and Object Detection through Concept-specific Explanations

Maximilian Dreyer, Thomas Wiegand, Wojciech Samek, Sebastian Lapuschkin, Reduan Achtibat

Applying traditional post-hoc attribution methods to segmentation or object detection predictors offers only limited insights, as the obtained feature attribution maps at input level typically resemble the models' predicted segmentation mask or...


June 2023

Reveal to Revise: An Explainable AI Life Cycle for Iterative Bias Correction of Deep Models

Frederick Pahde, Wojciech Samek, Sebastian Lapuschkin, Maximilian Dreyer

State-of-the-art machine learning models often learn spurious correlations embedded in the training data. This poses risks when deploying these models for high-stakes decision-making, such as in medical applications like skin cancer detection. To...


June 2023

The Meta-Evaluation Problem in Explainable AI: Identifying Reliable Estimators with MetaQuantus

Anna Hedström, Wojciech Samek, Sebastian Lapuschkin, Marina M.-C. Höhne, Philine Bommer, Kristoffer K. Wickstrøm

Explainable AI (XAI) is a rapidly evolving field that aims to improve the transparency and trustworthiness of AI systems to humans. One of the unsolved challenges in XAI is estimating the performance of these explanation methods for neural networks,...


June 2023

Beyond Explaining: Opportunities and Challenges of XAI-Based Model Improvement

Leander Weber, Wojciech Samek, Alexander Binder, Sebastian Lapuschkin

Explainable Artificial Intelligence (XAI) is an emerging research field bringing transparency to highly complex and opaque machine learning (ML) models. This paper offers a comprehensive overview of techniques that apply XAI practically to...


June 2023

Sydnone Methides: Intermediates between Mesoionic Compounds and Mesoionic N-Heterocyclic Olefins

Sebastian Mummel, Eike Hübner, Felix Lederle, Jan C. Namyslo, Martin Nieger, Andreas Schmidt

Sydnone methides represent an almost unknown class of mesoionic compounds which possess exocyclic carbon substituents instead of oxygen (sydnones) or nitrogen (sydnone imines) in the 5-position of a 1,2,3-oxadiazolium ring. Unsubstituted...


June 2023

Bridging the Gap: Gaze Events as Interpretable Concepts to Explain Deep Neural Sequence Models

Daniel Krakowczyk, Sebastian Lapuschkin, David Robert Reich, Paul Prasse, Lena Ann Jäger, Tobias Scheffer

Recent work in XAI for eye tracking data has evaluated the suitability of feature attribution methods to explain the output of deep neural sequence models for the task of oculomotoric biometric identification. In this work, we employ established...

