Recent publications

June 2023

Explainable Sequence-to-Sequence GRU Neural Network for Pollution Forecasting

Sara Mirzavand Borujeni, Wojciech Samek, Leila Arras, Vignesh Srinivasan

The goal of pollution forecasting models is to enable the prediction and control of air quality. While such deep learning models were long regarded as black boxes, recent advances in eXplainable AI (XAI) make it possible to look through the...
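
A minimal sketch of the architecture named in the title may help place the method: an encoder GRU summarizes past measurements and a decoder GRU rolls out the forecast horizon. Layer sizes, the single-pollutant output and the feature layout below are illustrative assumptions, not the paper's configuration.

```python
# Sketch of a sequence-to-sequence GRU forecaster (illustrative dimensions only).
import torch
import torch.nn as nn

class Seq2SeqGRU(nn.Module):
    def __init__(self, n_features: int, hidden: int = 64, horizon: int = 24):
        super().__init__()
        self.horizon = horizon
        self.encoder = nn.GRU(n_features, hidden, batch_first=True)
        self.decoder = nn.GRU(1, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)            # one pollutant value per step

    def forward(self, past: torch.Tensor) -> torch.Tensor:
        # past: (batch, past_steps, n_features) -> forecast: (batch, horizon)
        _, h = self.encoder(past)                   # summarize the observed history
        step = torch.zeros(past.size(0), 1, 1, device=past.device)
        outputs = []
        for _ in range(self.horizon):
            out, h = self.decoder(step, h)
            step = self.head(out)                   # feed the prediction back in
            outputs.append(step.squeeze(-1))
        return torch.cat(outputs, dim=1)
```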


June 2023

Optimizing Explanations by Network Canonization and Hyperparameter Search

Frederick Pahde, Wojciech Samek, Alexander Binder, Sebastian Lapuschkin, Galip Ümit Yolcu

Rule-based and modified backpropagation XAI methods struggle with innovative layer building blocks and implementation-invariance issues. In this work, we propose canonizations for popular deep neural network architectures and...
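
To make the idea of canonization concrete (an illustration, not the paper's specific procedure), the sketch below fuses a BatchNorm layer into the preceding convolution so that rule-based attribution methods such as LRP see one canonical linear layer, regardless of how the network was implemented; the function name is hypothetical.

```python
# Sketch of a common canonization step: fold BatchNorm2d into the preceding Conv2d.
import torch
import torch.nn as nn

def fuse_conv_bn(conv: nn.Conv2d, bn: nn.BatchNorm2d) -> nn.Conv2d:
    """Return a Conv2d whose eval-mode output equals bn(conv(x))."""
    fused = nn.Conv2d(conv.in_channels, conv.out_channels,
                      kernel_size=conv.kernel_size, stride=conv.stride,
                      padding=conv.padding, dilation=conv.dilation,
                      groups=conv.groups, bias=True)
    # Per-channel scale applied by the (frozen) BatchNorm affine transform.
    scale = bn.weight.data / torch.sqrt(bn.running_var + bn.eps)
    fused.weight.data = conv.weight.data * scale.reshape(-1, 1, 1, 1)
    conv_bias = conv.bias.data if conv.bias is not None else torch.zeros(conv.out_channels)
    fused.bias.data = (conv_bias - bn.running_mean) * scale + bn.bias.data
    return fused
```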


June 2023

Increasing the power and spectral efficiencies of an OFDM-based VLC system through multi-objective optimization

Wesley Da Silva Costa, Volker Jungnickel, Ronald Freund, Anagnostis Paraskevopoulos, Malte Hinrichs, Higor Camporez, Maria Pontes, Marcelo Segatto, Helder Rocha, Jair Silva

In order to minimize power usage and maximize spectral efficiency in visible light communication (VLC), we use a multi-objective optimization algorithm and compare DC-biased optical OFDM (DCO-OFDM) with constant envelope OFDM (CE-OFDM)...
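
The underlying trade-off can be illustrated with a plain Pareto-dominance filter over candidate operating points; the values and the filter below are a conceptual sketch, not the optimization algorithm or measurement data used in the paper.

```python
# Conceptual sketch: keep only Pareto-optimal (power, spectral efficiency) candidates,
# where lower power and higher spectral efficiency are both desirable.
import numpy as np

def pareto_front(power: np.ndarray, spectral_eff: np.ndarray) -> np.ndarray:
    """Boolean mask of non-dominated candidates."""
    keep = np.ones(len(power), dtype=bool)
    for i in range(len(power)):
        dominates_i = ((power <= power[i]) & (spectral_eff >= spectral_eff[i]) &
                       ((power < power[i]) | (spectral_eff > spectral_eff[i])))
        if dominates_i.any():                      # some other point is strictly better
            keep[i] = False
    return keep

# Hypothetical operating points (power in W, spectral efficiency in bit/s/Hz).
power = np.array([1.0, 1.2, 0.8, 1.5, 0.9])
eff   = np.array([3.0, 3.5, 2.0, 3.6, 3.1])
print(pareto_front(power, eff))                    # -> [False  True  True  True  True]
```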


June 2023

Experimental Demonstration of Optical Modulation Format Identification Using SOI-based Photonic Reservoir

Guillermo von Hünefeld, Colja Schubert, Ronald Freund, Johannes Fischer, Isaac Sackey, Gregor Ronniger, Pooyan Safari, Md Mahasin Khan, Rijil Thomas, Enes Seker, Stephan Suckow, Max Lemme, David Stahl

We experimentally show modulation format identification in the optical domain using a Silicon-on-Insulator-based Photonic Integrated Circuit (PIC) reservoir. Identification of 32 GBd single-polarization signals of 4QAM, 16QAM, 32QAM and 64QAM is...
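
As background, the reservoir-computing principle can be sketched with a numerical echo-state-style analogue: a fixed random reservoir expands the input non-linearly, and only a simple linear readout is trained. The dimensions and signals below are assumptions and do not model the SOI photonic circuit itself.

```python
# Numerical stand-in for a reservoir: fixed random weights, trained linear readout.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_res = 2, 200                                  # I/Q input, reservoir size (assumed)
W_in  = rng.normal(scale=0.5, size=(n_res, n_in))
W_res = rng.normal(scale=1.0, size=(n_res, n_res))
W_res *= 0.9 / np.max(np.abs(np.linalg.eigvals(W_res)))   # keep spectral radius below 1

def reservoir_features(signal: np.ndarray) -> np.ndarray:
    """Run an I/Q sequence of shape (T, 2) through the fixed reservoir; return final state."""
    state = np.zeros(n_res)
    for sample in signal:
        state = np.tanh(W_in @ sample + W_res @ state)
    return state

# Only the readout is trained, e.g. least squares from stacked reservoir states X
# to one-hot modulation-format labels Y:
#   W_out, *_ = np.linalg.lstsq(X, Y, rcond=None)
#   predicted_format = np.argmax(X @ W_out, axis=1)
```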


June 2023

Fooling State-of-the-Art Deepfake Detection with High-Quality Deepfakes

Arian Beckmann, Peter Eisert, Anna Hilsmann

Due to the rising threat of deepfakes to security and privacy, it is crucial to develop robust and reliable detectors. In this paper, we examine the need for high-quality samples in the training datasets of such detectors. Accordingly, we...


June 2023

Assessing the Value of Multimodal Interfaces: A Study on Human–Machine Interaction in Weld Inspection Workstations

Paul Chojecki, Peter Eisert, Sebastian Bosse, Detlef Runde, David Przewozny, Niklas Gard, Niklas Hoerner, Dominykas Strazdas, Ayoub Al-Hamadi

Multimodal user interfaces promise natural and intuitive human–machine interactions. However, is the extra effort for the development of a complex multisensor system justified, or can users be equally satisfied with a single input modality? This...


June 2023

Shortcomings of Top-Down Randomization-Based Sanity Checks for Evaluations of Deep Neural Network Explanations

Alexander Binder, Klaus-Robert Müller, Wojciech Samek, Grégoire Montavon, Sebastian Lapuschkin, Leander Weber

While the evaluation of explanations is an important step towards trustworthy models, it needs to be done carefully, and the employed metrics need to be well understood. Specifically, model randomization testing is often overestimated and regarded...
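
For context, a top-down (cascading) model-randomization check re-initializes layers from the output backwards and measures how strongly the explanation changes. The sketch below assumes a PyTorch model and hypothetical `explain` and `similarity` callables; it is not the evaluation protocol analyzed in the paper.

```python
# Sketch of a cascading randomization sanity check (hypothetical helpers).
import copy

def cascading_randomization(model, explain, x, similarity):
    """Progressively randomize layers from the top and compare explanations.

    Layer order is taken from module registration order, which only approximates
    output-to-input for simple sequential architectures.
    """
    reference = explain(model, x)                      # explanation for trained weights
    randomized = copy.deepcopy(model)
    for name, module in reversed(list(randomized.named_modules())):
        if hasattr(module, "reset_parameters"):
            module.reset_parameters()                  # destroy this layer's learned weights
            yield name, similarity(reference, explain(randomized, x))
```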


June 2023

Revealing Hidden Context Bias in Segmentation and Object Detection through Concept-specific Explanations

Maximilian Dreyer, Thomas Wiegand, Wojciech Samek, Sebastian Lapuschkin, Reduan Achtibat

Applying traditional post-hoc attribution methods to segmentation or object detection predictors offers only limited insights, as the obtained feature attribution maps at input level typically resemble the models' predicted segmentation mask or...


June 2023

Reveal to Revise: An Explainable AI Life Cycle for Iterative Bias Correction of Deep Models

Frederick Pahde, Wojciech Samek, Sebastian Lapuschkin, Maximilian Dreyer

State-of-the-art machine learning models often learn spurious correlations embedded in the training data. This poses risks when deploying these models for high-stakes decision-making, such as in medical applications like skin cancer detection. To...


June 2023

The Meta-Evaluation Problem in Explainable AI: Identifying Reliable Estimators with MetaQuantus

Anna Hedström, Wojciech Samek, Sebastian Lapuschkin, Marina M.-C. Höhne, Philine Bommer, Kristoffer K. Wickstrøm

Explainable AI (XAI) is a rapidly evolving field that aims to improve the transparency and trustworthiness of AI systems to humans. One of the unsolved challenges in XAI is estimating the performance of these explanation methods for neural networks,...


