Recent Publications

June 2023

Increasing the power and spectral efficiencies of an OFDM-based VLC system through multi-objective optimization

Wesley Da Silva Costa, Volker Jungnickel, Ronald Freund, Anagnostis Paraskevopoulos, Malte Hinrichs, Higor Camporez, Maria Pontes, Marcelo Segatto, Helder Rocha, Jair Silva

To minimize power usage and maximize spectral efficiency in visible light communication (VLC), we apply a multi-objective optimization algorithm and compare DC-biased optical OFDM (DCO-OFDM) with constant envelope OFDM (CE-OFDM)...

June 2023

Fooling State-of-the-Art Deepfake Detection with High-Quality Deepfakes

Arian Beckmann, Peter Eisert, Anna Hilsmann

Due to the rising threat of deepfakes to security and privacy, it is of utmost importance to develop robust and reliable detectors. In this paper, we examine the need for high-quality samples in the training datasets of such detectors. Accordingly, we...

June 2023

Assessing the Value of Multimodal Interfaces: A Study on Human–Machine Interaction in Weld Inspection Workstations

Paul Chojecki, Peter Eisert, Sebastian Bosse, Detlef Runde, David Przewozny, Niklas Gard, Niklas Hoerner, Dominykas Strazdas, Ayoub Al-Hamadi

Multimodal user interfaces promise natural and intuitive human–machine interactions. However, is the extra effort for the development of a complex multisensor system justified, or can users also be satisfied with only one input modality? This...

June 2023

Shortcomings of Top-Down Randomization-Based Sanity Checks for Evaluations of Deep Neural Network Explanations

Alexander Binder, Klaus-Robert Müller, Wojciech Samek, Grégoire Montavon, Sebastian Lapuschkin, Leander Weber

While the evaluation of explanations is an important step towards trustworthy models, it needs to be done carefully, and the employed metrics need to be well understood. Specifically, model randomization testing is often overestimated and regarded...

June 2023

Revealing Hidden Context Bias in Segmentation and Object Detection through Concept-specific Explanations

Maximilian Dreyer, Thomas Wiegand, Wojciech Samek, Sebastian Lapuschkin, Reduan Achtibat

Applying traditional post-hoc attribution methods to segmentation or object detection predictors offers only limited insights, as the obtained feature attribution maps at input level typically resemble the models' predicted segmentation mask or...

June 2023

Reveal to Revise: An Explainable AI Life Cycle for Iterative Bias Correction of Deep Models

Frederick Pahde, Wojciech Samek, Sebastian Lapuschkin, Maximilian Dreyer

State-of-the-art machine learning models often learn spurious correlations embedded in the training data. This poses risks when deploying these models for high-stakes decision-making, such as in medical applications like skin cancer detection. To...

June 2023

The Meta-Evaluation Problem in Explainable AI: Identifying Reliable Estimators with MetaQuantus

Anna Hedström, Wojciech Samek, Sebastian Lapuschkin, Marina M.-C. Höhne, Philine Bommer, Kristoffer K. Wickstrøm

Explainable AI (XAI) is a rapidly evolving field that aims to improve the transparency and trustworthiness of AI systems to humans. One of the unsolved challenges in XAI is estimating the performance of these explanation methods for neural networks,...

June 2023

Beyond Explaining: Opportunities and Challenges of XAI-Based Model Improvement

Leander Weber, Wojciech Samek, Alexander Binder, Sebastian Lapuschkin

Explainable Artificial Intelligence (XAI) is an emerging research field bringing transparency to highly complex and opaque machine learning (ML) models. This paper offers a comprehensive overview of techniques that apply XAI practically to...

June 2023

Sydnone Methides: Intermediates between Mesoionic Compounds and Mesoionic N-Heterocyclic Olefins

Sebastian Mummel, Eike Hübner, Felix Lederle, Jan C. Namyslo, Martin Nieger, Andreas Schmidt

Sydnone methides represent an almost unknown class of mesoionic compounds which possess exocyclic carbon substituents instead of oxygen (sydnones) or nitrogen (sydnone imines) in the 5-position of a 1,2,3-oxadiazolium ring. Unsubstituted...

June 2023

Bridging the Gap: Gaze Events as Interpretable Concepts to Explain Deep Neural Sequence Models

Daniel Krakowczyk, Sebastian Lapuschkin, David Robert Reich, Paul Prasse, Lena Ann Jäger, Tobias Scheffer

Recent work in XAI for eye tracking data has evaluated the suitability of feature attribution methods to explain the output of deep neural sequence models for the task of oculomotoric biometric identification. In this work, we employ established...

Results 21-30 of 225