Explainable Sequence-to-Sequence GRU Neural Network for Pollution Forecasting
Pollution forecasting models aim to enable the prediction and control of air quality. While such deep learning models were long regarded as black boxes, recent advances in eXplainable AI (XAI) make it possible to look through the...
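The abstract is cut off, so the sketch below only illustrates the general architecture named in the title: an encoder-decoder (sequence-to-sequence) GRU that consumes a window of past sensor readings and autoregressively emits a multi-step forecast. The choice of PyTorch, all layer sizes, and the assumption that the target pollutant is the first input feature are illustrative placeholders, not details taken from the paper.

```python
import torch
import torch.nn as nn

class Seq2SeqGRUForecaster(nn.Module):
    """Encoder-decoder GRU mapping a history window of pollutant readings
    to a multi-step forecast (illustrative sketch, not the paper's model)."""

    def __init__(self, n_features: int, hidden_size: int = 64, horizon: int = 24):
        super().__init__()
        self.horizon = horizon
        self.encoder = nn.GRU(n_features, hidden_size, batch_first=True)
        self.decoder = nn.GRUCell(1, hidden_size)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, history: torch.Tensor) -> torch.Tensor:
        # history: (batch, past_steps, n_features)
        _, h = self.encoder(history)          # final hidden state summarizes the past
        h = h.squeeze(0)
        # Start decoding from the last observed target value (assumed: feature 0).
        y = history[:, -1, :1]
        outputs = []
        for _ in range(self.horizon):
            h = self.decoder(y, h)            # feed back the previous prediction
            y = self.head(h)
            outputs.append(y)
        return torch.stack(outputs, dim=1)    # (batch, horizon, 1)

model = Seq2SeqGRUForecaster(n_features=8)
dummy = torch.randn(4, 48, 8)                 # 4 sequences, 48 past hours, 8 sensors
print(model(dummy).shape)                     # torch.Size([4, 24, 1])
```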
Fooling State-of-the-Art Deepfake Detection with High-Quality Deepfakes
Due to the rising threat of deepfakes to security and privacy, it is crucial to develop robust and reliable detectors. In this paper, we examine the need for high-quality samples in the training datasets of such detectors. Accordingly, we...
Assessing the Value of Multimodal Interfaces: A Study on Human–Machine Interaction in Weld Inspection Workstations
Multimodal user interfaces promise natural and intuitive human–machine interactions. However, is the extra effort of developing a complex multisensor system justified, or can users be equally satisfied with a single input modality? This...
Shortcomings of Top-Down Randomization-Based Sanity Checks for Evaluations of Deep Neural Network Explanations
While the evaluation of explanations is an important step towards trustworthy models, it needs to be done carefully, and the employed metrics need to be well-understood. Specifically, model randomization testing is often overestimated and regarded...
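For orientation, the snippet below is a minimal sketch of the top-down (cascading) model-randomization test this abstract refers to: layers are re-initialized from the output downwards, attributions are recomputed, and their similarity to the original map is tracked. It assumes PyTorch, uses plain gradient attribution, and measures similarity with Pearson correlation; the paper's discussion concerns more sophisticated attribution methods and similarity measures, which are not reproduced here.

```python
import copy
import numpy as np
import torch
import torch.nn as nn

def saliency(model, x):
    """Plain gradient attribution for the top predicted class."""
    x = x.clone().requires_grad_(True)
    out = model(x)
    out[0, out.argmax()].backward()
    return x.grad.detach().squeeze().numpy()

def cascading_randomization(model, x):
    """Re-initialize parameterized layers from the top down and track how
    strongly the attribution map still correlates with the original one."""
    reference = saliency(model, x).ravel()
    randomized = copy.deepcopy(model)
    layers = [m for m in randomized.modules() if isinstance(m, (nn.Conv2d, nn.Linear))]
    scores = []
    for layer in reversed(layers):            # top-down: start at the output layer
        layer.reset_parameters()
        current = saliency(randomized, x).ravel()
        scores.append(np.corrcoef(reference, current)[0, 1])
    return scores

model = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
    nn.Flatten(), nn.Linear(8 * 28 * 28, 10),
)
print(cascading_randomization(model, torch.randn(1, 1, 28, 28)))
```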
Revealing Hidden Context Bias in Segmentation and Object Detection through Concept-specific Explanations
Applying traditional post-hoc attribution methods to segmentation or object detection predictors offers only limited insights, as the obtained feature attribution maps at input level typically resemble the models' predicted segmentation mask or...
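The abstract argues that naive input-level attributions for dense predictors largely mirror the predicted mask. One common remedy is to condition the attribution on a single output class, optionally restricted to a region of interest, as in the hedged sketch below. It uses plain gradients on a toy fully-convolutional model purely for illustration; the paper itself builds on concept-specific explanation methods rather than this simple conditioning.

```python
import torch
import torch.nn as nn

def class_restricted_attribution(model, x, target_class, region_mask=None):
    """Gradient attribution w.r.t. the logit map of a single class,
    optionally restricted to a spatial region of interest."""
    x = x.clone().requires_grad_(True)
    logits = model(x)                        # (batch, n_classes, H, W)
    score = logits[:, target_class]          # (batch, H, W)
    if region_mask is not None:
        score = score * region_mask          # keep only the pixels of interest
    score.sum().backward()
    return x.grad.detach()

# Toy fully-convolutional "segmentation" model, for illustration only.
model = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                      nn.Conv2d(16, 5, 1))
x = torch.randn(1, 3, 64, 64)
attr = class_restricted_attribution(model, x, target_class=2)
print(attr.shape)                            # torch.Size([1, 3, 64, 64])
```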
Reveal to Revise: An Explainable AI Life Cycle for Iterative Bias Correction of Deep Models
State-of-the-art machine learning models often learn spurious correlations embedded in the training data. This poses risks when deploying these models for high-stakes decision-making, such as in medical applications like skin cancer detection. To...
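Reveal to Revise is described as a full XAI life cycle (identify spurious concepts, then correct the model and verify the fix). The snippet below shows only a generic "right for the right reasons"-style fine-tuning step that penalizes input gradients inside a known artifact mask, which is one common way to unlearn a spurious correlation; the function name, penalty form, and hyperparameters are assumptions for illustration, not the authors' procedure.

```python
import torch
import torch.nn.functional as F

def artifact_penalty_step(model, optimizer, x, y, artifact_mask, lam=1.0):
    """One fine-tuning step that adds a penalty on input gradients inside a
    known artifact region (e.g. a clinical marking), discouraging the model
    from relying on the spurious feature. Illustrative sketch."""
    x = x.clone().requires_grad_(True)
    logits = model(x)
    task_loss = F.cross_entropy(logits, y)
    # Gradient of the true-class scores w.r.t. the input, kept differentiable.
    grads = torch.autograd.grad(logits.gather(1, y[:, None]).sum(),
                                x, create_graph=True)[0]
    penalty = (grads * artifact_mask).pow(2).mean()
    loss = task_loss + lam * penalty
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return task_loss.item(), penalty.item()
```

In practice such a step would be run on the subset of samples where the artifact was revealed, with `artifact_mask` marking the artifact pixels and `lam` traded off against clean-data accuracy.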
The Meta-Evaluation Problem in Explainable AI: Identifying Reliable Estimators with MetaQuantus
Explainable AI (XAI) is a rapidly evolving field that aims to improve the transparency and trustworthiness of AI systems for humans. One of the unsolved challenges in XAI is estimating the performance of explanation methods for neural networks,...
Beyond Explaining: Opportunities and Challenges of XAI-Based Model Improvement
Explainable Artificial Intelligence (XAI) is an emerging research field bringing transparency to highly complex and opaque machine learning (ML) models. This paper offers a comprehensive overview of techniques that apply XAI practically to...
Sydnone Methides: Intermediates between Mesoionic Compounds and Mesoionic N-Heterocyclic Olefins
Sydnone methides represent an almost unknown class of mesoionic compounds which possess exocyclic carbon substituents instead of oxygen (sydnones) or nitrogen (sydnone imines) in the 5-position of a 1,2,3-oxadiazolium ring. Unsubstituted...
Bridging the Gap: Gaze Events as Interpretable Concepts to Explain Deep Neural Sequence Models
Recent work in XAI for eye tracking data has evaluated the suitability of feature attribution methods to explain the output of deep neural sequence models for the task of oculomotoric biometric identification. In this work, we employ established...
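The entry breaks off after "we employ established", presumably established gaze event detectors. For orientation, the sketch below shows a textbook velocity-threshold (I-VT) segmentation of a gaze trace into fixations and saccades, which is one standard way to obtain such interpretable events; the threshold value and units are placeholders, and this is not necessarily the detector used in the paper.

```python
import numpy as np

def ivt_events(x, y, t, velocity_threshold=30.0):
    """Label each gaze sample as fixation or saccade using a simple
    velocity-threshold (I-VT) rule; velocities are in x/y units per second."""
    dt = np.diff(t)
    vel = np.hypot(np.diff(x), np.diff(y)) / dt
    labels = np.where(vel > velocity_threshold, "saccade", "fixation")
    return np.concatenate([labels[:1], labels])   # pad to the original length

t = np.arange(0, 1, 0.002)                         # 500 Hz recording
x = np.cumsum(np.random.randn(t.size)) * 0.05      # synthetic gaze trace
y = np.cumsum(np.random.randn(t.size)) * 0.05
print(np.unique(ivt_events(x, y, t), return_counts=True))
```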