Shortcomings of Top-Down Randomization-Based Sanity Checks for Evaluations of Deep Neural Network Explanations
While the evaluation of explanations is an important step towards trustworthy models, it needs to be done carefully, and the employed metrics need to be well-understood. Specifically, model randomization testing is often overestimated and regarded...
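A minimal sketch of such a top-down (cascading) model-randomization test, assuming a PyTorch model and placeholder helpers `explain` (returns an attribution map for an input and model) and `similarity` (compares two attribution maps, e.g. via SSIM or rank correlation); these names are illustrative and do not come from the paper.

import copy
import torch

def cascading_randomization_check(model, x, target, explain, similarity):
    # Attribution for the trained model serves as the reference.
    baseline = explain(model, x, target)
    randomized = copy.deepcopy(model)
    scores = []
    # Walk over parameterized layers roughly from the output towards the input.
    layers = [m for m in randomized.modules() if getattr(m, "weight", None) is not None]
    for layer in reversed(layers):
        torch.nn.init.normal_(layer.weight)  # destroy the learned weights
        if getattr(layer, "bias", None) is not None:
            torch.nn.init.zeros_(layer.bias)
        attribution = explain(randomized, x, target)
        scores.append(similarity(baseline, attribution))
    # Explanations that stay similar after randomization fail the sanity check.
    return scores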
Revealing Hidden Context Bias in Segmentation and Object Detection through Concept-specific Explanations
Applying traditional post-hoc attribution methods to segmentation or object detection predictors offers only limited insights, as the obtained feature attribution maps at input level typically resemble the models' predicted segmentation mask or...
Reveal to Revise: An Explainable AI Life Cycle for Iterative Bias Correction of Deep Models
State-of-the-art machine learning models often learn spurious correlations embedded in the training data. This poses risks when deploying these models for high-stakes decision-making, such as in medical applications like skin cancer detection. To...
The Meta-Evaluation Problem in Explainable AI: Identifying Reliable Estimators with MetaQuantus
Explainable AI (XAI) is a rapidly evolving field that aims to improve the transparency and trustworthiness of AI systems for humans. One of the unsolved challenges in XAI is estimating the performance of these explanation methods for neural networks,...
Beyond Explaining: Opportunities and Challenges of XAI-Based Model Improvement
Explainable Artificial Intelligence (XAI) is an emerging research field bringing transparency to highly complex and opaque machine learning (ML) models. This paper offers a comprehensive overview of techniques that apply XAI practically to...
Sydnone Methides: Intermediates between Mesoionic Compounds and Mesoionic N-Heterocyclic Olefins
Sydnone methides represent an almost unknown class of mesoionic compounds which possess exocyclic carbon substituents instead of oxygen (sydnones) or nitrogen (sydnone imines) in the 5-position of a 1,2,3-oxadiazolium ring. Unsubstituted...
Bridging the Gap: Gaze Events as Interpretable Concepts to Explain Deep Neural Sequence Models
Recent work in XAI for eye tracking data has evaluated the suitability of feature attribution methods to explain the output of deep neural sequence models for the task of oculomotric biometric identification. In this work, we employ established...
Semantic modeling of cell damage prediction: A machine learning approach at human-level performance in dermatology
In this work, we investigate cell damage in whole slide images of the epidermis. A common way for pathologists to annotate a score characterising the degree of damage for these samples is the ratio between healthy and unhealthy nuclei. The...
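A tiny, purely illustrative computation of that score; the function name and the example counts below are made up for illustration.

def damage_score(n_healthy: int, n_unhealthy: int) -> float:
    # Ratio of unhealthy to healthy nuclei, the score pathologists use to grade damage.
    if n_healthy == 0:
        raise ValueError("no healthy nuclei counted; the ratio is undefined")
    return n_unhealthy / n_healthy

# e.g. 120 unhealthy vs. 480 healthy nuclei gives a score of 0.25
print(damage_score(n_healthy=480, n_unhealthy=120))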
Data Models for Dataset Drift Controls in Machine Learning With Optical Images
In this study, we pair traditional machine learning with physical optics to obtain explicit and differentiable data models. We demonstrate how such data models can be constructed for image data and used to control downstream machine learning...
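One way to read "explicit and differentiable data model" in code is a physical forward model whose parameters are plain tensors that gradients flow through; the Gaussian point-spread-function blur below is only an assumed, simplified stand-in, not the data model constructed in the study.

import torch
import torch.nn.functional as F

def optical_data_model(scene: torch.Tensor, sigma: torch.Tensor, size: int = 9) -> torch.Tensor:
    # Explicit forward model: blur the latent scene with a Gaussian PSF whose
    # width `sigma` is a physical parameter that gradients can flow through.
    coords = (torch.arange(size) - size // 2).float()
    g = torch.exp(-coords ** 2 / (2 * sigma ** 2))
    kernel = torch.outer(g, g)
    kernel = (kernel / kernel.sum()).view(1, 1, size, size)
    return F.conv2d(scene, kernel, padding=size // 2)

# Because the model is differentiable, the sensitivity of downstream outputs to a
# drift in the optical parameter can be inspected directly, e.g. d(mean)/d(sigma).
scene = torch.rand(1, 1, 32, 32)
sigma = torch.tensor(1.5, requires_grad=True)
optical_data_model(scene, sigma).mean().backward()
print(sigma.grad)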
Demonstration of a 15-Mode Network Node Supported by a Field-Deployed 15-Mode Fiber
Researchers from NICT, University of L’Aquila, Finisar, Prysmian and Nokia Bell Labs demonstrate a 2-line-side, 15-mode spatial-division multiplexing network node based on fifteen 2×2 wavelength cross-connects to direct up to six 5 Tb/s, 15-mode,...