April 7, 2021

Fraunhofer HHI and TU Berlin researchers receive award for "best scientific contribution" at medical imaging conference BVM

The Fraunhofer Heinrich Hertz Institute (HHI) researchers Dr. Wojciech Samek and Luis Oala, together with Jan Macdonald and Maximilian März (TU Berlin), were honored with the award for "best scientific contribution" at this year's medical imaging conference BVM. The scientists received the award for their paper "Interval Neural Networks as Instability Detectors for Image Reconstructions", which demonstrates how uncertainty quantification can be used to detect errors in deep learning models.

The award winners were announced during the virtual BVM conference on March 9, 2021. The award for "best scientific contribution" is granted each year by the BVM Award Committee. It honors innovative research with a methodological focus on medical image processing in a medically relevant application context. The aim of the BVM conference is to present current research results and to foster discussion among medical and technical scientists, industry, and clinical users. The conference is organized annually by Regensburg Medical Image Computing (ReMIC) and Ostbayerische Technische Hochschule Regensburg (OTH Regensburg).

Modern AI systems based on deep learning constitute a flexible, complex and often opaque technology. Gaps in the understanding of an AI system’s behavior create risks of system failure. Hence, identifying failure modes in AI systems is an important prerequisite for their reliable deployment in real-world settings.

The award-winning paper examines methods to identify these failure modes. It draws on uncertainty quantification, a research field concerned with quantifying, characterizing, tracing, and managing uncertainty in computational and real-world systems. The researchers successfully employed so-called interval neural networks, a new uncertainty quantification method they developed, to detect different error types in deep image reconstruction models. The paper’s results underline the potential of uncertainty quantification as a fine-grained alarm system for monitoring deep learning models during deployment. This is an important contribution to making the use of AI systems safer and more reliable.
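To illustrate the general idea, the minimal sketch below propagates an input interval through a small toy network using standard interval arithmetic and uses the width of the resulting output interval as a per-output uncertainty score, with wide intervals flagged as potential instabilities. All names, network sizes, the perturbation radius, and the thresholding rule are illustrative assumptions, not the authors' implementation; in particular, the paper's interval neural networks additionally learn interval-valued weights, which this simplified example omits.

```python
import numpy as np

def interval_affine(x_lo, x_hi, W, b):
    """Propagate an input interval [x_lo, x_hi] through an affine layer
    y = W @ x + b using interval arithmetic. (Illustrative sketch; the
    paper's INNs learn interval-valued weights, omitted here.)"""
    W_pos = np.maximum(W, 0.0)  # positive part of the weights
    W_neg = np.minimum(W, 0.0)  # negative part of the weights
    y_lo = W_pos @ x_lo + W_neg @ x_hi + b
    y_hi = W_pos @ x_hi + W_neg @ x_lo + b
    return y_lo, y_hi

def relu_interval(x_lo, x_hi):
    """ReLU is monotone, so it can be applied to both bounds directly."""
    return np.maximum(x_lo, 0.0), np.maximum(x_hi, 0.0)

rng = np.random.default_rng(0)

# Toy two-layer network standing in for a deep image reconstruction model.
W1, b1 = rng.standard_normal((64, 32)) * 0.1, np.zeros(64)
W2, b2 = rng.standard_normal((32, 64)) * 0.1, np.zeros(32)

x = rng.standard_normal(32)
eps = 0.05                     # hypothetical input perturbation radius
x_lo, x_hi = x - eps, x + eps

h_lo, h_hi = relu_interval(*interval_affine(x_lo, x_hi, W1, b1))
y_lo, y_hi = interval_affine(h_lo, h_hi, W2, b2)

# Per-output uncertainty score: the width of the prediction interval.
uncertainty = y_hi - y_lo

# A simple alarm: flag outputs whose interval width exceeds a cutoff.
threshold = np.percentile(uncertainty, 95)  # hypothetical cutoff
flagged = np.flatnonzero(uncertainty > threshold)
print(f"max interval width: {uncertainty.max():.3f}")
print(f"flagged output indices: {flagged}")
```

In this toy setting, the interval width plays the role of the fine-grained alarm signal described above: individual outputs (for instance, pixels of a reconstructed image) with unusually wide prediction intervals can be flagged for inspection rather than discarding the whole reconstruction.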