MATH+ Project

Quantifying Uncertainties in Explainable AI

Duration: January 2019 – December 2021

Funded by MATH+

The abundance of training data and advances in computing power have made deep learning algorithms feasible. Thus far, most deep learning studies have been empirically driven, and the resulting models are usually viewed as "black boxes": they produce a decision, but the grounds for that decision remain unclear. In this MATH+ project, a theoretical understanding of the explainability of deep neural networks is developed. For a given decision, the input features that played the largest role are identified, and the uncertainty associated with the decision is quantified.
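To illustrate what such feature attribution can look like in practice, here is a minimal, self-contained sketch of layer-wise relevance propagation (the LRP-epsilon rule introduced in Bach et al., 2015, listed under Publications below) applied to a toy fully-connected ReLU network. The network sizes, random weights, and the helper names forward and lrp_epsilon are hypothetical placeholders chosen purely for illustration, not code from the project.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer network: 4 inputs -> 6 hidden ReLU units -> 3 output scores.
# Weights are random placeholders for illustration only.
W1 = rng.normal(size=(4, 6))
W2 = rng.normal(size=(6, 3))

def forward(x):
    """Return the activations of every layer for input x."""
    a1 = np.maximum(0.0, x @ W1)   # hidden ReLU activations
    a2 = a1 @ W2                   # output scores
    return [x, a1, a2]

def lrp_epsilon(a, W, R, eps=1e-6):
    """Propagate relevance R one layer back (LRP-epsilon rule):
    R_i = a_i * sum_j w_ij * R_j / (sum_k a_k w_kj + eps * sign(.))."""
    z = a @ W
    z = z + eps * np.where(z >= 0, 1.0, -1.0)  # stabilize against division by zero
    s = R / z                                  # per-neuron relevance ratios
    return a * (s @ W.T)                       # redistribute onto the layer below

x = rng.normal(size=4)
acts = forward(x)

# Start from the score of the predicted class, then propagate it back to the input.
R = np.zeros_like(acts[2])
k = int(np.argmax(acts[2]))
R[k] = acts[2][k]

R = lrp_epsilon(acts[1], W2, R)  # output layer -> hidden layer
R = lrp_epsilon(acts[0], W1, R)  # hidden layer -> input features

print("Input feature relevances:", R)  # larger magnitude = larger role in the decision
```

In this scheme, relevance is approximately conserved from layer to layer (up to the epsilon stabilizer), so the input relevances roughly sum to the output score being explained, which is what makes them interpretable as a decomposition of the decision.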

Project partners:

Prof. Gitta Kutyniok, Prof. Klaus-Robert Müller, and Dr. Wojciech Samek

Publications:

Bach, S., Binder, A., Montavon, G., Klauschen, F., Müller, K.-R., Samek, W. (2015). On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation. PLOS ONE, 10(7), e0130140.

Montavon, G., Lapuschkin, S., Binder, A., Samek, W., Müller, K.-R. (2017). Explaining nonlinear classification decisions with deep Taylor decomposition. Pattern Recognition, 65, 211–222.

Bubba, T.A., Kutyniok, G., Lassas, M., März, M., Samek, W., Siltanen, S., Srinivasan, V. (2018). Learning the invisible: A hybrid deep learning-shearlet framework for limited angle computed tomography. Preprint at arXiv:1811.04602.

Montavon, G., Samek, W., Müller, K.-R. (2018). Methods for interpreting and understanding deep neural networks. Digital Signal Processing, 73, 1–15.