MATH+ Project

Quantifying Uncertainties in Explainable AI

Duration: January 2019 – December 2021

Funded by MATH+

The abundance of training data and the growth of computing power have made deep learning algorithms practical. To date, however, most deep learning studies have been empirically driven, and the resulting models are usually viewed as "black boxes": they produce a decision, but the grounds for that decision remain unclear. This MATH+ project develops a theoretical understanding of the explainability of deep neural networks. For a given decision, the input features that contributed most to it are identified, and the uncertainty associated with the decision is quantified.
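A central technique in this line of work is layer-wise relevance propagation (LRP), described in the Bach et al. (2015) publication listed below: a network's output score is redistributed backwards through the layers so that each input feature receives a relevance value. The following is a minimal sketch of the LRP epsilon rule for a single dense ReLU layer; the array names, random data, and bias-free layer are illustrative assumptions, not the project's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.random(4)                 # input activations of the layer
W = rng.standard_normal((4, 3))   # layer weights (bias omitted for brevity)
z = x @ W                         # pre-activations
a = np.maximum(z, 0)              # ReLU activations

R_out = a        # relevance arriving at the layer's outputs
eps = 1e-6       # epsilon stabilizer against division by ~0

# Epsilon rule: each input i receives relevance in proportion to its
# contribution x_i * w_ij to the pre-activation z_j.
s = R_out / (z + eps * np.sign(z))
R_in = x * (W @ s)

# Up to the epsilon stabilizer, relevance is conserved across the layer:
# sum(R_in) is approximately sum(R_out).
print(R_in.sum(), R_out.sum())
```

Repeating this redistribution layer by layer, from the output back to the pixels, yields the pixel-wise relevance maps discussed in the publications below.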

Project partners:

Prof. Gitta Kutyniok (TU Berlin), Prof. Klaus-Robert Müller (TU Berlin)


Publications:

Yeom, S.-K., Seegerer, P., Lapuschkin, S., Binder, A., Wiedemann, S., Müller, K.-R., Samek, W. (2021). Pruning by explaining: A novel criterion for deep neural network pruning. Pattern Recognition.

Samek, W., Montavon, G., Lapuschkin, S., Anders, C. J., Müller, K.-R. (2021). Explaining deep neural networks and beyond: A review of methods and applications. Proceedings of the IEEE, 109(3), 247–278.

Bach, S., Binder, A., Montavon, G., Klauschen, F., Müller, K.-R., Samek, W. (2015). On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation. PLOS ONE, 10(7), e0130140.

Montavon, G., Lapuschkin, S., Binder, A., Samek, W., Müller, K.-R. (2017). Explaining nonlinear classification decisions with deep Taylor decomposition. Pattern Recognition, 65, 211–222.

Bubba, T. A., Kutyniok, G., Lassas, M., März, M., Samek, W., Siltanen, S., Srinivasan, V. (2018). Learning the invisible: A hybrid deep learning-shearlet framework for limited angle computed tomography. arXiv preprint arXiv:1811.04602.

Montavon, G., Samek, W., Müller, K.-R. (2018). Methods for interpreting and understanding deep neural networks. Digital Signal Processing, 73, 1–15.