Transparent Medical Expert Companion

Duration: September 2018 – August 2021

Funded by Federal Ministry of Education and Research (BMBF)

The quantity of data in the health sector (e.g., imagery and ECG time series) is growing exponentially. Machine learning can support the analysis and interpretation of these data so that medical practitioners can reach diagnoses more efficiently (see, e.g., MIT Technology Review). TraMeExCo explores the robustness and transparency of diagnostic predictions made by machine learning. For two clinical fields (pathology and pain analysis), machine learning methods are tested on three types of data (microscopy images, pain videos, and ECG time series). First, deep learning methods are combined with "white-box" learning methods. Then, explanation methods are applied to the learned classifiers to make the system's decisions understandable. Finally, Bayesian deep learning is used to quantify the uncertainties inherent in the system and the data. From these applications, two prototype "Transparent Companions for Medical Applications" are developed.
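The explanation step can be illustrated with a minimal sketch of the LRP-epsilon rule (Layer-wise Relevance Propagation) on a toy fully connected ReLU network. The network size, the random weights, and the epsilon stabiliser value below are illustrative assumptions, not the project's actual models.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-layer network: 4 inputs -> 3 hidden units (ReLU) -> 2 output logits.
W1 = rng.normal(size=(4, 3))
W2 = rng.normal(size=(3, 2))

def forward(x):
    a1 = np.maximum(0.0, x @ W1)   # hidden activations
    out = a1 @ W2                  # output logits
    return a1, out

def lrp_epsilon(x, target, eps=1e-6):
    """Redistribute the target logit back onto the inputs (LRP-epsilon rule)."""
    a1, out = forward(x)
    # Start with all relevance concentrated on the explained output neuron.
    R2 = np.zeros_like(out)
    R2[target] = out[target]
    # Output layer -> hidden layer: divide relevance by stabilised pre-activations,
    # then redistribute in proportion to each neuron's contribution.
    z2 = a1 @ W2
    s2 = R2 / (z2 + eps * np.sign(z2))
    R1 = a1 * (W2 @ s2)
    # Hidden layer -> input layer: same rule one layer down.
    z1 = x @ W1
    s1 = R1 / (z1 + eps * np.sign(z1))
    R0 = x * (W1 @ s1)
    return R0

x = rng.normal(size=4)
relevances = lrp_epsilon(x, target=0)
# Conservation property: input relevances approximately sum to the explained logit.
print(relevances.sum(), forward(x)[1][0])
```

The conservation check at the end is the defining property of LRP: the explanation accounts for (approximately) the entire output score, so each input's relevance can be read as its share of the decision.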

Project partners:

Fraunhofer IIS and University of Bamberg


Publications:

Anders, C. J., Weber, L., Neumann, D., Samek, W., Müller, K.-R., Lapuschkin, S. (2022). Finding and removing Clever Hans: Using explanation methods to debug and improve deep models. Information Fusion.

Yeom, S.-K., Seegerer, P., Lapuschkin, S., Binder, A., Wiedemann, S., Müller, K.-R., Samek, W. (2021). Pruning by explaining: A novel criterion for deep neural network pruning. Pattern Recognition.

Sun, J., Lapuschkin, S., Samek, W., Binder, A. (2022). Explain and improve: LRP-inference fine-tuning for image captioning models. Information Fusion.

Samek, W., Binder, A., Montavon, G., Lapuschkin, S., Müller, K.-R. (2017). Evaluating the visualization of what a deep neural network has learned. IEEE Transactions on Neural Networks and Learning Systems, 28(11), 2660–2673.

Montavon, G., Samek, W., Müller, K.-R. (2018). Methods for interpreting and understanding deep neural networks. Digital Signal Processing, 73, 1–15.

Strodthoff, N., Strodthoff, C. (2019). Detecting and interpreting myocardial infarctions using fully convolutional neural networks. Physiological Measurement, 40(1), 015001.

Horst, F., Lapuschkin, S., Samek, W., Müller, K.-R., Schöllhorn, W. I. (2019). Explaining the unique nature of individual gait patterns with deep learning. Scientific Reports, 9(1), 1–13.

Kohlbrenner, M., Bauer, A., Nakajima, S., Binder, A., Samek, W., Lapuschkin, S. (2020). Towards best practice in explaining neural network decisions with LRP. Proceedings of the 2020 International Joint Conference on Neural Networks (IJCNN), 1–7.

Hägele, M., Seegerer, P., Lapuschkin, S., Bockmayr, M., Samek, W., Klauschen, F., Müller, K.-R., Binder, A. (2020). Resolving challenges in deep learning-based analyses of histopathological images using explanation methods. Scientific Reports, 10(1), 1–12.

Aeles, J., Horst, F., Lapuschkin, S., Lacourpaille, L., Hug, F. (2021). Revealing the unique features of each individual's muscle activation signatures. Journal of the Royal Society Interface, 18(174), 20200770.