Transparent Medical Expert Companion

Duration: September 2018 – August 2021

Funded by the German Federal Ministry of Education and Research (BMBF)

The quantity of data in the health sector (e.g., medical imagery and ECG time series) is growing exponentially. Machine learning can support the analysis and interpretation of these data so that medical practitioners can arrive at diagnoses more efficiently (see, e.g., MIT Technology Review). TraMeExCo explores the robustness and transparency of machine-learning-based diagnostic prediction. For two clinical fields (pathology and pain analysis), machine learning methods are tested on three types of data (microscopy images, pain videos, and ECG time series). First, deep learning methods are combined with "white-box" learning methods. The learned classifiers are then interpreted to make the system's decisions understandable. Finally, Bayesian deep learning is used to quantify the uncertainties inherent in the model and the data. Based on these applications, two prototype "Transparent Companions for Medical Applications" are developed.
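One common way to obtain the kind of uncertainty estimates that Bayesian deep learning provides is Monte Carlo dropout: dropout is kept active at prediction time, and many stochastic forward passes are averaged, with the spread across passes serving as an uncertainty proxy. The sketch below is illustrative only, not the project's actual implementation; the tiny network, its random weights, and the input vector are all hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy weights standing in for a trained two-layer classifier
# (in a TraMeExCo-style setting the input might be ECG features).
W1 = rng.normal(size=(16, 8))
W2 = rng.normal(size=(8, 2))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def forward(x, drop_p=0.5):
    """One stochastic forward pass with dropout kept ON at inference."""
    h = np.maximum(x @ W1, 0.0)          # ReLU hidden layer
    mask = rng.random(h.shape) > drop_p  # random dropout mask
    h = h * mask / (1.0 - drop_p)        # inverted-dropout scaling
    return softmax(h @ W2)

def mc_dropout_predict(x, n_samples=200):
    """Monte Carlo dropout: average many stochastic passes.

    Returns the mean class probabilities (the prediction) and their
    standard deviation across passes (a proxy for model uncertainty).
    """
    probs = np.stack([forward(x) for _ in range(n_samples)])
    return probs.mean(axis=0), probs.std(axis=0)

x = rng.normal(size=16)  # a synthetic input vector
mean_p, std_p = mc_dropout_predict(x)
```

A large standard deviation on the predicted class would flag a diagnosis the system is unsure about, which is exactly the kind of signal a transparent medical companion should surface to the practitioner.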

Project partners:

Fraunhofer IIS and University of Bamberg

Publications:

Samek, W., Binder, A., Montavon, G., Lapuschkin, S., Müller, K.-R. (2017). Evaluating the visualization of what a deep neural network has learned. IEEE Transactions on Neural Networks and Learning Systems, 28(11), 2660–2673.

Montavon, G., Samek, W., Müller, K.-R. (2018). Methods for interpreting and understanding deep neural networks. Digital Signal Processing, 73, 1–15.

Strodthoff, N., Strodthoff, C. (2018). Detecting and interpreting myocardial infarctions using fully convolutional neural networks. arXiv preprint arXiv:1806.07385.