Deep learning methods, in particular artificial neural networks, are nowadays applied with tremendous success, which relies crucially on their ability to identify patterns within very complex data in a data-driven way. The AML group has extensive experience in applying and adapting state-of-the-art deep learning algorithms to a diverse set of data modalities, ranging from text and imaging/video data to time-series data, as documented by publications in high-quality journals. In the process, we have built a foundational knowledge base that allows us to develop cutting-edge technology solutions based on our research.
Supervised learning algorithms based on deep learning have led to spectacular successes across application domains. A significant drawback is their dependency on the large labeled datasets required for training, which is particularly pronounced in the medical domain, where labels are expensive and clinical ground truth is hard to establish. One approach to alleviating this label scarcity is self-supervised learning: algorithms that learn useful representations from the input data alone, which can subsequently be exploited for downstream tasks. The AML group is interested in developing and adapting such algorithms for different data modalities, in evaluating the learned representations, and in assessing their impact on quality criteria of models fine-tuned on downstream tasks.
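To make the idea of learning representations "from input data alone" concrete, the following is a minimal, purely illustrative numpy sketch of a contrastive self-supervised objective (an InfoNCE-style loss, as used by methods such as SimCLR). It is not the group's actual method; the function name and toy embeddings are hypothetical. Embeddings of two augmented views of the same inputs are pulled together (positives, the diagonal) and pushed apart from all other pairs (negatives):

```python
import numpy as np

def info_nce_loss(z_a, z_b, temperature=0.5):
    """InfoNCE-style contrastive loss between two views of a batch.

    z_a, z_b: (batch, dim) embeddings of two augmentations of the
    same inputs. Matching rows are positives; all other pairs are
    negatives. (Illustrative sketch, not a production objective.)
    """
    # L2-normalize so the dot product becomes cosine similarity
    z_a = z_a / np.linalg.norm(z_a, axis=1, keepdims=True)
    z_b = z_b / np.linalg.norm(z_b, axis=1, keepdims=True)
    sim = z_a @ z_b.T / temperature  # (batch, batch) similarity matrix
    # Cross-entropy with the diagonal (the matching view) as target
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))

views = np.eye(4)  # four toy embeddings, one per input
matched = info_nce_loss(views, views)                         # positives agree
mismatched = info_nce_loss(views, np.roll(views, 1, axis=0))  # positives disagree
print(matched < mismatched)  # True: agreeing views give a lower loss
```

Training an encoder to minimize such a loss requires no labels at all; only the augmentation pipeline defines which examples count as "the same", which is why the approach transfers across data modalities.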
A quality-assured AI model is indispensable if it is to be deployed in practice. In this regard, the notion of quality must cover much more than quantitative performance alone and should include explainability, robustness, uncertainty quantification, and data quality assessment. Here, we draw on the substantial research effort invested in, for example, the explainability and robustness of AI models in order to reach a better understanding and a more trustworthy use of AI. This is particularly important in sensitive application areas such as healthcare. The AML group is interested and experienced in evaluating and improving AI models with respect to quality criteria such as robustness, explainability, and uncertainty.
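As a small illustration of one of these quality criteria, uncertainty quantification, the following hypothetical numpy sketch computes the predictive entropy of an ensemble of models (one common, simple uncertainty estimate; the function name and toy probabilities are invented for this example). When ensemble members disagree, the averaged prediction is flatter and its entropy is higher, flagging an input whose prediction should be trusted less:

```python
import numpy as np

def predictive_entropy(member_probs):
    """Entropy of the mean prediction of an ensemble.

    member_probs: (n_members, n_classes) class probabilities from
    each ensemble member for a single input. Higher entropy of the
    averaged prediction indicates higher predictive uncertainty.
    """
    mean = member_probs.mean(axis=0)
    return -np.sum(mean * np.log(mean))

# Three ensemble members on a binary task: agreement vs. disagreement
agree = np.array([[0.90, 0.10], [0.92, 0.08], [0.88, 0.12]])
disagree = np.array([[0.90, 0.10], [0.50, 0.50], [0.10, 0.90]])
print(predictive_entropy(agree) < predictive_entropy(disagree))  # True
```

In a setting such as healthcare, a high-entropy prediction could, for instance, be routed to a human expert rather than acted on automatically.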