This multi-disciplinary project contributes to reliable machine learning (ML). Specifically, it explores methods that integrate a priori information into machine learning models to enhance their performance, e.g., their reliability and trustworthiness. One possible application of this research is improving the resistance of artificial intelligence systems to adversarial attacks.
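To illustrate the kind of threat the project addresses, the sketch below constructs a simple gradient-sign adversarial perturbation (in the spirit of FGSM) against a toy logistic model. This is a minimal, hypothetical example for illustration only; the weights, inputs, and the NumPy implementation are assumptions and do not come from the cited work.

```python
import numpy as np

# Hypothetical fixed logistic model: p(y=1|x) = sigmoid(w.x + b)
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def nll_loss(x, y):
    """Negative log-likelihood of label y under the toy model."""
    p = sigmoid(w @ x + b)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

x = np.array([0.2, 0.4, -0.1])  # clean input (illustrative values)
y = 1.0                          # true label

# Gradient of the loss w.r.t. the input: for logistic NLL it is (p - y) * w
p = sigmoid(w @ x + b)
grad_x = (p - y) * w

# Gradient-sign perturbation with a small budget eps
eps = 0.1
x_adv = x + eps * np.sign(grad_x)

# The perturbed input incurs a strictly higher loss than the clean one
print(nll_loss(x, y), nll_loss(x_adv, y))
```

Even this tiny perturbation (at most 0.1 per coordinate) measurably increases the model's loss, which is the mechanism defenses such as denoising-based approaches aim to counteract.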
FU Berlin, HU Berlin, TU Berlin, University of Potsdam, Max Planck Institute for the History of Science, Max Planck Institute for Molecular Genetics, MDC, Zuse Institute Berlin, Charité, WIAS Berlin, and DHZB.
Samek, W., Wiegand, T., Müller, K.-R. (2018). Explainable artificial intelligence: Understanding, visualizing and interpreting deep learning models. ITU Journal: ICT Discoveries, 1(1), 39–48.
Seibold, C., Samek, W., Hilsmann, A., Eisert, P. (2018). Accurate and robust neural networks for security related applications exampled by face morphing attacks. Preprint at arXiv:1806.04265.
Srinivasan, V., Marban, A., Müller, K.-R., Samek, W., Nakajima, S. (2018). Counterstrike: Defending deep learning architectures against adversarial samples by Langevin dynamics with supervised denoising autoencoder. Preprint at arXiv:1805.12017.