Reliable estimation of parameters from noisy and high-dimensional observations is a prerequisite for the successful application of machine learning algorithms. This challenging task becomes significantly more difficult if the data set contains outliers, as they may heavily bias the estimation and the subsequent statistical analysis. Changes in the data distribution over time, so-called nonstationarity, may also degrade performance when the classification algorithm is not robust.
We develop machine learning methods which are robust against outliers and nonstationarity.
The main focus of our work lies on:
- Detection of individual and structural outliers in real-world data.
- Investigating the advantages of robust divergences for parameter estimation.
- Formulation of machine learning algorithms as divergence maximization problems.
- Efficient use of methods from information geometry in practice.
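To illustrate the second point, the sketch below shows how a beta divergence (density power divergence) yields a robust parameter estimate: observations that are unlikely under the current model are exponentially down-weighted, so gross outliers barely influence the result. This is a minimal illustration of the general principle, not code from the papers below; the choices of `beta`, the fixed scale `sigma`, and the iteration count are assumptions made for the example.

```python
import numpy as np

def beta_robust_mean(x, beta=0.5, sigma=1.0, n_iter=50):
    """Fixed-point iteration for a minimum beta-divergence estimate
    of a Gaussian mean. Each sample is weighted by its model density
    raised to the power beta, so outliers receive near-zero weight.
    Illustrative sketch with assumed hyperparameters."""
    mu = np.median(x)  # robust starting point
    for _ in range(n_iter):
        w = np.exp(-beta * (x - mu) ** 2 / (2 * sigma ** 2))
        mu = np.sum(w * x) / np.sum(w)
    return mu

# 95 clean samples around 0 plus 5 gross outliers at 50.
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0.0, 1.0, 95), np.full(5, 50.0)])
print(np.mean(x))            # ordinary mean, pulled toward the outliers
print(beta_robust_mean(x))   # stays near the true mean of 0
```

Setting `beta` close to 0 recovers the ordinary (maximum-likelihood) estimate, while larger values trade statistical efficiency for robustness.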
- W. Samek, M. Kawanabe, and K.-R. Müller, “Divergence-based Framework for Common Spatial Patterns Algorithms”, IEEE Reviews in Biomedical Engineering, 7:50-72, April 2014.
- W. Samek, D. Blythe, K.-R. Müller, and M. Kawanabe, “Robust Spatial Filtering with Beta Divergence”, Advances in Neural Information Processing Systems 26 (NIPS), 1007-1015, December 2013.
- W. Samek and M. Kawanabe, “Robust Common Spatial Patterns by Minimum Divergence Covariance Estimator”, Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2059-2062, May 2014.