The main research focus of the Efficient Deep Learning group is the development of distribution scenarios and applications for neural networks and artificial intelligence. This includes distributed and federated learning and training across many devices, which makes it possible to overcome the hardware, processing, or operational restrictions of any single device.
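As an illustration of the federated setting described above, the following is a minimal sketch of federated averaging, one common building block of federated learning: each device trains locally, and only model weights (never raw data) travel to a server that averages them into a global model. All names here are illustrative and not the group's actual software.

```python
def federated_average(client_weights):
    """Average per-client weight vectors into one global model.

    Each client sends only its trained weights; the raw training
    data never leaves the device.
    """
    n_clients = len(client_weights)
    n_params = len(client_weights[0])
    return [
        sum(w[i] for w in client_weights) / n_clients
        for i in range(n_params)
    ]

# Three hypothetical devices, each holding locally trained weights
# for the same small model.
clients = [
    [0.2, 1.0, -0.5],
    [0.4, 0.8, -0.3],
    [0.0, 1.2, -0.4],
]
global_model = federated_average(clients)
print(global_model)
```

In a real system this averaging step repeats over many communication rounds, which is exactly why efficient transmission of network updates matters.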
The group also develops efficient neural network compression methods and standardizes them through international bodies such as ISO/IEC and 3GPP. These methods allow trained neural networks to be transmitted and distributed at a fraction of their original size while maintaining the same inference capabilities. They benefit every distributed artificial intelligence application that requires neural network transmission: transfers of individually trained networks, network updates, and in particular online distributed scenarios with their constant communication rounds.
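One core idea behind such compression methods can be sketched with uniform weight quantization: representing 32-bit floating-point weights as 8-bit integers cuts the transmitted payload to roughly a quarter even before entropy coding, while the reconstructed weights stay close enough for inference. This is a simplified sketch, not the standardized codec; all names and parameters are illustrative.

```python
def quantize(weights, n_bits=8):
    """Map float weights to integer levels on a uniform grid."""
    # Scale so the largest magnitude fits the signed integer range.
    scale = max(abs(w) for w in weights) / (2 ** (n_bits - 1) - 1)
    return [round(w / scale) for w in weights], scale

def dequantize(levels, scale):
    """Reconstruct approximate weights for inference."""
    return [q * scale for q in levels]

weights = [0.81, -0.33, 0.05, -1.27, 0.64]   # toy 32-bit float weights
levels, scale = quantize(weights)             # 8-bit integers + one scale
restored = dequantize(levels, scale)

# 8-bit levels instead of 32-bit floats: ~4x fewer bits on the wire,
# with reconstruction error bounded by half a quantization step.
print(levels, scale)
print(max(abs(w - r) for w, r in zip(weights, restored)))
```

Standardized codecs add further stages (e.g. entropy coding of the integer levels), but the size/accuracy trade-off above is the basic mechanism.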