The main research focus of the Efficient Deep Learning group is the development of distribution scenarios and applications for neural networks and artificial intelligence. This includes distributed and federated learning and training across many devices, which makes it possible to overcome the hardware, processing, and operational restrictions of a single device.
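Training across many devices is often organized around federated averaging (FedAvg): each device improves the model on its local data, and a server aggregates the results. The sketch below is a toy illustration of that communication pattern with a one-parameter linear model; all names and the model are illustrative assumptions, not the group's actual methods.

```python
# Toy sketch of federated averaging (FedAvg): clients train locally,
# the server aggregates. The 1-D linear model y = w * x is illustrative.

def local_update(weights, data, lr=0.1):
    """One gradient-descent step on a client's local data
    for the model y = w * x with squared-error loss."""
    w = weights
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return w - lr * grad

def federated_average(client_weights, client_sizes):
    """Server-side aggregation: size-weighted mean of client models."""
    total = sum(client_sizes)
    return sum(w * n for w, n in zip(client_weights, client_sizes)) / total

# Two clients, each privately holding samples of y = 2x.
clients = [[(1.0, 2.0), (2.0, 4.0)], [(3.0, 6.0)]]
w_global = 0.0
for _ in range(50):                       # communication rounds
    updates = [local_update(w_global, d) for d in clients]
    w_global = federated_average(updates, [len(d) for d in clients])

print(round(w_global, 3))  # converges toward the true slope 2.0
```

Only the model parameters travel between clients and server; the raw data never leaves the devices, which is the point of the federated setting, and also why compressing those repeated model transmissions matters.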
The group also develops efficient neural network compression methods and standardizes them through international bodies such as ISO/IEC and 3GPP. These methods allow trained neural networks to be transmitted and distributed at a fraction of their original size while maintaining their inference capabilities. They benefit every distributed artificial intelligence application that involves neural network transmission: transmitting individually trained networks, distributing network updates, and in particular online distributed scenarios with their constant communication rounds.
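One core building block behind such size reductions is weight quantization: replacing float weights with small integers that are cheap to entropy-code. The sketch below is a minimal illustration of uniform quantization and its bounded reconstruction error; it is an assumption-laden toy, not the standardized coding pipeline, which combines quantization with further entropy-coding stages.

```python
# Toy illustration of lossy weight compression via uniform quantization.
# Real coding pipelines add entropy coding on top; this sketch only shows
# the size/accuracy trade-off idea.

def quantize(weights, step):
    """Map each float weight to the nearest integer multiple of `step`."""
    return [round(w / step) for w in weights]

def dequantize(indices, step):
    """Reconstruct approximate weights from the quantized indices."""
    return [i * step for i in indices]

weights = [0.731, -0.112, 0.004, 0.250, -0.498]
step = 0.05                      # coarser step -> smaller compressed model
q = quantize(weights, step)      # small integers, cheap to entropy-code
reconstructed = dequantize(q, step)

max_err = max(abs(w - r) for w, r in zip(weights, reconstructed))
print(q)                         # [15, -2, 0, 5, -10]
print(max_err <= step / 2)       # error is bounded by step/2 -> True
```

Choosing a larger `step` shrinks the transmitted integers (and thus the bitstream) at the cost of a larger, but still bounded, weight error.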
Our research topics span federated learning, neural network compression, and distributed neural network transmission, with applications in signal processing, computer vision, and communication.
Journal papers, conference proceedings, talks and tutorials, standardization contributions, and books: find out about our group's publications.
Your partner for research and product development: get in touch with our scientists.