At the International Conference on Machine Learning (ICML) 2019 "Joint Workshop on On-Device Machine Learning & Compact Deep Neural Network Representations", Fraunhofer HHI received the Best Paper Award for "DeepCABAC: Context-adaptive binary arithmetic coding for deep neural network compression". Through its adaptive, context-based rate modeling, DeepCABAC enables optimal quantization and coding of a neural network's weight matrices, and thus very strong compression without loss of performance.
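The two-stage idea behind this, quantizing the weights and then entropy-coding the quantized symbols, can be sketched in a few lines. This is an illustrative toy, not DeepCABAC itself: the real method uses context-adaptive binary arithmetic coding with rate modeling, whereas here a uniform quantizer and a Shannon-entropy estimate stand in for those stages, and the step size and toy weight distribution are arbitrary choices.

```python
import math
import random
from collections import Counter

def quantize(weights, step):
    """Map each float weight to the nearest multiple of `step`."""
    return [round(w / step) for w in weights]

def entropy_bits(symbols):
    """Shannon entropy bound on the coded size in bits --
    roughly what an ideal arithmetic coder would approach."""
    counts = Counter(symbols)
    n = len(symbols)
    return sum(-c * math.log2(c / n) for c in counts.values())

# Toy "layer": 10,000 weights drawn from a narrow Gaussian,
# as trained network weights tend to cluster around zero.
random.seed(0)
weights = [random.gauss(0.0, 0.05) for _ in range(10_000)]

q = quantize(weights, step=0.02)
raw_bits = 32 * len(weights)       # float32 storage
coded_bits = entropy_bits(q)       # entropy-coded estimate

print(f"raw: {raw_bits} bits, coded: ~{coded_bits:.0f} bits")
print(f"coded size is {coded_bits / raw_bits:.1%} of raw")
```

Because most quantized weights fall on a handful of symbols near zero, the entropy estimate comes out far below the 32 bits per weight of raw float storage, which is the effect a context-adaptive coder exploits much more aggressively.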
There are many reasons to compress highly complex neural networks. Efficient design is particularly important in practical applications, for example to conserve a mobile phone's battery and memory. At this year's ICML workshop, researchers, developers and practitioners came together to present their results and develop joint solutions.
To develop DeepCABAC, researchers from the fields of Video Coding and Machine Learning worked in close cooperation at Fraunhofer HHI. Through this exchange of knowledge and experience, many years of expertise in Video Coding could be applied to the compression of neural networks. This demonstrated not only that Machine Learning is successfully used in Video Coding, but also that Machine Learning can in turn benefit from Video Coding.
The compression of neural networks is particularly concerned with avoiding performance losses. Fraunhofer HHI's compression method was able to compress the VGG16 network from 553 MB to 8.7 MB, i.e. to about 1.6 percent of its original size, without any loss of performance. To date, this is the best compression result for this network.
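As a quick sanity check on the figures above, the reported sizes of 553 MB and 8.7 MB work out to roughly 1.6 percent of the original size, i.e. a compression ratio of about 64:1:

```python
# VGG16 sizes as reported: 553 MB uncompressed, 8.7 MB after DeepCABAC.
original_mb = 553
compressed_mb = 8.7

ratio = compressed_mb / original_mb
print(f"{ratio:.1%} of the original size")   # prints "1.6% of the original size"
print(f"about {original_mb / compressed_mb:.0f}:1 compression")
```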
Currently, MPEG is also working on the compression and representation of neural networks and is developing an international standard. Fraunhofer HHI is involved in this effort and has already achieved excellent results with DeepCABAC.