December 3, 2021

Fraunhofer HHI's "Artificial Intelligence" department receives honors for highly cited research papers

The media group Thomson Reuters has recognized two scientific papers from the "Artificial Intelligence" department of the Fraunhofer Heinrich Hertz Institute (HHI). They received the labels "highly cited" and "hot paper" in the fields of "Engineering" and "Computer Science". The honored publications deal with the use of artificial intelligence (AI) for anomaly detection and for optimizing communication in federated learning systems.

Thomson Reuters, a partnership between the news agency Reuters and the media company The Thomson Corporation, has now published its list of "Essential Science Indicators" (ESI) "Highly Cited Papers". It ranks the scientific papers cited most frequently within the past year: "highly cited" papers are among the top one percent of the most cited publications in their subject area.

Lukas Ruff, Jacob R. Kauffmann, Robert A. Vandermeulen, Grégoire Montavon, Wojciech Samek, Marius Kloft, Thomas G. Dietterich and Klaus-Robert Müller:
A Unifying Review of Deep and Shallow Anomaly Detection

Recent machine-learning (ML) approaches to anomaly detection (AD) have advanced the state of the art in examining complex datasets, such as large collections of images or text. These results have sparked renewed interest in the AD problem and led to the introduction of a great variety of new methods. With the emergence of numerous approaches based on generative models, one-class classification, and reconstruction, there is a growing need to bring the methods of this field into a systematic and unified perspective.

In the honored paper, the researchers present a systematic overview of the common basic principles underlying different AD methods. For this purpose, they work out the implicit mathematical assumptions behind each method. They draw connections between classic "shallow" and novel deep approaches and show how this relation might cross-fertilize or extend both directions. In addition, the authors provide an empirical assessment of major existing methods, enriched by the use of recent explainability techniques. Finally, they present specific worked-through examples together with practical advice.
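Reconstruction-based detection, one of the method families the review covers, can be illustrated with a minimal NumPy sketch (not taken from the paper itself; the function names here are invented for illustration): a shallow PCA model is fit to normal data, and each point is scored by how poorly the model reconstructs it, so points far from the learned subspace receive high anomaly scores.

```python
import numpy as np

def fit_pca(X, k):
    """Fit a k-dimensional PCA model: data mean plus top-k principal directions."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k]

def anomaly_score(X, mu, components):
    """Reconstruction error: distance of each point from the PCA subspace."""
    Z = (X - mu) @ components.T          # project into the subspace
    X_hat = Z @ components + mu          # reconstruct from the projection
    return np.linalg.norm(X - X_hat, axis=1)

rng = np.random.default_rng(0)
t = rng.normal(size=200)
# inliers lie near the line y = x; the outlier sits far off that line
inliers = np.column_stack([t, t + rng.normal(scale=0.05, size=200)])
outlier = np.array([[2.0, -2.0]])

mu, comps = fit_pca(inliers, k=1)
scores_in = anomaly_score(inliers, mu, comps)
score_out = anomaly_score(outlier, mu, comps)
```

The same scoring idea carries over to the deep setting by replacing the linear PCA model with, for example, an autoencoder.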

The paper was published in Proceedings of the IEEE, a scientific journal with an impact factor of 15.15. Papers appearing there are selected because they survey a sub-area of the journal's fields of interest. The paper was cited so often within two months that it ranks among the top one percent of papers in the research field of "Engineering".

Felix Sattler, Simon Wiedemann, Klaus-Robert Müller and Wojciech Samek:
Robust and Communication-Efficient Federated Learning from Non-IID Data

Federated learning allows multiple parties to jointly train a deep learning model on their combined data without any participant having to reveal its local data to a centralized server. This form of privacy-preserving collaborative learning, however, comes at the cost of a significant communication overhead during training. To address this problem, the authors of the honored paper propose Sparse Ternary Compression (STC), a new compression technique designed specifically for the requirements of the federated learning environment. It combines sparsification, quantization, and lossless encoding to achieve state-of-the-art results.
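The combination of sparsification and ternary quantization can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the helper name sparse_ternary_compress is invented here, and STC's lossless (Golomb) encoding of the sparse positions is omitted.

```python
import numpy as np

def sparse_ternary_compress(delta, p=0.01):
    """Sketch of STC-style compression of a weight update.

    Keep only the top-p fraction of entries by magnitude (sparsification),
    then replace each survivor with its sign times the mean surviving
    magnitude (ternary quantization), so the compressed update contains
    only the values {-mu, 0, +mu}.
    """
    flat = delta.ravel()
    k = max(1, int(p * flat.size))
    idx = np.argpartition(np.abs(flat), -k)[-k:]   # indices of the k largest magnitudes
    mu = np.abs(flat[idx]).mean()                  # single shared magnitude
    out = np.zeros_like(flat)
    out[idx] = mu * np.sign(flat[idx])             # ternary values
    return out.reshape(delta.shape)

# a toy "weight update" of 100 values, compressed to its 10 largest entries
update = np.linspace(-5.0, 4.9, 100)
compressed = sparse_ternary_compress(update, p=0.1)
```

Because each surviving entry needs only a sign bit plus its position, and all survivors share one magnitude, the update can be transmitted with far fewer bits than the dense 32-bit tensor.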

To test this technique, the researchers conducted extensive, systematic experiments analyzing different scenarios and effects. The experiments demonstrate that STC clearly outperforms federated averaging in common federated learning scenarios. The results advocate a paradigm shift in federated optimization towards high-frequency, low-bitwidth communication, especially in bandwidth-constrained learning environments. Another special feature of the paper is the critical error analysis in the section "Lessons Learned", in which the researchers summarize exactly which combination of compression methods works best.

The paper was published in IEEE Transactions on Neural Networks and Learning Systems, a journal with an impact factor of 10.451. In addition to "highly cited," this paper received the label "hot paper". This distinction is granted to papers that were published within the last two years and have subsequently been cited rapidly. The paper by the Fraunhofer HHI researchers has been cited so frequently within two months that it is in the top 0.1 percent compared to other papers in the research field of "Computer Science".

The labels "highly cited" and "hot paper" are considered indicators of scientifically outstanding papers.