Technologies and Solutions

Auditing and Certification of AI Systems

The Fraunhofer Heinrich Hertz Institute (HHI), together with the TÜV Association and the Federal Office for Information Security (BSI), has published the jointly developed whitepaper "Towards Auditable AI Systems". The whitepaper outlines a roadmap for examining artificial intelligence (AI) models throughout their entire lifecycle.

You can download the whitepaper here: [pdf]

Neural Network Compression

The international NNR standard (ISO/IEC 15938-17) enables efficient compression and transmission of neural networks with millions of parameters. NNR is based on our DeepCABAC technology, for which we have developed an easy-to-use software implementation.
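To give an intuition for the kind of gains such a codec targets, the following NumPy sketch (purely illustrative, not the actual NNR/DeepCABAC implementation) uniformly quantizes a toy weight tensor and estimates the empirical entropy of the resulting symbols, i.e. a lower bound on the bits per weight an ideal entropy coder would spend:

```python
import numpy as np

def quantize(w, step):
    """Uniform quantization of a weight tensor: the first stage of a
    DeepCABAC-style pipeline, which the real codec follows with
    context-adaptive binary arithmetic coding."""
    return np.round(w / step).astype(np.int32)

def entropy_bits(q):
    """Empirical entropy of the quantized symbols in bits per weight,
    a lower bound for any lossless entropy coder on this data."""
    _, counts = np.unique(q, return_counts=True)
    p = counts / q.size
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(3)
w = rng.normal(scale=0.1, size=10_000)   # toy weight tensor, not a real model
q = quantize(w, step=0.02)
print(f"{entropy_bits(q):.2f} bits/weight vs. 32 for float32")
```

The entropy figure already shows an order-of-magnitude reduction over raw float32 storage; the standardized codec adds context modeling and arithmetic coding on top of such a quantization stage.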

Layer-wise Relevance Propagation (LRP)

Layer-wise Relevance Propagation (LRP) is a patented technology for explaining the predictions of deep neural networks and other "black box" models. The explanations produced by LRP (so-called heatmaps) allow the user to validate the model's predictions and to identify potential failure modes.
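For illustration, the NumPy sketch below implements one common LRP variant, the LRP-ε rule, for a tiny two-layer ReLU network with random toy weights (not a real model). Relevance assigned to an output neuron is redistributed backwards, layer by layer, in proportion to each lower neuron's contribution:

```python
import numpy as np

def lrp_epsilon(a, w, r_out, eps=1e-6):
    """One backward LRP-epsilon step: redistribute relevance r_out from
    an upper layer onto the lower-layer activations a (LRP-eps rule)."""
    z = a @ w                                   # pre-activations z_k
    z = z + eps * np.where(z >= 0, 1.0, -1.0)   # epsilon stabilizer
    s = r_out / z                               # per-neuron ratio R_k / z_k
    return a * (s @ w.T)                        # R_j = a_j * sum_k w_jk s_k

# tiny 2-layer ReLU network with random toy weights
rng = np.random.default_rng(0)
w1, w2 = rng.normal(size=(4, 3)), rng.normal(size=(3, 2))
x = rng.normal(size=4)
h = np.maximum(0, x @ w1)   # hidden layer (ReLU)
y = h @ w2                  # output layer (linear)

# start from the relevance of one output neuron and propagate back
r2 = np.zeros(2)
r2[0] = y[0]
r1 = lrp_epsilon(h, w2, r2)
r0 = lrp_epsilon(x, w1, r1)
print(r0)  # per-input relevance scores ("heatmap")
```

A useful sanity check is the conservation property: for small ε, the total relevance at the input approximately equals the explained output score `y[0]`.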

Read more

Spectral Relevance Analysis (SpRAy)

XAI methods such as LRP aim to make the predictions of ML models transparent by providing interpretable feedback on individual predictions and by evaluating the importance of input features for specific samples. Building on these individual explanations, SpRAy provides a general understanding of a model's sensitivities, its learned features, and its concept codes.
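As a rough illustration of the spectral step, the NumPy sketch below builds an affinity graph over a set of synthetic heatmaps and inspects the spectrum of its normalized graph Laplacian; near-zero eigenvalues indicate groups of similar prediction strategies. This is a strongly simplified stand-in for the actual SpRAy pipeline, which operates on real LRP heatmaps:

```python
import numpy as np

def laplacian_spectrum(heatmaps, sigma=1.0):
    """Spectral analysis of flattened explanation heatmaps: build a
    Gaussian affinity graph, form the symmetric normalized Laplacian,
    and return its sorted eigenvalues. The number of near-zero
    eigenvalues estimates the number of strategy clusters."""
    X = heatmaps.reshape(len(heatmaps), -1)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise sq. dists
    A = np.exp(-d2 / (2 * sigma**2))                       # affinity matrix
    np.fill_diagonal(A, 0.0)
    deg = A.sum(1)
    L = np.eye(len(A)) - A / np.sqrt(deg[:, None] * deg[None, :])
    return np.sort(np.linalg.eigvalsh(L))

# two synthetic "strategies": heatmaps focused on the left vs. right half
rng = np.random.default_rng(1)
left = np.concatenate([np.ones((10, 8)), np.zeros((10, 8))], axis=1)
right = np.concatenate([np.zeros((10, 8)), np.ones((10, 8))], axis=1)
maps = np.vstack([left, right]) + 0.05 * rng.normal(size=(20, 16))
ev = laplacian_spectrum(maps, sigma=1.0)
print(ev[:4])  # two near-zero eigenvalues -> two strategy clusters
```

The clear gap after the second eigenvalue reveals that the synthetic model uses two distinct "strategies", without inspecting any single heatmap by hand.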

Read more

Class Artifact Compensation (ClArC)

Today's AI models are usually trained on extremely large, but not always high-quality, data sets. Undetected errors in the data or spurious correlations often prevent the predictor from learning a valid and fair strategy for the task at hand. The ClArC technology identifies such flaws in a model from its (LRP) explanations and retrains the AI in a targeted manner to correct them.
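One simple variant of this idea can be sketched as follows: estimate a linear "artifact direction" in feature space from affected and clean samples, then project it out of the features. The NumPy example below mirrors the projective flavour of ClArC in a strongly simplified, fully synthetic setting (the function names and the watermark scenario are illustrative, not part of the original method's API):

```python
import numpy as np

def artifact_direction(feats_artifact, feats_clean):
    """Estimate a linear artifact direction in feature space as the
    normalized difference of means between affected and clean samples."""
    v = feats_artifact.mean(0) - feats_clean.mean(0)
    return v / np.linalg.norm(v)

def project_out(feats, v):
    """Suppress the artifact by projecting features onto the
    orthogonal complement of the artifact direction v."""
    return feats - np.outer(feats @ v, v)

# synthetic features: a spurious watermark adds a fixed offset
rng = np.random.default_rng(2)
clean = rng.normal(size=(50, 6))
offset = np.array([3.0, 0, 0, 0, 0, 0])
tainted = rng.normal(size=(50, 6)) + offset

v = artifact_direction(tainted, clean)
fixed = project_out(tainted, v)
print(np.abs(fixed @ v).max())  # component along artifact direction is ~0
```

After the projection, the corrected features carry no component along the estimated artifact direction, so a downstream classifier can no longer exploit the watermark shortcut.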

Read more