REFRAME: ExploRing and Expanding the FrontieRs of FoundAtion ModEls

Funded by the BMBF (Federal Ministry of Education and Research)

Duration: October 2024 - September 2027

Project Description

In the field of artificial intelligence (AI), Foundation Models (FM) have ushered in a new era of innovation in deep learning. Despite the advances of Vision Foundation Models (VFM), critical questions about their trustworthiness remain unanswered: it is unclear when a model operates beyond the scope of its training data, how accuracy varies across domains, and how fine-tuning and adaptation affect performance.

REFRAME addresses these open challenges. The overarching goal is to enable the sustainable, robust, flexible, and efficient use of VFM for specific tasks. Achieving this requires:

  1. Developing methods to assess model limitations and identify uncertainties in predictions, providing these tools to end users.
  2. Enhancing trustworthiness and explainability through approaches such as bias detection and mitigation.
  3. Creating resilient and efficient adaptation techniques that allow VFM to be tailored to specialized domains and tasks, even with limited data.

REFRAME focuses on key aspects of method development that will improve the reliability of, and trust in, VFM-based models and their predictions. Each of the three objectives contributes to a more flexible, resilient, and efficient use of VFM in real-world applications. Through these contributions, REFRAME plays a crucial role in establishing VFM as a valuable resource for downstream AI systems.

By ensuring efficiency, reliability, and trustworthiness, REFRAME opens new avenues for the application of VFM in industry and society. The project's outcomes will serve as a scientific and technological foundation for the robust, flexible, and efficient use of large Vision Foundation Models, which hold significant potential for social and economic impact and the development of new markets.

The methods developed within REFRAME are enabling technologies that will empower a broad range of users to deploy AI in a more trustworthy manner. Techniques for uncertainty quantification, explainability, and flexible domain adaptation will make large VFM applicable in fields where data is underrepresented, including media technology, multimedia, security, automation, industrial production, medical technology, and mobility. Interpretability, robustness, and explainability are, moreover, essential for applications where reliability is critical.

Further Information

Read more about REFRAME in the HHI News.

REFRAME is funded by the Federal Ministry of Education and Research (BMBF) under the funding focus "Flexible, resiliente und effiziente Machine-Learning-Modelle" (Flexible, resilient, and efficient machine-learning models).