Fraunhofer HHI helps embed international AI standards through global landslide challenge

How can artificial intelligence innovation keep pace with the demand for transparency, sustainability, and trust—especially in disaster risk reduction? A new study published in npj Natural Hazards examines how online AI competitions can help introduce developers to international standards early in the innovation process.

The Comment, “Introducing AI practitioners to international standards through online competitions”, analyses a large-scale global landslide detection challenge coordinated under the Global Initiative on Resilience to Natural Hazards through AI Solutions, which is chaired by Monique Kuglitsch, Innovation Manager at the Fraunhofer Heinrich Hertz Institute (HHI). The Initiative is actively supported by Fraunhofer HHI researchers and staff, including Katharina Weitz and Jennifer Selby, who are also co-authors of the study.

Bridging AI innovation and standardization

Online AI challenges typically reward performance above all else—often encouraging developers to optimise accuracy without considering broader issues such as bias, explainability, or energy efficiency. The study explores whether these challenges can instead serve a dual purpose: fostering technical innovation while familiarising participants with internationally agreed best practices for responsible AI.

The landslide classification challenge ran from April to August 2025 on the Zindi platform, in partnership with the International Telecommunication Union and its AI for Good Initiative, the European Space Agency (ESA), the World Meteorological Organization (WMO), the University of Padua, and the University of Cambridge.

The challenge attracted the largest participation of any AI for Good–Zindi competition to date. Nearly 1,000 participants from more than 90 countries submitted over 8,600 machine-learning models that used multi-source satellite data from Sentinel-1 and Sentinel-2 to detect landslides accurately from space, including under cloud cover, a major obstacle in Earth observation.

Fraunhofer-led focus on responsible AI

Beyond a traditional leaderboard, the challenge introduced an additional evaluation step: top-performing teams were required to document how their solutions aligned with best practices drawn from international technical reports developed under the ITU/WMO/UNEP/Fraunhofer Focus Group on AI for Natural Disaster Management, the predecessor to the Global Initiative. These practices covered: data and model bias, model transparency, approach reusability, sustainability and efficiency, innovation, and practicality and robustness.

The analysis shows that while innovation and robustness were addressed extensively, other dimensions, such as bias mitigation, explainability, and sustainability and efficiency, received less systematic attention. Only a small number of finalists quantified the energy or carbon footprint of their models, highlighting a persistent gap between technical performance and responsible AI considerations. The findings indicate the need for clearer guidance and targeted capacity-building to help AI developers integrate these considerations into their workflows.

“International standards are often developed through a top-down approach to promote the responsible use of AI,” says Monique Kuglitsch. “In this challenge, we flipped that model—engaging directly with AI developers to understand how they perceive and adopt key concepts of responsible AI in practice.”

A scalable model for capacity building

Drawing on these findings, the authors propose a prototype framework for future AI challenges that embeds standards awareness directly into competition design. By doing so, challenges can function as scalable, bottom-up capacity-building mechanisms—particularly valuable for self-taught practitioners and developers in regions with limited access to formal AI education.

The Comment is available open access in npj Natural Hazards and complements ongoing activities within the Global Initiative, supported by Fraunhofer, including international workshops, technical reports, and educational materials aimed at strengthening the responsible use of AI for disaster risk reduction and climate resilience.

Read the paper here

Read more about the Global Initiative