Dr. Katharina Weitz is a project manager at the Applied Machine Learning Group in the AI Department of Fraunhofer HHI. With a background in education, psychology, and computer science, she develops explainable, human-centered AI for sensitive fields such as healthcare and disaster management with a focus on end-user perspectives. Since 2024, she has been Vice President of the German Computer Science Society and is active in science communication through books, talks, podcasts, and videos.
For further details about her activities please check the subsections below.
Biography
Dr. Katharina Weitz is a Project Manager in the Applied Machine Learning Group at the AI Department of Fraunhofer HHI. With a multidisciplinary background spanning education, psychology, and computer science, she leverages her expertise to advance the application of explainable and Human-Centered AI across diverse domains. Her research focuses on applying explainable, human-centered AI in various contexts, emphasizing end-user perspectives; see, for example, her work on human-centered AI in the healthcare domain and on the user-centered design of AI systems.
From 2018 to 2024, she worked as a research associate at the chair of Human-Centered Artificial Intelligence at the University of Augsburg. During this time, she completed her doctoral thesis on Explainable Human-Centered AI and focused on medical, educational, and industrial AI-system use cases from an end-user perspective.
Since 2024, Dr. Weitz has been serving as Vice President of the German Computer Science Society, the largest professional association for computer science in Germany, which fosters innovation, research, and public engagement in the field. Beyond academia, Dr. Weitz is dedicated to public engagement, sharing insights on AI with the general public through books, talks, podcasts, and videos.
Research Topics
User-Centered Design of AI Systems: In participatory design workshops with various target groups (e.g., public sector employees, first responders in disaster management), initial prototypes are developed together with users, with the goal of optimizing the interaction between humans and AI.
Mental Models & Trust in Human–AI Interaction: What mental models do people have of AI? How do these models affect their trust and their usage of AI systems?
Explainability & Transparency of AI Systems: Investigating how different forms of explanations (e.g., white-box vs. black-box, interactive vs. static) are perceived by users and how they influence their trust and understanding.
Domain-Specific Applications:
- Disaster & Crisis Management: Research on human-centered and explainable AI to support early-warning and disaster-management systems.
- Healthcare: Exploration and improvement of AI-based systems in the healthcare sector, application of XAI methods in medical contexts, and integration of the needs and requirements of diverse user groups into the development of healthcare standards.
- Education: Teaching computer science and AI fundamentals to children, adolescents, and educators, as well as designing workshops and learning materials.
- Industry & Business: Integration of AI systems in organizations: What perspectives do employees hold, and how can human-centered AI be implemented in the workplace?
Research Projects
Project Manager for the EU-Horizon project ARTEMis (AleRT and impact-forecast standards for Emergency Management)
Awards
CHI Honorable Mention Award 2024
Honorable Mention Award at the 2024 CHI Conference on Human Factors in Computing Systems for the paper "Explaining It Your Way – Findings from a Co-Creative Design Workshop on Designing XAI Applications with AI End-Users from the Public Sector" (together with Schlagowski, R., André, E., Männiste, M., & George, C.), received in May 2024.
GI Junior Fellow
For her work and knowledge dissemination in the field of Human-Centered AI, she was recognized in 2020 as an outstanding early-career researcher and appointed Junior Fellow of the German Computer Science Society.