AI-based recording of work processes in the operating theatre for automated report creation
Duration August 2022 - July 2025
Funded by Federal Ministry of Education and Research (BMBF)
Motivation
Operating theatre reports document all relevant information during surgical interventions. They serve to ensure therapeutic safety and accountability as well as to provide proof of performance. The preparation of the OR report is time-consuming and ties up valuable working time - time that is then not available for the treatment of patients.
Goals and approach
In the KIARA project, researchers are working on a system that automatically drafts operating theatre reports. The KIARA system is intended to relieve medical staff: it documents operating theatre activities and creates a draft of the report, which then only needs to be checked, completed and approved. The system works via cameras integrated into operating theatre lamps. Their image data is then analysed with the help of artificial intelligence to recognise and record objects, people and all operating theatre activities. The ambitious system is to be developed and tested in a user-centred manner for procedures in the abdominal cavity and in oral and maxillofacial surgery.
Innovations and perspectives
KIARA is designed to continuously learn through human feedback and to simplify clinical processes for the benefit of medical staff by automating the creation of operating theatre reports. The system can also be used for other operating theatre areas in the future.
HHI contributions
The scientific and technical goals of the sub-project of the HHI researchers focus in particular on:
- Contactless AI-based detection of users, processes and work equipment
- Semantic interpretation and situation understanding
- Sterile intra-surgery human-AI interaction
First, the layer of contactless and thus sterile sensing is addressed: questions of sensor modality, image quality and camera perspective are examined in order to capture the surgical situation accurately and in real time.
The extracted information on the types of detected objects and their spatiotemporal relationships (position and trajectory) is represented in graphs and fused using graph neural networks in the context of the operating theatre, in order to derive documentation-relevant situations (social signals, situation awareness) in the next step.
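The idea of representing detections as a graph and aggregating information over it can be illustrated with a minimal sketch. The object classes, positions and the single mean-aggregation step below are illustrative assumptions, not the project's actual model, which would use learned graph neural network layers.

```python
import numpy as np

# Objects detected in the OR become nodes of a graph; edges connect
# spatially close objects. One round of message passing averages
# neighbour features into each node, standing in for a learned GNN layer.

# Node features: [x-position, y-position, object-class id] for three
# hypothetical detections (scalpel, hand, tray).
X = np.array([
    [0.20, 0.50, 0.0],  # scalpel
    [0.25, 0.55, 1.0],  # hand, close to the scalpel
    [0.80, 0.10, 2.0],  # tray, far away
])

def build_adjacency(X, threshold=0.2):
    # Connect objects closer than `threshold`; add self-loops.
    n = len(X)
    A = np.eye(n)
    for i in range(n):
        for j in range(n):
            if i != j and np.linalg.norm(X[i, :2] - X[j, :2]) < threshold:
                A[i, j] = 1.0
    return A

def message_pass(X, A):
    # Mean aggregation over neighbours (row-normalised adjacency).
    return (A / A.sum(axis=1, keepdims=True)) @ X

A = build_adjacency(X)   # scalpel and hand are linked; the tray is isolated
H = message_pass(X, A)   # scalpel/hand features are blended, tray unchanged
```

After one such step, each node's features already encode its local context (here: "scalpel near hand"), which is the kind of relational cue a downstream classifier could use to flag documentation-relevant situations.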
In this application context, only small amounts of training data can be generated, so R&D work is required to optimise the learning procedures (e.g. transfer or one-shot learning), to generate synthesised or simulated data and verify their quality, and to integrate expert knowledge.
Interactive learning methods that use prototypes in latent space and give users time-efficient tools for annotating data and correcting AI models are currently under research at HHI. In this project, they can help meet the challenge of small training data sets and allow the system to learn continuously from human feedback (continual learning).
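Prototype-based learning with feedback updates can be sketched in a few lines. The class names, latent vectors and learning rate below are illustrative assumptions; a real system would obtain the latent vectors from a learned feature extractor.

```python
import numpy as np

# Each situation class is represented by a prototype vector in latent
# space. A new observation is assigned to the nearest prototype, and a
# confirmed label pulls that prototype slightly towards the observation,
# so the system keeps learning from human feedback.

prototypes = {
    "suturing": np.array([1.0, 0.0]),
    "cutting":  np.array([0.0, 1.0]),
}

def classify(z, prototypes):
    # Nearest prototype by Euclidean distance.
    return min(prototypes, key=lambda k: np.linalg.norm(z - prototypes[k]))

def feedback_update(z, label, prototypes, lr=0.1):
    # User-confirmed example moves its class prototype towards z.
    prototypes[label] = prototypes[label] + lr * (z - prototypes[label])

z = np.array([0.9, 0.2])                 # latent vector of a new observation
pred = classify(z, prototypes)           # -> "suturing"
feedback_update(z, pred, prototypes)     # prototype shifts towards z
```

Because a single confirmed example already shifts the decision boundary, this kind of classifier can be adapted with very few labels, which is exactly the small-data regime described above.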
For the process-accompanying interaction between users and the AI assistant, it must be examined how intensive an interaction is necessary and feasible during the operation, and through which modalities (contactless speech, body and hand gestures, or touch-based device input) it can take place efficiently, effectively and satisfactorily. Three phases of process-accompanying human-AI interaction are considered: pre-surgery, intra-surgery and post-surgery interaction. This sub-project contributes in particular to intra-surgery interaction; in this context, research questions on distraction-free and stress-free interaction design will be addressed.
First, the above-mentioned contactless input modalities will be implemented as prototypes for initial evaluations; the most suitable input technologies, or a combination of voice and gestural input, will then be developed further in the course of the project. For example, operating theatre staff can acknowledge queries from the system with a "thumbs-up" hand gesture and/or an "OK" voice command.
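One simple way such a gesture/voice acknowledgement could be fused is a confidence-based rule: accept a single confident modality, or two moderately confident modalities that agree. The function name, signal labels and thresholds below are assumptions for illustration only, not the project's actual fusion logic.

```python
# Fuse two contactless confirmation modalities, each delivered by its
# recogniser together with a confidence score in [0, 1].

def confirm(gesture, gesture_conf, voice, voice_conf,
            single_thr=0.9, joint_thr=0.6):
    # A single highly confident modality suffices on its own.
    if gesture == "thumbs_up" and gesture_conf >= single_thr:
        return True
    if voice == "ok" and voice_conf >= single_thr:
        return True
    # Otherwise require both modalities to agree with moderate confidence.
    gesture_ok = gesture == "thumbs_up" and gesture_conf >= joint_thr
    voice_ok = voice == "ok" and voice_conf >= joint_thr
    return gesture_ok and voice_ok

confirm("thumbs_up", 0.95, None, 0.0)  # confident gesture alone: accepted
confirm("thumbs_up", 0.70, "ok", 0.70) # both modalities agree: accepted
confirm("thumbs_up", 0.70, None, 0.0)  # one uncertain modality: rejected
```

Requiring agreement for low-confidence signals reduces false acknowledgements, which matters in a sterile setting where an accidental gesture should not silently approve a report entry.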
Consortium
- Karl Leibinger Medizintechnik GmbH & Co. KG, Mühlheim an der Donau
- Gebrüder Martin GmbH & Co. KG, Tuttlingen
- Charité - Universitätsmedizin Berlin, Chirurgische Klinik, CCM|CVK & Mund-, Kiefer-, Gesichtschirurgie
- HFC Human-Factors-Consult GmbH, Berlin
- Fraunhofer HHI, Berlin