Sylvain Renault

Sylvain Renault is a scientist and project manager at the Immersive Media and Communication group in the Vision and Imaging Technologies department.

His R&D activities are related to the following keywords: 3D visualizations, real-time rendering, virtual and augmented reality, panorama visualizations, touchless interaction technologies, multimodal user-centred applications, DirectX, OpenGL, Unity3D, autostereoscopic displays, GPU-based algorithms.

For further details about his activities please check the subsections below.


Sylvain Renault received his diploma degree in Computer Science from the Technical University of Berlin, Germany, in 1997. After his studies, he joined the Interactive Media and Human Factors department at Fraunhofer HHI, working in the research fields of 3D visualization with autostereoscopic displays, virtual reality application development, and novel human-machine interaction technologies.

From 1997 until 2002 he developed the Visual Operation System VOS as part of the national R&D projects “BLICK” and “mUltimo3D”. VOS ran on Silicon Graphics (SGI) O2, ONYX and ONYX 2 workstations and was implemented with the VR toolkit “dVS” from DIVISION, an API for industrial and scientific visualization. VOS integrated a visual programming interface, a multimodal graphical interface with head and gaze sensors, and glasses-free stereo output. He also added network capabilities that allowed two users to work together in virtual rooms via two autostereoscopic displays. On this basis he developed numerous interactive applications in the field of virtual reality.

From 2003 (the start of the R&D project “Mixed3D”) until 2015 he developed Workbench3D, a new 3D real-time software framework supporting the rapid design and implementation of multimodal virtual environments. He adopted the DirectX .NET API early on and continuously extended the system with new user modalities and devices, including the force-feedback PHANTOM from SensAble Technologies, 3D space mice, head and eye tracking devices, hand/head-position and gesture input, voice control, and natural speech output via micro-segment TTS. He further developed autostereoscopic drivers using GPU programming for a wide range of multi-view and single-user 3D displays (active stereo, passive stereo and autostereoscopic) and for 180° panoramic projections. Workbench3D became the department's common SDK for many research projects as well as industrial and commercial solutions.

Currently he is working in the Vision and Imaging Technologies department in the research group “Immersive Media & Communication”. His current work focuses on novel rendering concepts for immersive media, VR and AR technologies, human body reconstruction (HBR), and user-centred visualizations and applications for 3D displays and 3D panoramic screens. Today, he uses the Unity platform to design and develop 3D applications in the industrial, teleconference, infotainment, cultural-heritage, and medical research fields. Furthermore, he develops new modules for the Veye framework, a capturing and reconstruction workflow from Fraunhofer HHI for human body reconstruction and static objects.

Over the last 20 years, he has realized various interactive 3D prototypes in numerous research projects and delivered innovative solutions for (i) cultural heritage, e.g. 3D Book Explorer (Bavarian State Library), Apian Globus (Bavarian State Library), Trabi Car Simulator (DDR Museum, Berlin), Autostereoscopic Web-Viewer (Fraunhofer-Allianz), and (ii) industrial customers, e.g. Intelligent Living Kiosk (T-Systems), numerous 3D kiosks (for MTU, Siemens, RITTAL, Shell, Uhren Lange, TESAT, VW, Lufthansa, BBVA), InfoSpace (Adidas), MI-Report (Karl Storz, HowToOrganize), Foto-Kiosk (CEWE), 3D-Cockpit (BMW), 3D-Indoor-Navigation-Display (3D-Berlin), HUD Visualization (Bosch), ZED3D Explorer (Zeutschel).



Awards

Advanced Imaging Society (AIS) Technology Award 2017 to Ingo Feldmann, Dr. Oliver Schreer, Peter Kauff, Christian Weissig, Danny Tatzelt, Thomas Ebner, and Sylvain Renault for "3D Human Body Reconstruction"; Los Angeles, CA, USA, January 16, 2018.

Nomination for the Berlin Brandenburg Innovation Prize 2017, Oliver Schreer, Ingo Feldmann, Thomas Ebner, Sylvain Renault, Peter Kauff, D. Brückner (UFA Lab), E. Feiler (UFA Lab), F. Mrongowius (UFA Lab), F. Govaere (UFA Lab), "Volumetrisches Video - Schlüsseltechnologien für den begehbaren Film (Volumetric video - key technologies for the walkable movie)"; 10 selected nominations out of 134 submissions, Berlin, Germany, December 1, 2017.

Distinguished Paper Award of SID Symposium Digest of Technical Papers 2017 to Thomas Ebner, Ingo Feldmann, Sylvain Renault, and Oliver Schreer for their paper "Dynamic Real World Objects in Augmented and Virtual Reality Applications" published at SID Symposium Digest of Technical Papers, Los Angeles, USA, May 2017.