In December 2019, Wieland Morgenstern, research associate in the Vision and Imaging Technologies department at the Fraunhofer Heinrich Hertz Institute HHI, won the "Best Paper Award". His paper "Progressive Non-rigid Registration of Temporal Mesh Sequences" received the prize at the European Conference on Visual Media Production (CVMP) in London. The first prize is endowed with 750 euros and is sponsored by the ACM Europe Council. Dr.-Ing. Anna Hilsmann, head of the Computer Vision & Graphics group at Fraunhofer HHI, and Prof. Dr.-Ing. Peter Eisert, head of the Vision & Imaging Technologies department at Fraunhofer HHI, received the award as co-authors.
With the increasing use of Virtual and Augmented Reality devices, there is a growing need for immersive 3D content. Applications such as e-learning and entertainment benefit from using representations of real people. A volumetric recording studio offers the opportunity to capture actors and actresses in a natural environment, recording tiny details of their facial expressions and their life-like interactions with props. It records the scene from all directions simultaneously, and the capture pipeline produces a sequence of 3D surface meshes. Because frames are processed individually, the mesh connectivity may differ from frame to frame even though the scene changes only gradually over time. While the mesh structure is invisible when rendered with a texture, the vertices of neighboring frames may sample the object surface differently. Editing textures over multiple consecutive frames is therefore hindered by the texture atlas changing from frame to frame.
Morgenstern's paper introduces an algorithm that unifies the inner mesh structure for temporal sequences while preserving the rendered impression. The same mesh connectivity is kept stable over a group of frames and is transferred to adjacent frames by progressive registration. A consistent topology over several frames allows for better compression of the volumetric video stream, as the connectivity does not have to be encoded for every frame; changes to the geometry can be expressed as changes to the vertex coordinates. Furthermore, using the same texture atlas over several frames allows the texture images to be encoded as a video stream instead of single images. Editing consecutive frames in texture space is facilitated by the texture atlas staying constant within the mesh connectivity group.
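The encoding idea described above can be sketched in a few lines of Python. This is an illustrative sketch only, not the paper's implementation: class and variable names are invented, and the delta encoding stands in for whatever geometry coding a real pipeline would use. It shows how, once a group of frames shares one connectivity, the face indices are stored once and each frame contributes only vertex displacements.

```python
import numpy as np

# Illustrative sketch (names are hypothetical, not from the paper):
# a group of frames sharing one mesh connectivity. The face indices
# are stored once for the whole group; each subsequent frame is
# represented only by vertex displacements from the previous frame.

class MeshGroup:
    def __init__(self, faces, keyframe_vertices):
        self.faces = faces                  # (F, 3) int array, shared by all frames
        self.keyframe = keyframe_vertices   # (V, 3) float array, first frame
        self.deltas = []                    # per-frame vertex displacements

    def add_frame(self, vertices):
        # Geometry changes are expressed as changes to the vertex
        # coordinates: store only the displacement from the previous frame.
        prev = self.vertices_at(len(self.deltas))
        self.deltas.append(vertices - prev)

    def vertices_at(self, i):
        # Reconstruct frame i by accumulating displacements onto the keyframe.
        return self.keyframe + sum(self.deltas[:i], np.zeros_like(self.keyframe))

# Tiny usage example: one triangle drifting over two frames.
faces = np.array([[0, 1, 2]])
v0 = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
group = MeshGroup(faces, v0)
group.add_frame(v0 + 0.10)   # every vertex shifted by 0.10
group.add_frame(v0 + 0.25)   # shifted further; delta is only 0.15
print(np.allclose(group.vertices_at(2), v0 + 0.25))  # reconstruction matches
```

Because `faces` and the texture atlas stay fixed across the group, only the small `deltas` arrays need to be coded per frame, which is what makes the stream cheaper to compress than a sequence of independent meshes.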
You can find the complete paper on Open Access.