Image-based rendering is a technique that has received considerable interest in computer graphics for the realistic rendering of complex scenes. Instead of modeling the shape, material, and reflection of objects, as well as light sources and light exchange, with high accuracy and sophisticated physical models, image-based rendering synthesizes new views of a scene by interpolating among multiple images taken with one or more cameras. The use of real pictures leads to natural-looking scenes and allows the reproduction of fine structures (e.g., hair, fur, leaves) that are difficult to model with polygonal representations. Moreover, the rendering complexity is independent of the scene content, since interpolation is performed on pixels instead of polygons. As a result, sophisticated scenes can be rendered with limited computational complexity.
Although image-based rendering has traditionally been applied to view synthesis of virtual environments, the method can also be applied to dynamic scenes with more degrees of freedom. We use IBR techniques for natural animation of faces. In contrast to existing approaches, we combine geometry warping with image-based rendering in order to describe global head motion and to render a correct outline even in the presence of hair. In order to reduce the memory requirements, only head turning, which causes the most dominant image changes, is interpolated from a set of initially captured views, whereas other global head motions are represented with a geometry model. Similarly, the jaw movement, which affects the silhouette of the person viewed from the side, is also represented by geometry deformations. Local expressions and motion of the mouth and eyes are directly extracted from the video, warped to the correct position using the 3D head model, and smoothly blended into the global head texture. The additional use of geometry in image-based rendering greatly reduces the number of images required, while still enabling head rotation of the person as a postprocessing step in applications like virtual conferencing.
The rendering of new frames is performed by image-based interpolation combined with geometry-based warping. Given a set of facial animation parameters, the frame of the image cube having the closest value of head rotation is selected as the reference frame for warping. Thus, the dominant motion changes are already represented by a real image without any synthetic warping. Deviations of the desired global motion parameters from the values stored during the initialization step are compensated using 3D geometry. This combination of geometry warping with image-based interpolation allows a very flexible trade-off between accuracy and the size of the image cube.
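To illustrate the selection step described above, the following is a minimal sketch (not the paper's implementation): we assume the image cube stores one frame per sampled head-turn angle, and the function name `select_reference_frame` is our own.

```python
import numpy as np

def select_reference_frame(image_cube_angles, desired_yaw):
    """Pick the stored frame whose head-turn angle is closest to the
    desired yaw.  The remaining angular deviation (residual) is what
    the geometry-based warping step has to compensate."""
    angles = np.asarray(image_cube_angles, dtype=float)
    idx = int(np.argmin(np.abs(angles - desired_yaw)))
    residual = desired_yaw - angles[idx]  # compensated by the 3D model
    return idx, residual

# Hypothetical example: frames captured every 5 degrees of head turn
idx, residual = select_reference_frame([-10, -5, 0, 5, 10], 6.5)
# idx == 3 (the 5-degree frame), residual == 1.5 degrees
```

A denser angular sampling shrinks the residual that geometry must absorb, at the cost of a larger image cube, which is exactly the accuracy/size trade-off noted above.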
Head translation and head roll can be addressed by pure 2D motion; only head pitch needs depth-dependent warping. As long as the rotation angles are small, which is true in most practical situations, the quality of the geometry can be rather poor. Local deformations due to jaw movement are likewise represented by head-model deformations. In order to combine both sources, alpha blending is used to transition smoothly between the warped image and the 3D model.
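The alpha blending of the two sources can be sketched as below (an illustrative sketch only; the paper does not specify this interface, and `alpha_blend` is our own name). A per-pixel weight selects the warped image where it is reliable and the model rendering elsewhere:

```python
import numpy as np

def alpha_blend(warped_image, model_render, alpha):
    """Blend the image-warped result with the rendered 3D model.
    alpha is a per-pixel weight in [0, 1]: 1 keeps the warped image,
    0 keeps the model rendering; intermediate values fade smoothly."""
    a = alpha[..., None] if alpha.ndim == 2 else alpha  # broadcast over color
    return a * warped_image + (1.0 - a) * model_render

# Tiny hypothetical example: 1x2 RGB images, left pixel from the warp,
# right pixel halfway between the two sources
warped = np.array([[[1.0, 0.0, 0.0], [1.0, 0.0, 0.0]]])
model  = np.array([[[0.0, 0.0, 1.0], [0.0, 0.0, 1.0]]])
alpha  = np.array([[1.0, 0.5]])
out = alpha_blend(warped, model, alpha)
```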
Realistic rendering of moving eyes and mouth is difficult to achieve. We therefore use the original image data from the camera to animate these facial features realistically. The areas around the eyes and the mouth are cut out of the camera frames, warped to the correct position of the person in the virtual scene using the 3D head model, and smoothly merged into the synthetic representation using alpha blending.
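One common way to realize such a smooth merge is a feathered alpha mask for the cut-out patch (a sketch under our own assumptions; the paper does not describe the mask construction, and `feathered_mask` is a hypothetical helper):

```python
import numpy as np

def feathered_mask(h, w, margin):
    """Alpha mask for a rectangular cut-out: 1.0 in the interior,
    ramping linearly to 0.0 over `margin` pixels at the border, so a
    pasted mouth/eye patch fades smoothly into the head texture."""
    ramp_y = np.minimum(np.arange(h), np.arange(h)[::-1]) / margin
    ramp_x = np.minimum(np.arange(w), np.arange(w)[::-1]) / margin
    return np.clip(np.minimum.outer(ramp_y, ramp_x), 0.0, 1.0)

# For a 5x5 patch with a 2-pixel feather, the center weight is 1.0
# and the border weight is 0.0, with a linear ramp in between.
mask = feathered_mask(5, 5, 2)
```

The mask would then serve as the `alpha` argument of the blending step, applied at the patch's warped position in the global head texture.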
P. Eisert and J. Rurainsky, "Geometry Assisted Image-based Rendering for Facial Analysis and Synthesis," Signal Processing: Image Communication, vol. 21, no. 6, pp. 493-505, July 2006.