Current Projects

RE@CT

https://react-project.eu
Duration: Dec 2011 - Nov 2014

RE@CT will introduce a new production methodology to create film-quality interactive characters from 3D video capture of actor performance. The project aims to revolutionise the production of realistic characters and significantly reduce costs by developing an automated process to extract and represent animated characters from actor performance capture in a multiple camera studio. The key innovation is the development of methods for analysis and representation of 3D video to allow reuse for real-time interactive animation. This will enable efficient authoring of interactive characters with video quality appearance and motion.

SCENE

3d-scene.eu
Duration: Oct 2011 - Sept 2014

SCENE will develop novel scene representations for digital media that go beyond the abilities of either sample-based (video) or model-based (CGI) methods, in order to create and deliver richer media experiences. The SCENE representation and its associated tools will make it possible to capture 3D video, combine it seamlessly with CGI, and manipulate and deliver the result to either 2D or 3D platforms, in either linear or interactive form.

VENTURI

https://venturi.fbk.eu
Duration: Oct 2011 - Sept 2014

To date, convincing AR has only been demonstrated on small mock-ups in controlled spaces; the key conditions for making AR a booming technology, seamless persistence and pervasiveness, have not yet been met. VENTURI addresses these issues by creating a user-appropriate, contextually aware AR system through the seamless integration of core technologies and applications on a state-of-the-art mobile platform. VENTURI will exploit, optimize and extend current and next-generation mobile platforms, verifying platform and QoE performance through life-enriching use cases and applications to ensure device-to-user continuity.

EMC²

www.emc-square.org
Duration: Oct 2011 - Feb 2014

The 3DLife consortium partners have committed to building upon the project's collaborative activities and to establishing a sustainable European Competence Centre, named Excellence in Media Computing and Communication (EMC²). Within the scope of EMC², 3DLife will promote additional collaborative activities such as an Open Call for Fellowships, a yearly Grand Challenge, and a series of Distinguished Lectures.

Reverie

www.reveriefp7.eu
Duration: Sept 2011 - Feb 2015

Reverie is a Large Scale Integrating Project funded by the European Union. The main objective of Reverie is to develop an advanced framework for immersive media capturing, representation, encoding and semi-automated collaborative content production, as well as transmission and adaptation to heterogeneous displays as a key instrument to push social networking towards the next logical step in its evolution: to immersive collaborative environments that support realistic inter-personal communication.

FreeFace

Duration: Jun 2011 – Dec 2013

FreeFace will develop a system for assisting the visual authentication of persons by means of novel security documents that can store 3D representations of the human head. A person passing a security gate will be recorded by multiple cameras, and a 3D representation of the person's head will be created. Based on this representation, different types of queries, such as pose and lighting adaptation of either the generated or the stored 3D data, will ease both manual and automatic authentication.

3DLife

www.3dlife-noe.eu/3DLife
Duration: Jan 2010 - June 2013

3DLife is a research project funded by the European Union, a Network of Excellence (NoE), which aims to integrate the research currently conducted by leading European research groups in the field of Media Internet. 3DLife's ultimate target is to lay the foundations of a European Competence Centre under the name "Excellence in Media Computing & Communication", or simply EMC². Collaboration is at the core of the 3DLife Network of Excellence.

Virtual Mirror

Website: Virtual Mirror
In cooperation with adidas, a Virtual Mirror has been created that allows users to view themselves in a mirror wearing individually designed shoes. For this purpose, Fraunhofer HHI has developed a system that tracks the 3D motion of the left and right shoe in real time using a single camera. The real shoes are replaced by 3D computer graphics models, giving the user the impression of actually wearing the virtual shoes.
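The final step of such a system is compositing the rendered shoe models over the live camera image. The sketch below illustrates only that compositing stage, using plain alpha blending on small pixel grids; all function names and data layouts are illustrative assumptions, not HHI's actual implementation.

```python
# Sketch of the per-frame augmentation step in a virtual-mirror style system:
# after the tracker has estimated the shoe pose and the virtual model has been
# rendered with that pose, the rendering is blended over the camera frame.
# Names and data layouts are hypothetical; a real system works on GPU images.

def alpha_composite(frame_px, render_px, alpha):
    """Blend one rendered RGB pixel over one camera pixel (alpha in [0, 1])."""
    return tuple(round(alpha * r + (1.0 - alpha) * f)
                 for f, r in zip(frame_px, render_px))

def augment_frame(frame, rendering, alpha_mask):
    """Composite the rendered virtual shoes over the camera frame.

    frame, rendering: 2-D grids of RGB tuples; alpha_mask: matching grid of
    opacities (0 outside the rendered model, up to 1 where it fully covers).
    """
    return [[alpha_composite(f, r, a)
             for f, r, a in zip(frow, rrow, arow)]
            for frow, rrow, arow in zip(frame, rendering, alpha_mask)]

# Tiny 1x2 example: left pixel is outside the model (kept), right pixel is
# fully covered by the virtual shoe (replaced).
frame     = [[(100, 100, 100), (100, 100, 100)]]
rendering = [[(0, 0, 0),       (200, 0, 0)]]
mask      = [[0.0,             1.0]]
out = augment_frame(frame, rendering, mask)
# → [[(100, 100, 100), (200, 0, 0)]]
```

Because the mask comes from rendering the tracked 3D model, the overlay follows the shoes' pose frame by frame.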

Past Projects

Fraunhofer Secure Identity Innovation Cluster

www.sichere-identitaet.de
Duration: Jan 2009 - Dec 2011

The Fraunhofer Secure Identity Innovation Cluster is an alliance of five Fraunhofer Institutes, five universities and 12 private sector companies, supported by the federal states of Berlin and Brandenburg. The aim of this joint research & development project is to deliver technologies, processes and products that enable clear and unambiguous identification of persons, objects and intellectual property both in the real and the virtual world, thus enabling owners and users of identity to have individual control over clearly defined, recognizable identities. HHI is working on the passive 3D capture of faces for security documents of the future.

Camera deshaking for endoscopic video

Duration: June 2011 - February 2011

Endoscopic videokymography is a method for visualizing the motion of the plica vocalis (vocal folds) for medical diagnosis with time slice images from endoscopic video. The diagnostic interpretability of a kymogram deteriorates if camera motion interferes with vocal fold motion, which is hard to avoid in practice. For XION GmbH, a manufacturer of endoscopic systems, we developed an algorithm for compensating strong camera-to-scene motion in endoscopic video. Our approach is robust to low image quality, optimized to work with highly nonrigid scenes, and significantly improves the quality of vocal fold kymograms.
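The core idea of such motion compensation can be sketched in a few lines: estimate the global frame-to-frame shift and undo it before sampling the time-slice. The toy below works on 1-D signals with an exhaustive search over integer shifts; the actual algorithm additionally has to cope with low image quality and strongly nonrigid vocal-fold motion, which this sketch does not attempt.

```python
# Much-simplified sketch of global motion estimation for kymogram
# stabilisation: find the integer shift that best aligns the current
# (1-D) signal to the reference, by minimising the mean squared
# difference over the overlapping samples.

def estimate_shift(ref, cur, max_shift=3):
    """Return the shift s minimising mean((cur[i+s] - ref[i])^2) over overlap."""
    best_shift, best_cost = 0, float("inf")
    for s in range(-max_shift, max_shift + 1):
        cost = n = 0
        for i, r in enumerate(ref):
            j = i + s
            if 0 <= j < len(cur):
                cost += (cur[j] - r) ** 2
                n += 1
        cost /= n  # normalise by overlap size so large shifts aren't favoured
        if cost < best_cost:
            best_shift, best_cost = s, cost
    return best_shift

ref = [0, 0, 9, 9, 0, 0, 0]
cur = [0, 0, 0, 9, 9, 0, 0]   # same pattern moved one sample to the right
shift = estimate_shift(ref, cur)  # → 1
```

In a stabilised kymogram, each video line would be resampled by the negated estimated shift before the time-slice image is assembled.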

Cloud Rendering

Duration: December 2009 - August 2010

In the project CloudRendering, we investigated methods to efficiently encode synthetically produced image sequences in cloud computing environments, enabling interactive 3D graphics applications on computationally weak end devices. One goal was to investigate ways to speed up the encoding process by exploiting different levels of parallelism: SIMD, multi-core CPUs/GPUs, and multiple connected computers. Additional speedup was achieved by exploiting knowledge of the synthetic nature of the images, paired with access to the 3D image generation machinery. The study was performed for Alcatel-Lucent.
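One hypothetical illustration of how renderer knowledge can speed up encoding (not necessarily the project's actual method): if the 3D engine reports the camera pan between two frames, the encoder can derive every macroblock's motion vector directly instead of running an expensive block-matching search.

```python
# Illustrative sketch: when the renderer tells us the global camera pan in
# pixels, every macroblock of the encoded frame can simply reuse that pan
# as its motion vector, skipping motion search entirely.
# (Real scenes with parallax need per-block vectors from depth/camera data.)

def motion_vectors_from_camera_pan(blocks_x, blocks_y, pan_px):
    """Assign the known global pan (dx, dy) in pixels to every macroblock."""
    dx, dy = pan_px
    return {(bx, by): (dx, dy)
            for by in range(blocks_y) for bx in range(blocks_x)}

# A 2x2-macroblock frame whose camera panned 4 px to the right:
mv = motion_vectors_from_camera_pan(2, 2, (4, 0))
# every block carries the same vector, and no search was performed
```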

Games@Large

Games@Large is a European Integrated Project funded under the 6th Framework IST Programme. The project aims to design a platform for running interactive, rich-content multimedia applications, such as games, over local networks. Fraunhofer HHI contributes to this project with low-delay video and 3D graphics streaming.

VisionIC

The BMBF-funded project VisionIC aims at the development of an intelligent vision platform, including start-up applications for the mass market. Within this project, Fraunhofer HHI developed an Advanced Videophone System which enables multiple partners to meet and discuss in a virtual room. Image-based rendering techniques combined with 3D head model animation allow head pose correction and enhance communication compared to traditional video conferencing systems.

VISNETII

VISNET II builds on the success and achievements of the VISNET NoE, continuing progress towards the NoE mission of creating a sustainable world force in Networked Audiovisual (AV) Media Technologies. VISNET II is a Network of Excellence with a clear vision for its integration, research and dissemination plans. The research activities within VISNET II cover three major thematic areas related to networked 2D/3D AV systems and home platforms: video coding, audiovisual media processing, and security. VISNET II brings together 12 leading European organisations in the field of Networked Audiovisual Media Technologies. These 12 integrated organisations represent 7 European states spanning a major part of Europe, thereby promising efficient dissemination of the resulting technological developments and their exploitation to larger communities.

VISNET

VISNET is a European Network of Excellence funded under the 6th framework programme. Its strategic objectives are revolving around its integration, research and dissemination activities. VISNET aims to create a sustainable world force of leading research groups in the field of networked audiovisual (AV) media technologies applied to home platforms. The member institutions have grouped together to set up a network of excellence with a clear vision for integration, research and dissemination plans. The research activities within VISNET will cover several disciplines related to networked AV systems and home platforms.

Text2Video

Text2Video Conversion
In the Text2Video project, we have developed a system for the automatic conversion of SMS messages into video animations. From the written text, speech is synthesized and a 3D head model is animated synchronously. The recipient receives an MMS message with a short video in which the chosen character reads the text. Either photorealistic images or cartoons can be selected for the animation. Camera changes, additional head and eye motion, as well as pitch shifting enhance the variability of the output. The system is used, for example, by digitalVanity.
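The crux of such a pipeline is synchronising the synthesized speech with the head model's mouth shapes: each phoneme emitted by the TTS engine is mapped to a viseme (a visual mouth pose) and scheduled at the phoneme's start time. The phoneme set, mapping, and timings below are invented for illustration; a real system obtains them from the TTS engine.

```python
# Sketch of phoneme-to-viseme keyframe generation for lip-sync.
# The mapping is a coarse, made-up many-to-one table, not the one
# used in Text2Video.

PHONEME_TO_VISEME = {
    "p": "closed", "b": "closed", "m": "closed",
    "a": "open",   "o": "round",  "u": "round",
    "s": "narrow", "t": "narrow",
}

def viseme_track(phonemes):
    """Turn (phoneme, start_ms) pairs into (viseme, start_ms) keyframes,
    dropping repeats so the face is only re-posed when the mouth shape changes."""
    track = []
    for ph, t in phonemes:
        v = PHONEME_TO_VISEME.get(ph, "neutral")
        if not track or track[-1][0] != v:
            track.append((v, t))
    return track

# "mama" spoken over ~0.6 s (timings invented):
keyframes = viseme_track([("m", 0), ("a", 120), ("m", 300), ("a", 420)])
# → [("closed", 0), ("open", 120), ("closed", 300), ("open", 420)]
```

The animation engine then interpolates the head model between these keyframes while the synthesized audio plays, keeping mouth motion and speech in sync.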

Bundesdruckerei GmbH

In cooperation with the Bundesdruckerei GmbH, we have constructed a multi-view camera array for the synchronous capturing of people from different viewing directions and under varying illumination. Methods for calibration, image enhancement, interpolation, and background substitution have been developed in order to create large databases of faces with known, calibrated properties.

Deutsche Flugsicherung GmbH

For the Deutsche Flugsicherung GmbH (German Air Traffic Control), we have created an MPEG-4 panorama from the tower of the Berlin Schönefeld airport. The interactive panorama was demonstrated at the Internationale Luftfahrt Ausstellung (ILA) in the context of the presentation of the future Berlin-Brandenburg International airport. For the creation of the virtual environment, image warping techniques for the removal of objects and people, as well as high dynamic range imaging techniques for local contrast adaptation, were developed.

Bitfilm

In cooperation with Bitfilm, short video clips of robots were created for distribution to mobile phones via MMS. From 2D images of the robots, 3D models are created and animated using MPEG-4 facial animation parameters derived from text input. Automatic pan and zoom as well as speech alteration are applied in order to enhance the variability of the video clips.