Immersive video communication

For more than 15 years, we have been investigating concepts and developing prototype demonstrators for future immersive video communication that conveys the sense of being there in a natural and realistic way. The research includes multi-view video processing, rendering of novel views for 2D and auto-stereoscopic displays, preservation of eye contact and gestural behaviour, as well as interaction concepts for immersive video communication.

From 2000 to 2003, we were involved in IST VIRTUE, the first European research project on 3D videoconferencing. In 2004, the im.point followed as a proof of concept for a real-time 3D videoconferencing system that allowed one person at each of three different sites to communicate with the others as they would at a real round table. From 2007 to 2009, we participated in the European FP7 research project “3DPresence”. The project developed a prototype demonstrator for a multi-party, high-end 3D videoconferencing concept, tackling the problem of conveying a feeling of physical presence in real time to multiple remote locations in a transparent and natural way. Currently, we are investigating novel concepts for immersive video communication, focusing on large wall displays and on 3D video communication using autostereoscopic displays.

Our expertise in this area includes:

  • setup and calibration of multi-view camera systems,
  • synchronous multi-view capture,
  • real-time high-definition depth estimation for stereo, trifocal, and multi-view camera setups,
  • image rectification and warping,
  • high-quality and high-resolution novel view synthesis for preservation of eye contact and pointing gestures,
  • porting of algorithms to GPU to achieve real-time and low delay video communication, and
  • mixing of virtual environments and live video content.
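A central building block in the list above is real-time depth estimation: for a rectified stereo pair, metric depth follows directly from per-pixel disparity via Z = f·B/d. The sketch below illustrates only this geometric relation, with hypothetical focal length and baseline values; an actual system would first estimate the disparity map itself (e.g. by patch sweeping, as in publication 1 below).

```python
import numpy as np

# Hypothetical calibration values for a rectified stereo pair
focal_length_px = 1000.0   # focal length in pixels (illustrative)
baseline_m = 0.10          # camera baseline in metres (illustrative)

def disparity_to_depth(disparity_px):
    """Convert per-pixel disparity (in pixels) to metric depth (in metres).

    Uses Z = f * B / d for a rectified stereo pair; zero disparity
    (points at infinity) is mapped to inf.
    """
    d = np.asarray(disparity_px, dtype=np.float64)
    with np.errstate(divide="ignore"):
        return np.where(d > 0, focal_length_px * baseline_m / d, np.inf)

# With these values, a disparity of 50 px corresponds to a depth of 2 m
depth = disparity_to_depth(np.array([100.0, 50.0, 0.0]))
```

The inverse relation (large disparities for near objects, vanishing disparity at infinity) is why multi-baseline setups help: short baselines resolve near depth ranges, long baselines far ones, and their disparity estimates can be fused (cf. publication 6).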

Selected publications

  1. W. Waizenegger, I. Feldmann, O. Schreer: Real-time Patch Sweeping for High-Quality Depth Estimation in 3D Videoconferencing Applications, SPIE 2011 Conference on Real-Time Image and Video Processing, San Francisco, CA, USA, January 23-27, 2011, Invited Paper.
  2. W. Waizenegger, I. Feldmann: Calibration of a Synchronized Multi-Camera Setup for 3D Videoconferencing, Proceedings of 3DTV Conference 2010, Tampere, Finland, June 7-9, 2010.
  3. I. Feldmann, W. Waizenegger, N. Atzpadin, O. Schreer: Real-Time Depth Estimation for Immersive 3D Videoconferencing, Proceedings of 3DTV Conference 2010, Tampere, Finland, June 7-9, 2010.
  4. O. Divorra Escoda, J. Civit, F. Zuo, H. Belt, I. Feldmann, O. Schreer, E. Yellin, W. Ijsselsteijn, R. van Eijk, D. Espinola, P. Hagendorf, W. Waizenegger, R. Braspenning: Towards 3D-Aware Telepresence: Working on Technologies Behind the Scene, Proceedings of ACM Conference on Computer Supported Cooperative Work (CSCW), New Frontiers in Telepresence, Savannah, Georgia, USA, February 6-10, 2010.
  5. I. Feldmann, O. Schreer, P. Kauff, R. Schäfer, Z. Fei, H.J.W. Belt, Ò. Divorra Escoda: Immersive Multi-User 3D Video Communication, Proceedings of International Broadcast Conference (IBC 2009), Amsterdam, NL, September 2009.
  6. O. Schreer, N. Atzpadin, I. Feldmann: Multi-baseline Disparity Fusion for Immersive Videoconferencing, 2nd International Conference on Immersive Telecommunications (IMMERSCOM 2009), University of California, Berkeley, CA, USA, May 27-29, 2009.
  7. O. Schreer, I. Feldmann, N. Atzpadin, P. Eisert, P. Kauff, H. Belt: 3DPresence – A System Concept for Multi-User and Multi-Party Immersive 3D Videoconferencing, 5th European Conference on Visual Media Production (CVMP 2008), London, UK, November 2008.