Immersive Streaming

Low-latency streaming techniques for transmission of animatable volumetric videos over next-generation mobile networks

Duration: March 2019 – present

Volumetric video contains considerably more data than traditional 2D video. Processing this data introduces several challenges: high bit rates, inefficient processing on end devices due to the lack of hardware decoders for volumetric data, and high processing requirements for rendering multiple volumetric objects.
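To illustrate the scale of the bit-rate problem, the following back-of-the-envelope calculation compares the raw data rate of an uncompressed dynamic point cloud with a typical compressed 2D video stream. All numbers (point count, bytes per point, frame rate, reference bitrate) are illustrative assumptions, not measurements from this project.

```python
def raw_rate_mbps(bytes_per_frame: float, fps: int = 30) -> float:
    """Raw data rate in megabits per second."""
    return bytes_per_frame * fps * 8 / 1e6

# Assumed uncompressed point cloud: ~1M points per frame, 15 bytes per
# point (3 x float32 position + 3 x uint8 color), 30 fps.
volumetric = raw_rate_mbps(1_000_000 * 15)
print(f"raw point cloud: {volumetric:.0f} Mbit/s")  # 3600 Mbit/s

# A typical hardware-encoded 1080p H.264 stream, by comparison, needs on
# the order of 5 Mbit/s -- roughly three orders of magnitude less.
TYPICAL_2D_STREAM_MBPS = 5.0
print(f"ratio: ~{volumetric / TYPICAL_2D_STREAM_MBPS:.0f}x")  # ~720x
```

The gap is what motivates rendering the volumetric content remotely and shipping only a conventional 2D video stream to the device.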

The hardware found in current mobile devices is inadequate for efficiently decoding and rendering volumetric video content. A workaround is to offload the processing to the cloud: there, a view of the 3D object is rendered into a 2D video, which is then compressed and transmitted to the user's device. The rendered view is dynamically updated according to user interaction. Benefits include leveraging the existing 2D video processing pipeline on mobile phones, enabling the display of complex scenes on legacy devices, and achieving practicable bitrates for the transmission of complex volumetric scenes. However, cloud-based rendering introduces network latency, which increases the overall interaction latency.
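The interaction (motion-to-photon) latency of such a remote rendering pipeline is the sum of the per-stage delays along the round trip. The sketch below tallies an example budget; every component value is an assumed placeholder chosen for illustration, not a figure reported by the project.

```python
# Illustrative motion-to-photon latency budget for cloud-based rendering.
# Each stage delay (in milliseconds) is an assumed example value.
def motion_to_photon_ms(stages: dict) -> float:
    """Total interaction latency as the sum of all pipeline stages."""
    return sum(stages.values())

budget = {
    "pose upload (uplink)":   5.0,   # user pose sent to the cloud
    "cloud rendering":        8.0,   # render the 2D view of the 3D object
    "video encoding":         5.0,   # hardware encoder
    "network downlink":      10.0,   # compressed 2D video to the device
    "video decoding":         5.0,   # hardware decoder on the phone
    "display":                8.0,   # compositor / screen refresh
}

print(f"{motion_to_photon_ms(budget):.1f} ms")  # 41.0 ms
```

Shrinking any single stage (e.g. replacing TCP-based streaming with WebRTC, or moving the renderer to a 5G edge node) directly reduces this sum, which is why the techniques in the next paragraph target several stages at once.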

To reduce the effective interaction latency, we employ efficient hardware encoders and real-time streaming protocols such as WebRTC, transmit over low-latency 5G networks, and develop algorithms that accurately predict the user's 6DoF head motion.

Project partners:

  • Fraunhofer HHI
  • Deutsche Telekom AG
  • Volucap GmbH
  • MobiledgeX

Publications:

S. Gül, C. Hellge, P. Eisert, "Latency Compensation Through Image Warping for Remote Rendering-based Volumetric Video Streaming", IEEE International Conference on Image Processing (ICIP), October 2022.

J. Son, S. Gül, G.S. Bhullar, G. Hege, W. Morgenstern, A. Hilsmann, T. Ebner, S. Bliedung, P. Eisert, T. Schierl, T. Buchholz, C. Hellge, "Split Rendering for Mixed Reality: Interactive Volumetric Video in Action" In Proceedings of SIGGRAPH Asia 2020 XR (SA ’20 XR), December 2020.

S. Gül, S. Bosse, D. Podborski, T. Schierl, C. Hellge, "Kalman Filter-based Head Motion Prediction for Cloud-based Mixed Reality", In Proceedings of the 28th ACM International Conference on Multimedia (ACMMM), October 2020.

S. Gül, D. Podborski, A. Hilsmann, W. Morgenstern, P. Eisert, O. Schreer, T. Buchholz, T. Schierl, C. Hellge, "Interactive Volumetric Video from the Cloud", International Broadcasting Convention (IBC), September 2020.

S. Gül, D. Podborski, T. Buchholz, T. Schierl, C. Hellge, "Low-latency Cloud-based Volumetric Video Streaming Using Head Motion Prediction", In Proceedings of the 30th ACM Workshop on Network and Operating Systems Support for Digital Audio and Video (NOSSDAV ’20). Association for Computing Machinery, Istanbul, Turkey, June 2020.

S. Gül, D. Podborski, J. Son, G.S. Bhullar, T. Buchholz, T. Schierl, C. Hellge, "Cloud Rendering-based Volumetric Video Streaming System for Mixed Reality Services", Proceedings of the 11th ACM Multimedia Systems Conference (MMSys), June 2020.

A. Hilsmann, P. Fechteler, W. Morgenstern, S. Gül, D. Podborski, C. Hellge, T. Schierl, P. Eisert, "Interactive Volumetric Video Rendering and Streaming", In: Culture and Computer Science – Extended Reality, Proceedings of KUI 2020, ISBN: 978-3-86488-169-5.

D. Podborski, S. Gül, J. Son, G.S. Bhullar, R. Skupin, Y. Sanchez, T. Schierl, C. Hellge, "Interactive Low Latency Video Streaming Of Volumetric Content", ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Barcelona, Spain, May 2020.