VoluProf

The goal of the project VoluProf (Volumetric Professor for Omnipresent and User-optimized Teaching in Mixed Reality) is to develop a Mixed Reality application that is suitable for everyday use in knowledge transfer. The idea is to create a novel and more efficient form of direct, interactive and user-oriented teaching for remote learning, using photo- and audio-realistic avatars of real lecturers.

The project is funded by the Federal Ministry of Education and Research (BMBF).

Overview

Traditional online learning, which delivers content via static video and audio recordings, is limited in its effect because learners can hardly interact with teachers or give feedback. Mixed Reality (MR) combined with animatable volumetric video has the potential to provide an immersive and interactive learning medium that enables a completely new level of learning experience.

VoluProf aims to develop a user-oriented, interactive MR application that enables discourse between teachers, represented as avatars, and users by means of multimodal chatbots. To achieve this, high-quality volumetric videos of the teachers are captured and animated as the basis for interaction. The avatar's voice is synthesized realistically from text provided by the teacher. The avatar is then streamed to the learners' MR headsets over next-generation mobile networks using low-latency streaming techniques. The development is continuously accompanied by user studies that investigate the requirements and effects of the new teaching medium. Ethical, legal and social implications are monitored and continuously integrated into the technical development.
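
To make the interaction chain described above more concrete, the following minimal sketch (Python, with purely hypothetical placeholder functions that are not actual VoluProf components) illustrates one pass through the loop: a learner's question is answered by the chatbot, spoken in the lecturer's synthesized voice, used to animate the volumetric avatar, and streamed back to the headset.

    # Conceptual sketch only; every function here is a hypothetical placeholder.

    def chatbot_respond(question: str) -> str:
        # Stands in for the multimodal chatbot that formulates the answer.
        return f"Answer to: {question}"

    def synthesize_voice(text: str) -> bytes:
        # Stands in for text-to-speech in the lecturer's original voice.
        return text.encode("utf-8")

    def animate_avatar(text: str, speech: bytes) -> dict:
        # Stands in for driving the captured volumetric avatar with the answer.
        return {"geometry": b"<animated mesh + texture>", "audio": speech, "transcript": text}

    def session_loop(questions, send_to_headset):
        # Questions arrive from the MR headset; animated avatar frames are
        # streamed back with low latency, e.g. over a 5G network.
        for question in questions:
            answer = chatbot_respond(question)
            speech = synthesize_voice(answer)
            send_to_headset(animate_avatar(answer, speech))

    session_loop(["What is volumetric video?"], send_to_headset=print)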

The project runs from September 2021 to August 2024.

Project Details

The project partners want to create a space for future learning in which traditional lecturing and online learning environments are combined so that lecturers can be experienced not only on video but also virtually and physically. In other words, the user participates in a lesson with a virtual professor by means of Augmented Reality glasses, for example in their living room, while also being able to interact with the professor via eye contact and verbal communication.

The aim is to develop the following key features:

  • Generating a photorealistic and animatable volumetric representation of the lecturer.
  • Location-independent use through transmission over 5G networks and direct integration into mixed reality devices on the market, as well as subjective optimization of image quality at minimal bit rate (see the sketch after this list).
  • Natural and fast synthesis of the lecturer’s original voice.
  • Answering ethical, legal and social questions about the implications of the new way of knowledge transfer.
  • Media reception and effects research for the analysis and optimization of user acceptance.
  • Educational research on the optimal and user-adaptive use of the new way of knowledge transfer.
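
As a rough illustration of the bit-rate point in the list above, the sketch below picks the cheapest of several encoded representations whose predicted subjective quality still meets a target, given the currently measured throughput. The representations, MOS values and thresholds are invented for illustration and are not project results.

    # Illustrative only: representations and quality scores are made-up numbers.
    REPRESENTATIONS = [
        {"bitrate_mbps": 10, "predicted_mos": 3.4},
        {"bitrate_mbps": 25, "predicted_mos": 4.1},
        {"bitrate_mbps": 50, "predicted_mos": 4.6},
    ]

    def select_representation(available_mbps, target_mos=4.0):
        # Prefer the lowest bit rate that both fits the link and meets the quality target.
        feasible = [r for r in REPRESENTATIONS
                    if r["bitrate_mbps"] <= available_mbps and r["predicted_mos"] >= target_mos]
        if feasible:
            return min(feasible, key=lambda r: r["bitrate_mbps"])
        # Otherwise take the best quality that still fits the measured throughput.
        fitting = [r for r in REPRESENTATIONS if r["bitrate_mbps"] <= available_mbps]
        return max(fitting, key=lambda r: r["predicted_mos"]) if fitting else REPRESENTATIONS[0]

    print(select_representation(available_mbps=30.0))  # picks the 25 Mbit/s representation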

Publications

Serhan Gül, Cornelius Hellge, Peter Eisert: Latency Compensation Through Image Warping for Remote Rendering-based Volumetric Video Streaming. Proceedings of the IEEE International Conference on Image Processing (ICIP), October 2022.

H.J. van Gils-Schmidt, J.-C. Põder, A. Räder, J. Wegner: A Mixed-Method Approach to Investigate Value Change by Technological Innovations (in the context of an AI-based Mixed Reality Education Application). Workshop "Machines of Change: Robots, AI and Value Change", TU Delft, the Netherlands (online), 1–3 February 2022.

Jangwoo Son, Yago Sanchez, Christian Hampe, Dominik Schnieders, Thomas Schierl, Cornelius Hellge: L4S Congestion Control Algorithm for Interactive Low Latency Applications over 5G. Submitted to ICME 2022.

Esther Greussing, Franziska Gaiser, Stefanie Helene Klein, Carolin Straßmann, Carolin Ischen, Sabrina Eimler, Katharina Frehmann, Miriam Gieselmann, Charlotte Knorr, Angelica Lermann Henestrosa, Andy Räder, Sonja Utz: Researching interactions between humans and machines: methodological challenges. Publizistik. Vierteljahreshefte für Kommunikationsforschung, 2022. DOI: 10.1007/s11616-022-00759-3.

Peter Eisert, Oliver Schreer, Ingo Feldmann, Cornelius Hellge, Anna Hilsmann: Volumetric Video – Acquisition, Interaction, Streaming and Rendering, Immersive Video Technologies, pp. 289-326, Academic Press, UK, ISBN: 978-0-323-91755-1, September 2022.

MWC 2023: Mixed Reality Video Streaming with L4S (planned).

MWC 2022: Mixed Reality Streaming over 5G.

IBC 2022: Interactive Volumetric Video Streaming over 5G.

Cornelius Hellge: Interactive Volumetric Video Streaming over 5G using L4S, IEEE BTS Pulse, July 14, 2022. (https://www.5g-mag.com/post/12-14-07-22-ieee-bts-pulse).

Cornelius Hellge: Holoportation & Avatars, FutureHotels Innovation Breakfast, June 2022. (https://www.linkedin.com/company/futurehotel/).

Featured image: © iStock.com/TheItern / Ljupco / klikk, edited by Fraunhofer HHI

