New media formats to increase the feeling of realism and (co-)presence
The objective relates to the development and integration of new media formats to enable high-quality photo-realistic experiences, maximizing the feeling of realism and (co-)presence.
A key goal for next generation Virtual Reality (VR) experiences is to maximize the feeling of realism and (co-)presence. VRTogether is fully aligned with this objective, developing new media formats able to provide volumetric content with photo-realistic high quality, even in distributed networked scenarios and using affordable capturing systems. In this context, the project is advancing the state-of-the-art with respect to the real-time volumetric capture of users, and their real-time integration in realistic VR environments, both as Time Varying Meshes (TVM) and Point Clouds. Efficient distribution and rendering strategies (e.g. Level of Detail adjustment and tiling) for these new content formats are being devised as well. The project is additionally contributing datasets that enable the community to further experiment with these formats (e.g. https://vcl.iti.gr/dataset/human4d/).
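The combination of tiling and Level of Detail adjustment mentioned above can be sketched as a bandwidth-allocation problem: each spatial tile of a point cloud is available at several quality levels, and the client spends its bitrate budget on the tiles closest to the viewer. The following is a minimal illustrative sketch, not the project's implementation; the `Tile` structure, field names and greedy strategy are assumptions for illustration.

```python
import math
from dataclasses import dataclass

@dataclass
class Tile:
    """One spatial tile of a volumetric (point cloud) capture (hypothetical model)."""
    tile_id: int
    center: tuple   # tile centre in world coordinates (metres)
    bitrates: list  # available bitrates per LoD, lowest to highest (kbit/s)

def select_lod(tiles, viewer_pos, budget_kbps):
    """Greedy LoD selection: tiles nearest the viewer get detail upgrades
    first, until the bandwidth budget is spent. Returns {tile_id: lod_index}."""
    # Start every tile at its lowest LoD so the whole scene is always covered.
    choice = {t.tile_id: 0 for t in tiles}
    spent = sum(t.bitrates[0] for t in tiles)
    # Upgrade the closest tiles first, one LoD step at a time.
    for t in sorted(tiles, key=lambda t: math.dist(t.center, viewer_pos)):
        while choice[t.tile_id] + 1 < len(t.bitrates):
            extra = t.bitrates[choice[t.tile_id] + 1] - t.bitrates[choice[t.tile_id]]
            if spent + extra > budget_kbps:
                break
            choice[t.tile_id] += 1
            spent += extra
    return choice
```

Real tiled-streaming clients refine this with viewport prediction and per-tile utility models, but the core trade-off (spatial proximity versus bitrate) is the one shown here.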
Beyond concrete novel formats, VRTogether is researching efficient strategies for a seamless integration of heterogeneous media formats in VR experiences, with the goal of maximizing the overall Quality of Experience (QoE), while taking cost efficiency into account. Examples are combinations of 3D environments, Computer Generated Imagery (CGI) content, 2D videos and stereoscopic billboards for scene dynamics, and volumetric formats for the users’ representations. The insights from conducted experiments are being adopted in the offered VR experiences, and are under review in top scientific venues.
Enrich existing production pipelines and practices
The objective relates to the adaptation of existing (post-)production pipelines and practices to capture and encode multiple media formats and integrate them with state-of-the-art tools.
The VR community is currently offered a plethora of production pipelines and tools. These range from the use of 3D scanners (e.g. LIDAR sensors), photogrammetry, 3D modeling, motion capture and facial tracking techniques, to the use of high-quality (stereoscopic) video cameras. The same is true for advanced post-production processes, like background removal, color correction, illumination techniques, and point cloud fusion and cleaning.
In this context, VRTogether is exploring how to adapt and combine current (post-)production pipelines and practices, with the goal of delivering improved and cost-effective immersive experiences. The project is demonstrating how to adopt existing advanced post-production techniques, while reducing costs, resources and complexity, still delivering high quality work. Examples of such techniques are:
- Motion capture for realistic and accurate animation in real time. The technology being developed demonstrates the value of volumetric capture and its simplicity when capturing animations in isolation: the precision of the movements, and therefore of the captured animation scheme, is preserved while the process itself is simplified.
- Capture and real-time replacement of volumes for pre-visualization (previz) during shooting
- Object scanning: volumetric capture for 3D modeling, simplifying the process
- Complementary tools for photogrammetry and 3D scanning
- Real-time rendering techniques
The project also demonstrates the impact of new storytelling techniques on content production and the user experience. An effective combination of content formats and the appropriate narrative opens the door toward hyper-realism, fidelity and immediacy, while at the same time enabling a rich set of interaction modalities. These new stories have potential not only in the entertainment sector, but also in other relevant sectors like education and culture.
Re-Designing the distribution chain for the new innovative and shared immersive media
The objective relates to the distribution and media orchestration of new immersive media formats and streams. This includes media capability negotiation, synchronization (content, space, and time) and scalability aspects.
Current Social VR and VR communication platforms have a gap when it comes to photo-realistic immersive media formats. Enabling photo-realistic environments that generate co-presence requires innovative software architectures for multi-user synchronized consumption of content, together with additional distribution paths to connect the end users experiencing this content together. To make VRTogether accessible to everyone, the infrastructure must also be deployable and usable in a cost-effective manner.
VRTogether is advancing the state-of-the-art with regard to the provision of shared photo-realistic immersive experiences, combining a rich set of traditional and immersive formats. To this end, the project is re-shaping the distribution chain in order to provide and seamlessly integrate:
- New end-to-end pipelines for a real-time and tiled distribution of immersive content, like Time Varying Meshes (TVMs) and Point Clouds.
- Novel Orchestration services to enable session management and the delivery of synchronized multi-party experiences.
- Advanced media rendering capabilities to seamlessly blend heterogeneous content formats in real-time.
- Cloud-based processing capabilities and advanced transcoding and delivery strategies to enable scalability. In particular, the project is for the first time bringing the concept of the Multipoint Control Unit (MCU) to the volumetric world.
All these contributions are being developed with a focus on interoperability, simplicity to deploy, scalability, cost efficiency and backwards compatibility with current network infrastructures and resources.
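One of the orchestration problems listed above, delivering synchronized multi-party experiences, amounts to agreeing on a common playout point across clients with different delays. The sketch below illustrates one simple inter-destination synchronization strategy; the function names, report format and margin value are assumptions for illustration, not the project's protocol.

```python
def target_playout_point(client_reports, margin_s=0.3):
    """Inter-destination media synchronization sketch: an orchestrator collects
    each client's current playout position and one-way network delay, then
    picks a common target timestamp that every client can actually reach.
    client_reports: {client_id: (playout_pos_s, network_delay_s)}
    Returns the media timestamp (seconds) all clients should converge to."""
    # The most lagged client dictates the pace; a small margin absorbs
    # jitter without forcing constant re-synchronization.
    slowest = min(pos for pos, _ in client_reports.values())
    max_delay = max(delay for _, delay in client_reports.values())
    return slowest + max_delay + margin_s

def adjustment(client_pos, target):
    """Per-client correction: positive -> skip ahead, negative -> slow down."""
    return target - client_pos
```

In practice such corrections are applied gradually (playback-rate adaptation rather than hard seeks) to keep them imperceptible, but the control loop has this shape.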
New metrics and evaluation methods for Social VR
The objective relates to the development of novel Quality of Experience (QoE) metrics and methods for the evaluation of Social VR.
VRTogether has designed a novel protocol, a set of metrics, and the associated analysis toolset for evaluating Social VR. The impact of these results may go beyond the project, since they can become de facto standard processes for evaluating Social VR as a new medium.
The protocol and metrics include both quantitative and qualitative methods: a new questionnaire that combines presence, immersion and togetherness; objective metrics based on system performance; and models of user-perceived quality (e.g., point cloud objective quality metrics) and user behaviour (e.g., navigation patterns).
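To give a flavour of what a point cloud objective quality metric looks like, the sketch below computes a simple colour PSNR between a reference and a degraded cloud. This is an illustrative toy, not the project's released implementation: published metrics use symmetric passes, perceptual colour spaces and k-d trees for speed, whereas this brute-force version only shows the core idea.

```python
import math

def color_psnr(reference, degraded):
    """Point-to-point colour PSNR sketch. For each reference point, find the
    geometrically nearest degraded point (brute force) and accumulate the
    squared colour error over the three 8-bit channels.
    Each point is ((x, y, z), (r, g, b))."""
    se, n = 0.0, 0
    for p_ref, c_ref in reference:
        # Nearest neighbour in the degraded cloud by geometric distance.
        _, c_deg = min(degraded, key=lambda q: math.dist(q[0], p_ref))
        se += sum((a - b) ** 2 for a, b in zip(c_ref, c_deg))
        n += 3
    mse = se / n
    # Identical colours give zero error, i.e. infinite PSNR.
    return float("inf") if mse == 0 else 10 * math.log10(255 ** 2 / mse)
```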
In order to maximize the impact, the project has followed a valorisation strategy based on standardisation and open science. The core results of the project on novel protocols and metrics for Social VR have been published in top venues (e.g., ACM CHI, IEEE VR, and IEEE Signal Processing Letters). These results have been used as inputs for both MPEG (ad-hoc group on Quality of Immersive Media) and ITU (ITU-T P.360-VR “Subjective test methodologies for 360º video on HMD”) standardisation bodies, and have been published as open source code (https://github.com/cwi-dis/point-cloud-color-metric). These activities aim to maximize the impact VRTogether can have on content creators, producers, distributors, tooling companies, service providers and the general audience.
Finally, the project is conducting a set of UX activities (objective and subjective testing, focus groups with professionals, etc.) to accurately gather and validate requirements for Social VR.
Maximize the impact of the project outcomes
The objective relates to maximizing the impact of VRTogether in the Social VR market through a clear and validated valorization and exploitation strategy, and to influencing the Social VR ecosystem through different types of communication channels.
The VRTogether project has structured its business plan activities into three phases. During the first phase, the main objective was to set up a continuous process of market analysis and evolution while reassessing the initial operating plan. The second phase was dedicated to developing and validating different business model canvases, associated with the technological components identified in the project, to prepare potential technology transfer or exploitation. In the third phase, the consortium is organizing dissemination activities, such as webinars and Joint Business Clinics, with the following goals:
- Analyze the existing exploitation plans, profile customer segments, re-assess market needs and revisit value propositions.
- Identify how different stakeholders valorize the (potential) outputs of the project, and set up both common and individual strategies to maximize the exploitation outputs.