Delivering your new VR social experience: our commitment within VRTogether
Although VR has been a buzzword for the past few years, VR experience delivery is still in its infancy. Some products are starting to use tiles and other techniques to broadcast VR videos, but real deployments still stream flat equirectangular videos. No standard exists to stream and merge 3D models (e.g. “3D videos” of persons) into a unified social VR experience. VRTogether comes to the rescue.
VRTogether is a consortium dedicated to social virtual reality. Our goal is to provide a social VR experience that mixes natural and artificial content (e.g. a movie and 3D-reconstructed persons). In one of the content samples developed, users can reenact an experience that looks like one they could have seen in a thriller.
The world of VR has behaved strangely lately. After the sudden boom (from Facebook’s $2bn acquisition of Oculus in 2014 to the market’s reality check in 2017), new HMDs were presented at CES 2018 that solve real issues: wider field of view, better displays, less motion sickness, standalone operation. Yet while the problems are getting solved, the solutions receive little coverage.
Social VR has been teased a lot during this VR frenzy. Who doesn’t remember that Facebook/Oculus conference with a room full of people wearing a Rift?
Beyond the awe this image provoked in many people, there is an undeniable fact: social VR is one of the futures of VR.
To get the full experience of social VR, your friends need to be represented in full 3D, with a feeling of interacting with them. There are some technical catches:
- As of now, there is no standard for streaming 3D videos. The two main competing technologies are meshes and point clouds, each at a different stage of standardization. Without a specification handled by a serious body (MPEG, W3C, etc.), device and browser integrations are absent. These are key points as of 2018, and we are working on them.
- Bandwidth limitation. A 3D representation of the users needs to be sent in real time, which currently requires a lot of bandwidth for a smooth experience: around 50 Mbps for a realistic representation.
- Latency. To interact convincingly with your friends, you must be able to see yourself and your friends moving instantly, without any perceptible delay. A self-representation should exhibit almost zero delay.
- Synchronization. Users should not perceive strange artifacts (e.g. lip-sync errors, freezes, “zombie” avatars, etc.).
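The ~50 Mbps figure above can be sanity-checked with a back-of-envelope calculation. The sketch below is illustrative only: the point count, bytes per point, frame rate, and compression ratio are all assumed values for this example, not VRTogether measurements.

```python
# Back-of-envelope bitrate estimate for streaming a point-cloud
# representation of a person. All numbers are illustrative assumptions.

def point_cloud_bitrate_mbps(points_per_frame: int,
                             bytes_per_point: int,
                             fps: int,
                             compression_ratio: float) -> float:
    """Compressed bitrate in megabits per second."""
    raw_bits_per_second = points_per_frame * bytes_per_point * 8 * fps
    return raw_bits_per_second / compression_ratio / 1e6

# Assumed: ~100k points per frame, 15 bytes per point
# (3 x 4-byte float coordinates + 3 x 1-byte RGB), 30 fps,
# and roughly 7:1 compression.
rate = point_cloud_bitrate_mbps(100_000, 15, 30, 7.0)
print(f"{rate:.0f} Mbps")  # prints "51 Mbps"
```

Even with aggressive compression, a single realistic user representation lands in the tens of megabits per second, which is why bandwidth remains a first-order constraint.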
The dust is settling, the VR mania has subsided, and now we can start to get really serious about it.
Who we are
Motion Spell, the company behind GPAC Licensing, will support that objective at NAB 2018 and at MPEG 122 in San Diego.
The consortium’s technical objective is to provide more reliable technological building blocks for packaging and delivering such experiences. As part of this, Motion Spell will deliver a first prototype within VRTogether Pilot 1, which will be available this semester.
Come and follow us on this VR journey with i2Cat, CWI, TNO, Future Lighthouse, CERTH, Artanim, Viaccess Orca and Entropy Studio.
This project has been funded by the European Commission as part of the H2020 program, under the grant agreement 762111.
Text and pictures: Rodolphe Fouquet – Motion Spell