<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Motion Spell &#8211; VRTogether</title>
	<atom:link href="https://vrtogether.eu/author/motion-spell/feed/" rel="self" type="application/rss+xml" />
	<link>https://vrtogether.eu</link>
	<description>An end-to-end system for the production and delivery of photorealistic and social virtual reality experiences</description>
	<lastBuildDate>Mon, 24 Aug 2020 13:58:15 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	

<image>
	<url>https://vrtogether.eu/wp-content/uploads/2018/04/cropped-iconavr-32x32.png</url>
	<title>Motion Spell &#8211; VRTogether</title>
	<link>https://vrtogether.eu</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>Towards a generic orchestrator: The VRTogether project experience, Part 3/3: API Detail</title>
		<link>https://vrtogether.eu/2020/08/20/towards-a-generic-orchestrator-the-vrtogether-project-experiencepart-3-3-api-detail/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=towards-a-generic-orchestrator-the-vrtogether-project-experiencepart-3-3-api-detail</link>
		
		<dc:creator><![CDATA[Motion Spell]]></dc:creator>
		<pubDate>Thu, 20 Aug 2020 14:56:47 +0000</pubDate>
				<category><![CDATA[News]]></category>
		<guid isPermaLink="false">https://vrtogether.eu/?p=2433</guid>

					<description><![CDATA[<p>The post <a rel="nofollow" href="https://vrtogether.eu/2020/08/20/towards-a-generic-orchestrator-the-vrtogether-project-experiencepart-3-3-api-detail/">Towards a generic orchestrator: The VRTogether project experience&lt;br&gt;&lt;h4&gt;Part 3/3: API Detail &lt;/h4&gt;</a> appeared first on <a rel="nofollow" href="https://vrtogether.eu">VRTogether</a>.</p>
]]></description>
										<content:encoded><![CDATA[<div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-12"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p>In the third and last of our series of articles on the VRTogether Orchestrator (<em>you can read <a href="https://vrtogether.eu/2020/06/05/towards-a-generic-orchestrator-the-vrtogether-project-experiencepart-1-3-overview/">Part 1</a> and <a href="https://vrtogether.eu/2020/07/13/towards-a-generic-orchestrator-the-vrtogether-project-experiencepart-2-3-technological-choices/">Part 2</a> if you missed them</em>) we cover the details of its current API.</p>
<p>In the following paragraphs, we will go into the details of each category of services:</p>
<h5><strong>Authentication and logging</strong></h5>
<p>The first step a user takes to access the VRT platform is to authenticate using their credentials. This authentication phase is kept very simple so far, as it is assumed to be executed in a safe environment. The orchestrator currently provides two main functions:</p>
<pre><em>Login(userName, userPassword)</em> 
<em>Logout()
</em></pre>
<h5><strong>Session management</strong></h5>
<p>Regarding the management of the platform sessions, the orchestrator provides a set of exposed API functions to handle the multiple aspects of a collaborative and scripted experience. The Core orchestrator integrates an internal data model based on four main objects:</p>
<ul>
<li><strong>User</strong>: a person who wants to share an immersive social experience with other people.</li>
<li><strong>Session</strong>: a session gathers users that want to share an immersive social experience together, based on a Scenario instance.</li>
<li><strong>Scenario</strong>: a scenario refers to a virtual world composed of different locations called rooms. The scenario can include the description of the underlying logic.</li>
<li><strong>Room</strong>: a room is a virtual location which is part of the scenario instance.</li>
</ul>
<p>When a user creates a session, a scenario must be attached to it. A scenario, chosen among the available ones, is then instantiated and bound to the session. This attached scenario is called a ScenarioInstance. Several sessions can create a ScenarioInstance from the same Scenario; each ScenarioInstance runs its own logic independently (scene events, gathering users into rooms, etc.).</p>
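<p>As a rough illustration of this data model, the sketch below models Sessions, Scenarios, ScenarioInstances, and Rooms in plain JavaScript. All names and structures are hypothetical, not the actual orchestrator implementation; it only shows that two sessions instantiating the same Scenario get independent instances, and that a session has at most one master user.</p>

```javascript
// Hypothetical sketch of the orchestrator's internal data model.
class Scenario {
  constructor(id, name, roomNames) {
    this.id = id;
    this.name = name;
    this.roomNames = roomNames;
  }
  // Each session gets its own independent ScenarioInstance.
  instantiate(sessionId) {
    return {
      scenarioId: this.id,
      sessionId,
      rooms: this.roomNames.map((n, i) => ({
        roomId: `${sessionId}-room-${i}`, name: n, users: [],
      })),
    };
  }
}

class Session {
  constructor(id, name, scenario, creatorUserId) {
    this.id = id;
    this.name = name;
    this.users = [creatorUserId];
    this.master = null;                              // at most one master user
    this.scenarioInstance = scenario.instantiate(id); // bound at creation
  }
  join(userId, canBeMaster = false) {
    this.users.push(userId);
    if (canBeMaster && this.master === null) this.master = userId;
  }
}

// Two sessions instantiate the same Scenario but run independently.
const scenario = new Scenario('sc1', 'Murder mystery', ['lobby', 'apartment']);
const s1 = new Session('se1', 'Friday night', scenario, 'alice');
const s2 = new Session('se2', 'Rehearsal', scenario, 'bob');
s1.join('carol', true); // carol becomes the master of s1
```

Note how the master may differ from the session creator: here `alice` created the session but `carol` claimed the master role.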
<p>Each session can have at most one “master” user (which might be different from the user that created the session). The master user is responsible for gathering and resolving the various interaction events emitted by all the users within the session. In practice, the master user is running the “server game loop” that ensures the scheduling and the consistency of the scenario logics during the session.</p>
<p>The basic representation of the internal data model is based on the following schema:</p>
<p>Regarding session management, the orchestrator provides a set of functions that first allow retrieving information about scenarios and instantiated scenarios:</p>
<pre>GetScenarios()
GetScenarioInfo(scenarioId)
GetScenarioInstanceInfo(scenarioId)
</pre>
<p>Then there are several functions to create, join, leave, delete, or get information on sessions and rooms:</p>
<pre><em>AddSession(sessionName, sessionDescription, scenarioId, [canBeMaster])</em>
<em>DeleteSession(sessionId)</em>
<em>GetSessions()</em>
<em>GetSessionInfo()</em>
<em>JoinSession(sessionId, [canBeMaster])</em>
<em>LeaveSession()</em>

<em>GetRooms()</em>
<em>GetRoomInfo()</em>
<em>JoinRoom(roomId)</em>
<em>LeaveRoom()
</em></pre>
<p>And finally, there are functions to start and stop a scenario: a scenario has an internal clock and specific logic that launches events and actions at specific times.</p>
<pre><em>StartScenario()
RestartScenario()
StopScenario()
</em></pre>
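<p>The notion of a scenario with an internal clock and timed logic can be sketched as below. This is purely illustrative (the event names and runner shape are invented, not part of the actual API): a started scenario measures elapsed time and fires each declared event once its time is reached.</p>

```javascript
// Minimal sketch of a scenario's internal clock and timed logic,
// as implied by StartScenario/StopScenario. Illustrative only.
function createScenarioRunner(timedEvents) {
  let startedAt = null;
  const fired = [];
  return {
    start(now) { startedAt = now; },
    // Called periodically; fires every event whose time has been reached.
    tick(now) {
      if (startedAt === null) return fired;
      const elapsed = now - startedAt;
      for (const ev of timedEvents) {
        if (!fired.includes(ev.name) && elapsed >= ev.at) fired.push(ev.name);
      }
      return fired;
    },
    stop() { startedAt = null; },
  };
}

const runner = createScenarioRunner([
  { at: 0,  name: 'open-lobby' },   // hypothetical scene events
  { at: 10, name: 'start-act-1' },
]);
runner.start(1000);
runner.tick(1005); // only 'open-lobby' has fired so far
runner.tick(1012); // now both events have fired
```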
<p>The state diagram below shows how a user connects to the orchestrator, creates a session with a scenario, joins this session, and then joins a room of the scenario.</p>
<h3><img loading="lazy" class="size-full wp-image-2439 aligncenter" src="https://vrtogether.eu/wp-content/uploads/2020/08/diagram1.jpg" alt="" width="606" height="418" srcset="https://vrtogether.eu/wp-content/uploads/2020/08/diagram1.jpg 606w, https://vrtogether.eu/wp-content/uploads/2020/08/diagram1-300x207.jpg 300w, https://vrtogether.eu/wp-content/uploads/2020/08/diagram1-410x283.jpg 410w, https://vrtogether.eu/wp-content/uploads/2020/08/diagram1-100x69.jpg 100w, https://vrtogether.eu/wp-content/uploads/2020/08/diagram1-275x190.jpg 275w" sizes="(max-width: 606px) 100vw, 606px" /></h3>
<p>&nbsp;</p>
<h5><strong>User data and message communication</strong></h5>
<p>This category includes simple but very powerful functions to handle specific user data and the ways users can communicate.</p>
<p>First, there are functions to get information on connected users or on the users in a session.</p>
<pre><em>GetUsers()</em>
<em>GetUserInfo([userId])
</em></pre>
<p>The orchestrator then provides a set of functions to manage user data. User data hold a set of properties bound to each user (e.g. the URLs to access the user&#8217;s audio / video / point cloud streams).</p>
<p>These data are stored on the orchestrator and retrieved by the user after logging in; they can also be updated.</p>
<pre><em>GetUserData([userId])</em>
<em>UpdateUserData(userDataKey, userDataValue)</em>
<em>UpdateUserDataArray(userDataArray)</em>
<em>UpdateUserDataJson(userDataJson)</em>
<em>ClearUserData()
</em></pre>
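<p>Conceptually, this user-data service behaves like a per-user key/value store. The sketch below is a hypothetical server-side model of it (camelCase helper names are ours, mirroring the API above, not the real implementation):</p>

```javascript
// Illustrative per-user key/value store, e.g. holding the URLs of a
// user's audio / video / point-cloud streams. Hypothetical names.
const userData = new Map(); // userId -> { key: value }

function updateUserData(userId, key, value) {
  const data = userData.get(userId) || {};
  data[key] = value;
  userData.set(userId, data);
}

function getUserData(userId) {
  return userData.get(userId) || {};
}

function clearUserData(userId) {
  userData.delete(userId);
}

// A user publishes its stream URLs after logging in (URLs are invented).
updateUserData('alice', 'pcStreamUrl', 'https://example.org/alice/pc');
updateUserData('alice', 'audioStreamUrl', 'https://example.org/alice/audio');
```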
<p>The human behind a user can communicate with other humans by sending textual messages to one specific user or to all users (&#8220;chat&#8221; functionality).</p>
<pre><em>SendMessageToAll(message)
SendMessage(userId, message)
</em></pre>
<p>The orchestrator also includes several functions to handle the dispatching of scene events between users. The orchestrator has no direct knowledge of the scene event commands, or even of the format of the events themselves. Its role is simply to dispatch them, with a logic similar to what can be found in some game engines: within a session, one user can be declared the master. The master is the one that takes decisions regarding the session. Users send their events to the master, which processes them and then dispatches the processed events to one user or to all users.</p>
<pre><em>SendSceneEventToMaster(sceneEventData)
SendSceneEventToUser(userId, sceneEventData)
SendSceneEventToAllUsers(sceneEventData)
SendSceneEventToUserDirect(userId, sceneEventData)
SendSceneEventToAllUsersDirect(sceneEventData)
</em></pre>
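<p>The master-based relay logic can be sketched as follows. This is a simplified, hypothetical model (it omits the &#8220;Direct&#8221; variants and treats payloads as opaque strings, exactly because the orchestrator does not inspect event contents):</p>

```javascript
// Sketch of master-based scene-event dispatch: the orchestrator relays
// opaque event payloads without inspecting them. Simplified model.
function createEventRouter(session) {
  const inbox = {}; // userId -> list of received events
  for (const u of session.users) inbox[u] = [];
  return {
    // A regular user sends an event; it is forwarded to the master.
    sendSceneEventToMaster(fromUser, payload) {
      inbox[session.master].push({ from: fromUser, payload });
    },
    // The master dispatches a processed event to one user...
    sendSceneEventToUser(toUser, payload) {
      inbox[toUser].push({ from: session.master, payload });
    },
    // ...or to all users in the session.
    sendSceneEventToAllUsers(payload) {
      for (const u of session.users) {
        inbox[u].push({ from: session.master, payload });
      }
    },
    inbox,
  };
}

const router = createEventRouter({ users: ['alice', 'bob', 'carol'], master: 'alice' });
router.sendSceneEventToMaster('bob', '{"action":"grab","object":"key"}');
router.sendSceneEventToAllUsers('{"action":"door-opens"}');
// alice (the master) now holds bob's event plus the broadcast.
```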
<h5><strong>Pilot and monitor delivery components</strong></h5>
<p>VRTogether features an SFU (Stream Forwarding Unit) that duplicates streams to all the users of the media session:</p>
<pre><em>GetSfuInfo()
GetSfuData()
</em></pre>
<p>The VRTogether experience also includes an external user (a &#8220;fake&#8221; user called the &#8220;Live Presenter&#8221;) that is handled separately from the other users:</p>
<pre><em>GetLivePresenterInfo()
GetLivePresenterData()
</em></pre>
<p>For now, the SFU &#8220;pool&#8221; (the set of available SFU units) is configured statically, by declaring ports and other information. The instantiation of SFU instances, however, is performed dynamically, following either a simple algorithm (such as one SFU per session) or a more complex one (such as round-robin) to match VM or server resources to the number of sessions and users. The module that pilots the other components opens the way to exploring new challenges with regard to scalability.</p>
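<p>A round-robin allocation over a static SFU pool, as mentioned above, could look like the sketch below. The pool contents and host names are invented for illustration:</p>

```javascript
// Sketch of round-robin allocation of sessions over a static SFU pool.
function createSfuPool(sfus) {
  let next = 0;
  const assignment = new Map(); // sessionId -> SFU
  return {
    allocate(sessionId) {
      const sfu = sfus[next % sfus.length]; // cycle through the pool
      next += 1;
      assignment.set(sessionId, sfu);
      return sfu;
    },
    lookup(sessionId) { return assignment.get(sessionId); },
  };
}

const pool = createSfuPool([
  { host: 'sfu-a.example.org', port: 9000 }, // statically declared units
  { host: 'sfu-b.example.org', port: 9000 },
]);
pool.allocate('se1'); // goes to sfu-a
pool.allocate('se2'); // goes to sfu-b
pool.allocate('se3'); // wraps around to sfu-a
```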
<h5><strong>Common time provider</strong></h5>
<p>The time API is rather simple, because the role of the orchestrator is to forward messages between components. The orchestrator has no responsibility for synchronizing objects; it provides this clock API for convenience.</p>
<pre><em>GetNTPTime()
</em></pre>
<p>Please note that the NTP clock was also evaluated against other clocks (PTP, etc.) and integrated clock mechanisms (DVB-CSS). It was decided to keep the orchestration clock layer as thin as possible.</p>
<h5><strong>Logs and Analytics framework</strong></h5>
<p>Some log functions have been added to collect logs from other modules, such as the SFU or the Live Presenter. Retrieval is not provided through a socket.io function but through a dedicated web server on another port, with a request such as http://&lt;server_url&gt;:8081.</p>
<p>Access to those logs is particularly relevant for client developers, who can retrieve live logs from the media distribution modules and use them to validate or debug their own client implementation.</p>
<p>Other log functions are going to be added to provide analytics on specific parameters (synchronization, bandwidth, number of streams, etc.). These functions will allow developers to improve and tune their own client implementations.</p>
<h5><strong>Media transmission backup layer</strong></h5>
<p>The orchestrator also integrates specific backup functions to transmit any kind of stream from one user to other users. These do not include any synchronization signaling (or transcoding capabilities), since the layer forwards raw packets.</p>
<p>This layer allows a user to declare any number of typed streams (the type is a simple textual descriptor that must help the receiver to decode the stream) that can be transmitted through the Orchestrator API.</p>
<p>Then, any other user within the same session can register to these streams and then be notified for incoming data from these streams.</p>
<p>Typed streams declaration is made by the following API methods:</p>
<pre><em>DeclareDataStream(dataStreamKind, dataStreamDescription)
RemoveDataStream(dataStreamKind)
RemoveAllDataStreams()
</em></pre>
<p>Then, data can be pushed to the Orchestrator with the following API method:</p>
<pre><em>SendData(dataStreamKind, data)
</em></pre>
<p>Other users can be informed of available data streams with the following API method:</p>
<pre><em>GetAvailableDataStreams(dataStreamUserId)
</em></pre>
<p>Then, they can manage registration to data streams by the following API methods:</p>
<pre><em>RegisterForDataStream(dataStreamUserId, dataStreamKind)
UnregisterFromDataStream(dataStreamUserId, dataStreamKind)
UnregisterFromAllDataStreams()
GetRegisteredDataStreams()
</em></pre>
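<p>The declare / register / send flow of this typed-stream layer amounts to a small publish&#8211;subscribe hub, sketched below. The hub shape, the &#8220;pose&#8221; stream kind, and the delivery list are hypothetical, introduced only to show the mechanics:</p>

```javascript
// Sketch of the typed data-stream backup layer: a user declares streams,
// peers register for them, and SendData notifies registered peers.
function createStreamHub() {
  const streams = new Map(); // "ownerId/kind" -> { description, subscribers }
  const delivered = [];      // record of notifications: { to, from, kind, data }
  return {
    declareDataStream(ownerId, kind, description) {
      streams.set(`${ownerId}/${kind}`, { description, subscribers: new Set() });
    },
    registerForDataStream(subscriberId, ownerId, kind) {
      streams.get(`${ownerId}/${kind}`).subscribers.add(subscriberId);
    },
    // The hub forwards data opaquely; the kind descriptor helps receivers decode it.
    sendData(ownerId, kind, data) {
      for (const to of streams.get(`${ownerId}/${kind}`).subscribers) {
        delivered.push({ to, from: ownerId, kind, data });
      }
    },
    delivered,
  };
}

const hub = createStreamHub();
hub.declareDataStream('alice', 'pose', 'head pose, JSON-encoded'); // hypothetical kind
hub.registerForDataStream('bob', 'alice', 'pose');
hub.sendData('alice', 'pose', '{"yaw":0.3}'); // bob is notified
```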
<p>For audio management, a simpler and more direct approach can also be used, with the following method:</p>
<pre><em>PushAudio(audioData)
</em></pre>
<p>When a user uses this method for pushing audio, any other user within the same session is notified with the audio data, with no registration needed.</p>
<h5><strong>Towards a generic orchestrator and conclusion</strong></h5>
<p>This series of articles was an in-depth overview of the VRTogether orchestration challenges and how we overcame them. Orchestration is neither a new problem nor a solved one.</p>
<p>At the beginning of the project, we hoped to find a project that would provide us with a generic orchestration layer, or to be able to reuse one of the partners&#8217; existing components. However, this proved to be a dead end for us. Thinking deeply about what our core business was allowed us to draft a quite extensive but really simple API.</p>
<p>Stay safe, and we look forward to seeing you soon!</p>

		</div>
	</div>
</div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-12"><div class="vc_column-inner"><div class="wpb_wrapper"><div class="vc_row wpb_row vc_inner vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-12"><div class="vc_column-inner"><div class="wpb_wrapper"><div class="vc_separator wpb_content_element vc_separator_align_center vc_sep_width_100 vc_sep_pos_align_center vc_separator_no_text vc_sep_color_grey" ><span class="vc_sep_holder vc_sep_holder_l"><span  class="vc_sep_line"></span></span><span class="vc_sep_holder vc_sep_holder_r"><span  class="vc_sep_line"></span></span>
</div></div></div></div></div>
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p>Author: <a href="https://vrtogether.eu/consortium/motion-spell/">Motion Spell</a></p>
<p>Come and follow us in this VR journey with <a href="https://vrtogether.eu/consortium/i2cat/">i2CAT</a>, <a href="https://vrtogether.eu/consortium/cwi/">CWI</a>, <a href="https://vrtogether.eu/consortium/tno/">TNO</a>, <a href="https://vrtogether.eu/consortium/certh/">CERTH</a>, <a href="https://vrtogether.eu/consortium/artanim/">Artanim</a>, <a href="https://vrtogether.eu/consortium/viaccess-orca/">Viaccess-Orca</a>, <a href="https://vrtogether.eu/consortium/the_mo/">TheMo</a> and <a href="https://vrtogether.eu/consortium/motion-spell/">Motion Spell</a>.</p>

		</div>
	</div>
</div></div></div></div><section class="vc_section"><div class="vc_row wpb_row vc_row-fluid vc_column-gap-10"><div class="wpb_column vc_column_container vc_col-sm-12"><div class="vc_column-inner"><div class="wpb_wrapper"><div class="vc_row wpb_row vc_inner vc_row-fluid vc_custom_1547799200116 vc_row-has-fill vc_row-o-content-middle vc_row-flex"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_center">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="75" height="50" src="https://vrtogether.eu/wp-content/uploads/2019/01/EU_flag_75px.png" class="vc_single_image-img attachment-thumbnail" alt="" loading="lazy" /></div>
		</figure>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p>This project has been funded by the European Commission as part of the H2020 program, under the grant agreement 762111.</p>

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div></div></div></div></div></section>
<p>The post <a rel="nofollow" href="https://vrtogether.eu/2020/08/20/towards-a-generic-orchestrator-the-vrtogether-project-experiencepart-3-3-api-detail/">Towards a generic orchestrator: The VRTogether project experience&lt;br&gt;&lt;h4&gt;Part 3/3: API Detail &lt;/h4&gt;</a> appeared first on <a rel="nofollow" href="https://vrtogether.eu">VRTogether</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Towards a generic orchestrator: The VRTogether project experience, Part 2/3: technological choices</title>
		<link>https://vrtogether.eu/2020/07/13/towards-a-generic-orchestrator-the-vrtogether-project-experiencepart-2-3-technological-choices/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=towards-a-generic-orchestrator-the-vrtogether-project-experiencepart-2-3-technological-choices</link>
		
		<dc:creator><![CDATA[Motion Spell]]></dc:creator>
		<pubDate>Mon, 13 Jul 2020 09:05:18 +0000</pubDate>
				<category><![CDATA[News]]></category>
		<guid isPermaLink="false">https://vrtogether.eu/?p=2361</guid>

					<description><![CDATA[<p>The post <a rel="nofollow" href="https://vrtogether.eu/2020/07/13/towards-a-generic-orchestrator-the-vrtogether-project-experiencepart-2-3-technological-choices/">Towards a generic orchestrator: The VRTogether project experience&lt;br&gt;&lt;h4&gt;Part 2/3: technological choices &lt;/h4&gt;</a> appeared first on <a rel="nofollow" href="https://vrtogether.eu">VRTogether</a>.</p>
]]></description>
										<content:encoded><![CDATA[<div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-12"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<h6><em>This is the second of a series of articles about the VRTogether Orchestrator. <a href="https://vrtogether.eu/2020/06/05/towards-a-generic-orchestrator-the-vrtogether-project-experiencepart-1-3-overview/">Read Part 1 here</a>.</em></h6>

		</div>
	</div>

	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p>In terms of technologies, the central Orchestrator component relies on the Node.js open-source platform, and its integrated API is implemented in JavaScript. The protocol exchange benefits from the socket.io API, distributed under the MIT license, which enables real-time communication between connected nodes. The Core orchestrator has been implemented by <a href="https://vrtogether.eu/consortium/motion-spell/">Motion Spell</a> and is provided as an open-source module.</p>
<p>On the user side, the orchestrator framework is integrated through a glue layer. This layer, developed within the Unity game engine, is implemented in C# and provides the client-side API that must be wrapped in order to connect to the Orchestrator. The glue layer is a proprietary software component that can be requested from <a href="https://vrtogether.eu/consortium/viaccess-orca/">Viaccess-Orca</a>; it relies on the BestHTTP Unity framework to support socket.io communication.</p>
<p>The API uses JSON schemas to define the structure of client requests and server responses.<br />
This makes it possible to validate the compliance of the messages exchanged between the orchestrator (server) and the users (clients): for each command, a JSON schema is defined for both the client request and the server response.</p>
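<p>To make this concrete, here is a minimal stand-in for such a check on the <em>Login</em> request. A real deployment would use a proper JSON Schema validator; this hand-rolled sketch, with an invented schema object, only verifies required fields and their types:</p>

```javascript
// Hand-rolled stand-in for JSON-schema validation of a command request.
// The schema shape below is hypothetical, not the project's actual schema.
const loginRequestSchema = {
  required: { userName: 'string', userPassword: 'string' },
};

// Returns true only if every required field is present with the right type.
function validate(schema, message) {
  return Object.entries(schema.required).every(
    ([field, type]) => typeof message[field] === type
  );
}

validate(loginRequestSchema, { userName: 'alice', userPassword: 's3cret' }); // valid
validate(loginRequestSchema, { userName: 'alice' }); // invalid: password missing
```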
<p>The diagram below shows how the orchestrator has been integrated into the full architecture of the VRT platform.</p>
<p><img loading="lazy" class="alignnone wp-image-2362 size-full" src="https://vrtogether.eu/wp-content/uploads/2020/07/diagram2.png" alt="" width="1816" height="1386" srcset="https://vrtogether.eu/wp-content/uploads/2020/07/diagram2.png 1816w, https://vrtogether.eu/wp-content/uploads/2020/07/diagram2-300x229.png 300w, https://vrtogether.eu/wp-content/uploads/2020/07/diagram2-1024x782.png 1024w, https://vrtogether.eu/wp-content/uploads/2020/07/diagram2-768x586.png 768w, https://vrtogether.eu/wp-content/uploads/2020/07/diagram2-1536x1172.png 1536w, https://vrtogether.eu/wp-content/uploads/2020/07/diagram2-700x534.png 700w, https://vrtogether.eu/wp-content/uploads/2020/07/diagram2-410x313.png 410w, https://vrtogether.eu/wp-content/uploads/2020/07/diagram2-100x76.png 100w, https://vrtogether.eu/wp-content/uploads/2020/07/diagram2-275x210.png 275w" sizes="(max-width: 1816px) 100vw, 1816px" /></p>
<p>In the next article we&#8217;ll investigate the details of the current orchestrator API. Stay safe, and we look forward to seeing you soon!</p>

		</div>
	</div>
</div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-12"><div class="vc_column-inner"><div class="wpb_wrapper"><div class="vc_row wpb_row vc_inner vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-12"><div class="vc_column-inner"><div class="wpb_wrapper"><div class="vc_separator wpb_content_element vc_separator_align_center vc_sep_width_100 vc_sep_pos_align_center vc_separator_no_text vc_sep_color_grey" ><span class="vc_sep_holder vc_sep_holder_l"><span  class="vc_sep_line"></span></span><span class="vc_sep_holder vc_sep_holder_r"><span  class="vc_sep_line"></span></span>
</div></div></div></div></div>
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p>Author: <a href="https://vrtogether.eu/consortium/motion-spell/">Motion Spell</a></p>
<p>Cover image: <a href="https://www.freepik.com/free-photos-vectors/technology" target="_blank" rel="noopener noreferrer">Freepik</a></p>
<p>Come and follow us in this VR journey with <a href="https://vrtogether.eu/consortium/i2cat/">i2CAT</a>, <a href="https://vrtogether.eu/consortium/cwi/">CWI</a>, <a href="https://vrtogether.eu/consortium/tno/">TNO</a>, <a href="https://vrtogether.eu/consortium/certh/">CERTH</a>, <a href="https://vrtogether.eu/consortium/artanim/">Artanim</a>, <a href="https://vrtogether.eu/consortium/viaccess-orca/">Viaccess-Orca</a>, <a href="https://vrtogether.eu/consortium/the_mo/">TheMo</a> and <a href="https://vrtogether.eu/consortium/motion-spell/">Motion Spell</a>.</p>

		</div>
	</div>
</div></div></div></div><section class="vc_section"><div class="vc_row wpb_row vc_row-fluid vc_column-gap-10"><div class="wpb_column vc_column_container vc_col-sm-12"><div class="vc_column-inner"><div class="wpb_wrapper"><div class="vc_row wpb_row vc_inner vc_row-fluid vc_custom_1547799200116 vc_row-has-fill vc_row-o-content-middle vc_row-flex"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_center">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="75" height="50" src="https://vrtogether.eu/wp-content/uploads/2019/01/EU_flag_75px.png" class="vc_single_image-img attachment-thumbnail" alt="" loading="lazy" /></div>
		</figure>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p>This project has been funded by the European Commission as part of the H2020 program, under the grant agreement 762111.</p>

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div></div></div></div></div></section>
<p>The post <a rel="nofollow" href="https://vrtogether.eu/2020/07/13/towards-a-generic-orchestrator-the-vrtogether-project-experiencepart-2-3-technological-choices/">Towards a generic orchestrator: The VRTogether project experience&lt;br&gt;&lt;h4&gt;Part 2/3: technological choices &lt;/h4&gt;</a> appeared first on <a rel="nofollow" href="https://vrtogether.eu">VRTogether</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Towards a generic orchestrator: The VRTogether project experience, Part 1/3: overview</title>
		<link>https://vrtogether.eu/2020/06/05/towards-a-generic-orchestrator-the-vrtogether-project-experiencepart-1-3-overview/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=towards-a-generic-orchestrator-the-vrtogether-project-experiencepart-1-3-overview</link>
		
		<dc:creator><![CDATA[Motion Spell]]></dc:creator>
		<pubDate>Fri, 05 Jun 2020 06:32:15 +0000</pubDate>
				<category><![CDATA[News]]></category>
		<guid isPermaLink="false">https://vrtogether.eu/?p=2214</guid>

					<description><![CDATA[<p>The post <a rel="nofollow" href="https://vrtogether.eu/2020/06/05/towards-a-generic-orchestrator-the-vrtogether-project-experiencepart-1-3-overview/">Towards a generic orchestrator: The VRTogether project experience&lt;br&gt;&lt;h4&gt;Part 1/3: overview&lt;/h4&gt;</a> appeared first on <a rel="nofollow" href="https://vrtogether.eu">VRTogether</a>.</p>
]]></description>
										<content:encoded><![CDATA[<div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-12"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p>VRTogether is a world-class <a href="https://vrtogether.eu/consortium/">consortium of research entities and companies</a> evaluating &#8220;togetherness&#8221; in VR. The VRTogether project focuses on <strong>real-time photo-realistic 3D capture</strong> and the integration of a <strong>live avatar</strong> into complex scenarios, allowing users to <strong>share virtual experiences together</strong>.</p>
<p>Since the main goal of the VRTogether project is to share virtual experiences between multiple users, each of whom provides to, and receives from, the other users a real-time photo-realistic 3D-captured live avatar, most of the effort has been concentrated on the research and development of the user <strong>3D capture &amp; encoding modules</strong> and the <strong>delivery chain</strong> that carries all the live 3D avatar streams between users.</p>
<p>But a component to orchestrate the users, handle the initialization of sessions with scenarios, manage the delivery chain that deals with stream transmission, and take care of scenario and user events turned out to be essential. Like a game engine, <strong>the Orchestrator component became the heart of the complete VRTogether</strong> solution: it schedules and controls the full execution of the components and the communication between users. The schema below shows the central role of the orchestrator component:</p>
<p><a href="https://vrtogether.eu/wp-content/uploads/2020/06/orchestrator.jpg"><img loading="lazy" class="aligncenter wp-image-2217" src="https://vrtogether.eu/wp-content/uploads/2020/06/orchestrator.jpg" alt="VRTogether Orchestrator diagram" width="601" height="313" srcset="https://vrtogether.eu/wp-content/uploads/2020/06/orchestrator.jpg 1178w, https://vrtogether.eu/wp-content/uploads/2020/06/orchestrator-300x156.jpg 300w, https://vrtogether.eu/wp-content/uploads/2020/06/orchestrator-1024x534.jpg 1024w, https://vrtogether.eu/wp-content/uploads/2020/06/orchestrator-768x400.jpg 768w, https://vrtogether.eu/wp-content/uploads/2020/06/orchestrator-700x365.jpg 700w, https://vrtogether.eu/wp-content/uploads/2020/06/orchestrator-410x214.jpg 410w, https://vrtogether.eu/wp-content/uploads/2020/06/orchestrator-100x52.jpg 100w, https://vrtogether.eu/wp-content/uploads/2020/06/orchestrator-275x143.jpg 275w" sizes="(max-width: 601px) 100vw, 601px" /></a></p>
<p>As shown above, the orchestrator provides a complete set of services to easily build and deploy a <strong>smart and unified collaborative platform</strong>, letting developers concentrate on user-module implementation and scenario specification in order to deliver a great shared virtual experience. The services provided by our VRT orchestrator can be split into seven categories:</p>
<ol>
<li><strong>Authentication and logging</strong>: validating the credentials sent by the users and logging them into the orchestrator.</li>
<li><strong>Session management</strong>: gathering users into sessions and rooms according to scenarios, and managing the execution of a scenario.</li>
<li><strong>Message communication</strong>: handling message exchange between users.</li>
<li><strong>Pilot and monitor delivery components</strong>: instantiating media distribution components according to sessions, and monitoring stream management.</li>
<li><strong>Common time provider</strong>: helping synchronize media on the user side.</li>
<li><strong>Logs and analytics framework</strong>: full logging and analytics services to help developers tune their user components.</li>
<li><strong>Media transmission backup layer</strong>: specific functions providing generic backup stream transmission.</li>
</ol>
<p>In the <a href="https://vrtogether.eu/2020/07/13/towards-a-generic-orchestrator-the-vrtogether-project-experiencepart-2-3-technological-choices/" target="_blank" rel="noopener noreferrer">next article</a> we&#8217;ll investigate the technological choices behind the VRT orchestration component. Stay safe, and we look forward to seeing you soon!</p>

		</div>
	</div>
</div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-12"><div class="vc_column-inner"><div class="wpb_wrapper"><div class="vc_row wpb_row vc_inner vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-12"><div class="vc_column-inner"><div class="wpb_wrapper"><div class="vc_separator wpb_content_element vc_separator_align_center vc_sep_width_100 vc_sep_pos_align_center vc_separator_no_text vc_sep_color_grey" ><span class="vc_sep_holder vc_sep_holder_l"><span  class="vc_sep_line"></span></span><span class="vc_sep_holder vc_sep_holder_r"><span  class="vc_sep_line"></span></span>
</div></div></div></div></div>
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p>Author: <a href="https://vrtogether.eu/consortium/motion-spell/">Motion Spell</a></p>
<p>Cover image: <a href="https://www.freepik.com/free-photos-vectors/background" target="_blank" rel="noopener noreferrer">Freepik</a></p>
<p>Come and follow us in this VR journey with <a href="https://vrtogether.eu/consortium/i2cat/">i2CAT</a>, <a href="https://vrtogether.eu/consortium/cwi/">CWI</a>, <a href="https://vrtogether.eu/consortium/tno/">TNO</a>, <a href="https://vrtogether.eu/consortium/certh/">CERTH</a>, <a href="https://vrtogether.eu/consortium/artanim/">Artanim</a>, <a href="https://vrtogether.eu/consortium/viaccess-orca/">Viaccess-Orca</a>, <a href="https://vrtogether.eu/consortium/the_mo/">TheMo</a> and <a href="https://vrtogether.eu/consortium/motion-spell/">Motion Spell</a>.</p>

		</div>
	</div>
</div></div></div></div><section class="vc_section"><div class="vc_row wpb_row vc_row-fluid vc_column-gap-10"><div class="wpb_column vc_column_container vc_col-sm-12"><div class="vc_column-inner"><div class="wpb_wrapper"><div class="vc_row wpb_row vc_inner vc_row-fluid vc_custom_1547799200116 vc_row-has-fill vc_row-o-content-middle vc_row-flex"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_center">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="75" height="50" src="https://vrtogether.eu/wp-content/uploads/2019/01/EU_flag_75px.png" class="vc_single_image-img attachment-thumbnail" alt="" loading="lazy" /></div>
		</figure>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p>This project has been funded by the European Commission as part of the H2020 program, under the grant agreement 762111.</p>

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div></div></div></div></div></section>
<p>The post <a rel="nofollow" href="https://vrtogether.eu/2020/06/05/towards-a-generic-orchestrator-the-vrtogether-project-experiencepart-1-3-overview/">Towards a generic orchestrator: The VRTogether project experience&lt;br&gt;&lt;h4&gt;Part 1/3: overview&lt;/h4&gt;</a> appeared first on <a rel="nofollow" href="https://vrtogether.eu">VRTogether</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>IBC 2019 report: VR still leads the innovation</title>
		<link>https://vrtogether.eu/2019/10/15/ibc-2019-report-vr-still-leads-the-innovation/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=ibc-2019-report-vr-still-leads-the-innovation</link>
		
		<dc:creator><![CDATA[Motion Spell]]></dc:creator>
		<pubDate>Tue, 15 Oct 2019 09:02:40 +0000</pubDate>
				<category><![CDATA[Events]]></category>
		<category><![CDATA[News]]></category>
		<guid isPermaLink="false">https://vrtogether.eu/?p=1818</guid>

					<description><![CDATA[<p>The post <a rel="nofollow" href="https://vrtogether.eu/2019/10/15/ibc-2019-report-vr-still-leads-the-innovation/">IBC 2019 report: VR still leads the innovation</a> appeared first on <a rel="nofollow" href="https://vrtogether.eu">VRTogether</a>.</p>
]]></description>
										<content:encoded><![CDATA[<div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-12"><div class="vc_column-inner"><div class="wpb_wrapper"><div class="vc_row wpb_row vc_inner vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-12"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p><span style="font-weight: 400;">The IBC (International Broadcasting Convention), held in Amsterdam every September, remains a major milestone for any company in the video technology industry, and for anyone who wants to take part in the future of media. It still appears to be the world&#8217;s most influential media &amp; technology show. This year (2019), the quantity and high quality of focused exhibitors drew over 56,000 attendees, the highest number in the show&#8217;s history. </span></p>
<p><span style="font-weight: 400;">VRTogether is a world-class consortium of research entities and companies evaluating the “togetherness” of VR. The results have been so promising that IBC granted the consortium a free booth in 2018 and one of the largest booths in the Future Zone in 2019. </span></p>
<p><span style="font-weight: 400;">This article first presents VR Together’s demonstrations, then summarizes the most interesting developments and innovations we saw at IBC that will impact Social VR in the coming months and years.</span></p>

		</div>
	</div>
</div></div></div></div><div class="vc_row wpb_row vc_inner vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper"><h5 style="text-align: left" class="vc_custom_heading" >VRTogether at IBC</h5>
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p><span style="font-weight: 400;">The </span><a href="https://show.ibc.org/features-overview/future-zone" target="_blank" rel="noopener"><span style="font-weight: 400;">Future Zone</span></a><span style="font-weight: 400;"> brought together the very latest ideas, innovations and concept technologies from international industry and academia, and showcased them in a single specially curated area where innovators meet. Alongside the booths, this zone offered open access to the </span><a href="https://www.theiabm.org/future-trends-theatre/" target="_blank" rel="noopener"><span style="font-weight: 400;">IABM Future Trends Theater</span></a><span style="font-weight: 400;"> and its </span><a href="https://www.theiabm.org/ibc-future-trends-theatre/" target="_blank" rel="noopener"><span style="font-weight: 400;">interesting program</span></a><span style="font-weight: 400;">. Major trends this year included 8K, 5G, and immersive media (VR/AR/XR). The VR Together booth was ideally located in the center of the Zone, with high visibility, and generated a lot of attention.</span></p>
<p>The VRT project demonstration focused on real-time photo-realistic 3D capture and the integration of a live avatar into a scenario lasting several minutes. Booth attendance was high and well qualified, and the VRT project received a lot of feedback from industry leaders, researchers, and tech wanderers who immediately perceived the potential of such a medium for the future. From healthcare to manufacturing, from education to sport, from training to social gaming, all industries plan to use VR.</p>
<p>Besides this booth, another VRT partner presented a paper introducing a further outcome of the project: a web-based social VR framework that allows social VR experiences to be rapidly developed, tested and evaluated.</p>
<p><span style="font-weight: 400;">Based on this framework, the paper presents an evaluation of six user experiences in both 360-degree and 3D volumetric VR. The paper is available here:</span></p>
<p><a href="https://show.ibc.org/__media/Files/Tech%20Papers%202019/-Simon-Gunkel.pdf" target="_blank" rel="noopener"><span style="font-weight: 400;">Everyday photo-realistic social VR: Communicate and collaborate with an enhanced co-presence and immersion</span></a></p>

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_center  vc_custom_1571127569663">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="1071" height="725" src="https://vrtogether.eu/wp-content/uploads/2019/10/image6.jpg" class="vc_single_image-img attachment-full" alt="" loading="lazy" srcset="https://vrtogether.eu/wp-content/uploads/2019/10/image6.jpg 1071w, https://vrtogether.eu/wp-content/uploads/2019/10/image6-300x203.jpg 300w, https://vrtogether.eu/wp-content/uploads/2019/10/image6-768x520.jpg 768w, https://vrtogether.eu/wp-content/uploads/2019/10/image6-1024x693.jpg 1024w, https://vrtogether.eu/wp-content/uploads/2019/10/image6-700x474.jpg 700w, https://vrtogether.eu/wp-content/uploads/2019/10/image6-410x278.jpg 410w, https://vrtogether.eu/wp-content/uploads/2019/10/image6-100x68.jpg 100w, https://vrtogether.eu/wp-content/uploads/2019/10/image6-275x186.jpg 275w" sizes="(max-width: 1071px) 100vw, 1071px" /></div>
		</figure>
	</div>

	<div  class="wpb_single_image wpb_content_element vc_align_left">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="596" height="596" src="https://vrtogether.eu/wp-content/uploads/2019/10/image5-e1571129529334.jpg" class="vc_single_image-img attachment-full" alt="" loading="lazy" srcset="https://vrtogether.eu/wp-content/uploads/2019/10/image5-e1571129529334.jpg 596w, https://vrtogether.eu/wp-content/uploads/2019/10/image5-e1571129529334-150x150.jpg 150w, https://vrtogether.eu/wp-content/uploads/2019/10/image5-e1571129529334-300x300.jpg 300w, https://vrtogether.eu/wp-content/uploads/2019/10/image5-e1571129529334-410x410.jpg 410w, https://vrtogether.eu/wp-content/uploads/2019/10/image5-e1571129529334-100x100.jpg 100w, https://vrtogether.eu/wp-content/uploads/2019/10/image5-e1571129529334-275x275.jpg 275w" sizes="(max-width: 596px) 100vw, 596px" /></div>
		</figure>
	</div>
</div></div></div></div><h5 style="text-align: left" class="vc_custom_heading" >Top star VR innovating technologies</h5>
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p><span style="font-weight: 400;">Besides the VRT booth, there were many other relevant VR booths. Some of them caught our attention. </span></p>

		</div>
	</div>
<h6 style="text-align: left" class="vc_custom_heading" >VR reconstruction for sport replays by Canon</h6>
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p><span style="font-weight: 400;">The Canon Group develops cutting-edge technologies to create a unique immersive experience placing the spectator in the middle of the action: the Free Viewpoint Video System. This system provides a video experience like never before, using Canon’s unrivaled imaging technologies (</span><span style="font-weight: 400;">https://global.canon/en/technology/frontier18.html</span><span style="font-weight: 400;">).</span></p>

		</div>
	</div>

	<div  class="wpb_single_image wpb_content_element vc_align_center  vc_custom_1571127754168">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="1178" height="412" src="https://vrtogether.eu/wp-content/uploads/2019/10/image8.png" class="vc_single_image-img attachment-full" alt="" loading="lazy" srcset="https://vrtogether.eu/wp-content/uploads/2019/10/image8.png 1178w, https://vrtogether.eu/wp-content/uploads/2019/10/image8-300x105.png 300w, https://vrtogether.eu/wp-content/uploads/2019/10/image8-768x269.png 768w, https://vrtogether.eu/wp-content/uploads/2019/10/image8-1024x358.png 1024w, https://vrtogether.eu/wp-content/uploads/2019/10/image8-700x245.png 700w, https://vrtogether.eu/wp-content/uploads/2019/10/image8-410x143.png 410w, https://vrtogether.eu/wp-content/uploads/2019/10/image8-100x35.png 100w, https://vrtogether.eu/wp-content/uploads/2019/10/image8-275x96.png 275w" sizes="(max-width: 1178px) 100vw, 1178px" /></div>
		</figure>
	</div>

	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p><span style="font-weight: 400;">You can view the same scene from various angles, changing to the perspective of an athlete on the field or any number of alternate viewpoints. Additionally, viewers can control both viewpoint and game time at will. This revolutionary technology dramatically changes how sports will be viewed. </span></p>

		</div>
	</div>
<h6 style="text-align: left" class="vc_custom_heading" >OMAF4CLOUD: Standards-enabled 360° video creation as a service</h6>
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p><span style="font-weight: 400;">Omnidirectional Media Format (OMAF) is a media standard for 360° content developed by the Moving Picture Experts Group (MPEG). To address the needs of advanced media processing and delivery services, MPEG is developing a new standard called Network-Based Media Processing (NBMP), which aims at more efficient, faster and cheaper media processing by leveraging public, private or hybrid cloud services. MPEG presented a very interesting paper that covers both the OMAF and NBMP standards. It also describes an end-to-end design and proof of concept delivering an immersive virtual reality experience to end users: </span><a href="https://show.ibc.org/__media/Files/Tech%20Papers%202019/G3-201-Yu-You.pdf" target="_blank" rel="noopener"><span style="font-weight: 400;">https://show.ibc.org/__media/Files/Tech%20Papers%202019/G3-201-Yu-You.pdf</span></a><span style="font-weight: 400;"> </span></p>

		</div>
	</div>
<div class="vc_row wpb_row vc_inner vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper"><h6 style="text-align: left" class="vc_custom_heading" >Nokia: Real-time decoding and AR playback of the emerging MPEG video-based point cloud compression standard</h6>
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p><span style="font-weight: 400;">Nokia presented the world’s first implementation of the upcoming MPEG standard for video-based point cloud compression (V-PCC) on today’s mobile hardware.  </span></p>
<p><span style="font-weight: 400;">As ISO/IEC 23090-5 is about to be published as an international standard, this first V-PCC implementation is an important asset to prove its relevance to the public.</span></p>
<p><span style="font-weight: 400;">Nokia, which won an award for this work, describes it in a paper:</span></p>
<p><a href="https://show.ibc.org/__media/Files/Tech%20Papers%202019/Sebastian-Schwarz.pdf"><span style="font-weight: 400;">https://show.ibc.org/__media/Files/Tech%20Papers%202019/Sebastian-Schwarz.pdf</span></a></p>

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_left">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="304" height="449" src="https://vrtogether.eu/wp-content/uploads/2019/10/image2.jpg" class="vc_single_image-img attachment-full" alt="" loading="lazy" srcset="https://vrtogether.eu/wp-content/uploads/2019/10/image2.jpg 304w, https://vrtogether.eu/wp-content/uploads/2019/10/image2-203x300.jpg 203w, https://vrtogether.eu/wp-content/uploads/2019/10/image2-100x148.jpg 100w, https://vrtogether.eu/wp-content/uploads/2019/10/image2-275x406.jpg 275w" sizes="(max-width: 304px) 100vw, 304px" /></div>
		</figure>
	</div>
</div></div></div></div><div class="vc_row wpb_row vc_inner vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper"><h6 style="text-align: left" class="vc_custom_heading" >ImAc : Immersive Accessibility</h6>
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p><span style="font-weight: 400;">The goal of Immersive Accessibility (ImAc), funded by the EU as part of the H2020 framework, is to explore how accessibility tools and access services can be integrated into immersive media and in particular 360-degree content. It is not acceptable that accessibility is regarded as an afterthought; rather it should be considered throughout the design, production and delivery process.</span></p>

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_left">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="665" height="339" src="https://vrtogether.eu/wp-content/uploads/2019/10/image1.jpg" class="vc_single_image-img attachment-full" alt="" loading="lazy" srcset="https://vrtogether.eu/wp-content/uploads/2019/10/image1.jpg 665w, https://vrtogether.eu/wp-content/uploads/2019/10/image1-300x153.jpg 300w, https://vrtogether.eu/wp-content/uploads/2019/10/image1-410x209.jpg 410w, https://vrtogether.eu/wp-content/uploads/2019/10/image1-100x51.jpg 100w, https://vrtogether.eu/wp-content/uploads/2019/10/image1-275x140.jpg 275w" sizes="(max-width: 665px) 100vw, 665px" /></div>
		</figure>
	</div>

	<div  class="wpb_single_image wpb_content_element vc_align_left">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="669" height="501" src="https://vrtogether.eu/wp-content/uploads/2019/10/image3.jpg" class="vc_single_image-img attachment-full" alt="" loading="lazy" srcset="https://vrtogether.eu/wp-content/uploads/2019/10/image3.jpg 669w, https://vrtogether.eu/wp-content/uploads/2019/10/image3-300x225.jpg 300w, https://vrtogether.eu/wp-content/uploads/2019/10/image3-410x307.jpg 410w, https://vrtogether.eu/wp-content/uploads/2019/10/image3-100x75.jpg 100w, https://vrtogether.eu/wp-content/uploads/2019/10/image3-275x206.jpg 275w" sizes="(max-width: 669px) 100vw, 669px" /></div>
		</figure>
	</div>
</div></div></div></div><h5 style="text-align: left" class="vc_custom_heading" >What are the next challenges?</h5>
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p><span style="font-weight: 400;">Some professionals in the video industry see VR as a declining trend. From the VRT project&#8217;s point of view, we don&#8217;t, and we are not alone in this view. Head-mounted displays are cumbersome, and this is the main reason why AR and VR have converged into XR. </span></p>
<p><span style="font-weight: 400;">One common denominator of many VR projects is their ability to handle 3D volumetric data, particularly the compression of dynamic 3D point cloud data. The main philosophy behind V-PCC is to leverage existing video codecs to compress the geometry and texture information of dynamic point clouds. This is achieved by converting the point cloud into a set of video sequences. In particular, three video sequences are generated: one capturing the geometry information, one capturing the texture information of the point cloud data, and one describing the occupancy in 3D space. These are then compressed using existing video codecs such as MPEG-4 AVC, HEVC or AV1. </span></p>
<p><span style="font-weight: 400;">V-PCC won people over at IBC for several reasons: </span></p>
<p><span style="font-weight: 400;">1) The video-based approach relies on existing video codecs (e.g. HEVC) and can benefit from existing hardware acceleration. MPEG provides a </span><a href="https://github.com/MPEGGroup/mpeg-pcc-tmc2" target="_blank" rel="noopener"><span style="font-weight: 400;">reference software</span></a><span style="font-weight: 400;">.</span></p>
<p><span style="font-weight: 400;">2) The standardization of the codec and transport parts will soon be finalized, by the end of 2019.</span></p>
<p><span style="font-weight: 400;">3) The compression ratio is impressive. The lack of available content makes bandwidth predictions difficult, but we are talking about a few Mbps for a realistic human representation.</span></p>
<p><span style="font-weight: 400;">4) The approach is compatible with both AR and VR.</span></p>
<p><span style="font-weight: 400;">The only drawback at this stage is the capture: it is either slow, expensive or low quality. The main challenge ahead is to capture high-quality, photo-realistic 3D volumetric data in real time. We are actively looking for a solution. Don&#8217;t hesitate to</span><a href="https://www.gpac-licensing.com/contact/" target="_blank" rel="noopener"> <span style="font-weight: 400;">share your thoughts with us</span></a><span style="font-weight: 400;">!</span></p>
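<p>As a rough sketch of this three-stream decomposition (simplified to a single orthographic patch; all names are hypothetical, not part of the V-PCC specification), a toy point-cloud packer in Python might look like this:</p>

```python
import numpy as np

def pack_point_cloud(points, colors, res=64):
    """Project a point cloud onto a 2D grid (one orthographic patch),
    producing the three maps that V-PCC encodes as separate video streams:
    geometry (depth), texture (color) and occupancy (which pixels are valid).
    Real V-PCC segments the cloud into many patches packed into an atlas."""
    geometry = np.zeros((res, res), dtype=np.uint16)   # per-pixel depth
    texture = np.zeros((res, res, 3), dtype=np.uint8)  # per-pixel RGB
    occupancy = np.zeros((res, res), dtype=np.uint8)   # 1 = pixel holds a point
    for (x, y, z), rgb in zip(points, colors):
        # map normalized [0,1] coordinates to pixel positions
        u = int(x * (res - 1))
        v = int(y * (res - 1))
        depth = int(z * 65535)
        # keep the nearest point when several project onto the same pixel
        if not occupancy[v, u] or depth < geometry[v, u]:
            geometry[v, u] = depth
            texture[v, u] = rgb
            occupancy[v, u] = 1
    return geometry, texture, occupancy
```

<p>Each of the three maps would then be fed, frame by frame, to a conventional 2D encoder such as HEVC; the occupancy map tells the decoder which pixels of the geometry and texture maps actually carry points.</p>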

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-12"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-12"><div class="vc_column-inner"><div class="wpb_wrapper"><div class="vc_separator wpb_content_element vc_separator_align_center vc_sep_width_100 vc_sep_pos_align_center vc_separator_no_text vc_sep_color_grey" ><span class="vc_sep_holder vc_sep_holder_l"><span  class="vc_sep_line"></span></span><span class="vc_sep_holder vc_sep_holder_r"><span  class="vc_sep_line"></span></span>
</div>
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p><em>Text: Marc Brelot, Romain Bouqueau (<a href="https://vrtogether.eu/consortium/motion-spell/">Motion Spell</a>)</em></p>

		</div>
	</div>

	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p>Come and follow us in this VR journey with <a href="http://www.i2cat.net/en" target="_blank" rel="noopener">i2CAT</a>, <a href="https://www.cwi.nl/" target="_blank" rel="noopener">CWI</a>, <a href="https://www.tno.nl/en/" target="_blank" rel="noopener">TNO</a>, <a href="https://www.certh.gr/root.en.aspx" target="_blank" rel="noopener">CERTH</a>, <a href="http://www.artanim.ch/" target="_blank" rel="noopener">Artanim</a>, <a href="https://www.viaccess-orca.com/" target="_blank" rel="noopener">Viaccess-Orca</a>, <a href="https://www.entropystudio.net/" target="_blank" rel="noopener">Entropy Studio</a> and <a href="https://www.gpac-licensing.com/" target="_blank" rel="noopener">Motion Spell</a>.</p>

		</div>
	</div>
</div></div></div></div><section class="vc_section"><div class="vc_row wpb_row vc_row-fluid vc_column-gap-10"><div class="wpb_column vc_column_container vc_col-sm-12"><div class="vc_column-inner"><div class="wpb_wrapper"><div class="vc_row wpb_row vc_inner vc_row-fluid vc_custom_1547799200116 vc_row-has-fill vc_row-o-content-middle vc_row-flex"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_center">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="75" height="50" src="https://vrtogether.eu/wp-content/uploads/2019/01/EU_flag_75px.png" class="vc_single_image-img attachment-thumbnail" alt="" loading="lazy" /></div>
		</figure>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p>This project has been funded by the European Commission as part of the H2020 program, under the grant agreement 762111.</p>

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div></div></div></div></div></section><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div>
<p>The post <a rel="nofollow" href="https://vrtogether.eu/2019/10/15/ibc-2019-report-vr-still-leads-the-innovation/">IBC 2019 report: VR still leads the innovation</a> appeared first on <a rel="nofollow" href="https://vrtogether.eu">VRTogether</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>MPEG Meeting #126: Virtual Reality has the wind in its sails</title>
		<link>https://vrtogether.eu/2019/05/31/mpeg-meeting-126-virtual-reality-has-the-wind-in-its-sails/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=mpeg-meeting-126-virtual-reality-has-the-wind-in-its-sails</link>
		
		<dc:creator><![CDATA[Motion Spell]]></dc:creator>
		<pubDate>Fri, 31 May 2019 08:07:53 +0000</pubDate>
				<category><![CDATA[News]]></category>
		<guid isPermaLink="false">https://vrtogether.eu/?p=1478</guid>

					<description><![CDATA[<p>The post <a rel="nofollow" href="https://vrtogether.eu/2019/05/31/mpeg-meeting-126-virtual-reality-has-the-wind-in-its-sails/">MPEG Meeting #126: Virtual Reality has the wind in its sails</a> appeared first on <a rel="nofollow" href="https://vrtogether.eu">VRTogether</a>.</p>
]]></description>
										<content:encoded><![CDATA[<div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p><span style="font-weight: 400;">VR-Together is an ambitious project aiming at pushing the boundaries of Social VR. This requires partners to keep up to date and participate in <strong>research and standardization activities</strong>.</span></p>
<p><span style="font-weight: 400;">VR-Together project partner <strong>Motion Spell</strong> attended the latest <a href="https://mpeg.chiariglione.org/" target="_blank" rel="noopener"><strong>MPEG</strong></a> meeting (#126) in Geneva. Following the VR and Immersive Media activities there, Motion Spell highlighted many developments in video standardization that are quite relevant to the VR-Together project: MPEG-H, MPEG-I VVC, MPEG-5, and MPEG-I Immersive Video.</span></p>

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_left">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="668" height="459" src="https://vrtogether.eu/wp-content/uploads/2019/05/05.png" class="vc_single_image-img attachment-full" alt="" loading="lazy" srcset="https://vrtogether.eu/wp-content/uploads/2019/05/05.png 668w, https://vrtogether.eu/wp-content/uploads/2019/05/05-300x206.png 300w, https://vrtogether.eu/wp-content/uploads/2019/05/05-410x282.png 410w, https://vrtogether.eu/wp-content/uploads/2019/05/05-100x69.png 100w, https://vrtogether.eu/wp-content/uploads/2019/05/05-275x189.png 275w" sizes="(max-width: 668px) 100vw, 668px" /></div>
		</figure>
	</div>
</div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-12"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<h4>Video</h4>
<p><span style="font-weight: 400;">In the video coding area, activity is intense, with MPEG currently developing specifications for three standards: MPEG-H, MPEG-I VVC and MPEG-5.</span></p>
<ul>
<li style="font-weight: 400;"><span style="font-weight: 400;"><strong>MPEG-H</strong>: In Part 2 – High Efficiency Video Coding, the 4th edition specifies a new profile of HEVC which enables encoding of single-colour-plane video with some restrictions on bits per sample, and which includes additional Supplemental Enhancement Information (SEI) messages.</span></li>
<li style="font-weight: 400;"><strong>MPEG-I</strong>: In Part 3 – Versatile Video Coding, jointly developed with VCEG, MPEG is working on the video compression standard that will follow HEVC. This new standard, which appears to be perceptually 50% better than HEVC, provides a high-level syntax allowing, for example, access to sub-parts of a picture, which is needed for future applications, along with scalability and gradual decoding refresh (GDR), widely used for game streaming. Many other features will also be provided: coded picture regions, header info, parameter sets, access mechanisms, reference picture signaling, buffer management, capability signaling and sub-profile signaling. A Committee Draft is expected in July 2019, and VVC standardization could reach the FDIS stage in July 2020 for the core compression engine.</li>
<li style="font-weight: 400;"><span style="font-weight: 400;"><strong>MPEG-5</strong>: this set of technologies is still being heavily discussed, but from the Calls for Proposals (CfP) MPEG has already obtained all the technologies necessary to develop standards with the intended functionality and performance. </span>
<ul>
<li style="font-weight: 400;"><span style="font-weight: 400;">Part 1 &#8211; Essential Video Coding (EVC) will specify a video codec with two layers: layer 1 improves significantly over AVC but performs significantly below HEVC, and layer 2 improves significantly over HEVC but performs significantly below VVC. A working draft (WD) was submitted during the MPEG meeting. An evaluation of the AV1 reference software against EVC has started: EVC seems to be a bit faster for the same quality. </span></li>
<li style="font-weight: 400;"><span style="font-weight: 400;">Part 2 &#8211; Low Complexity Video Coding Enhancements (mainly pushed by V-Nova, with improvements from Divideon) will specify a data stream structure defined by two component streams: stream 1 is decodable by a hardware decoder, and stream 2 can be decoded in software with sustainable power consumption. Stream 2 provides new features such as extending the compression capability of existing codecs and lowering encoding and decoding complexity for on-demand and live streaming applications. This new Low Complexity Enhancement Video Coding (LCEVC) standard aims at bridging the gap between two successive generations of codecs by providing a codec-agnostic extension to existing video codecs that improves coding efficiency, can be readily deployed via software upgrade, and keeps power consumption sustainable. The planned timeline foresees a CD in October 2019 and an FDIS a year later. </span></li>
</ul>
</li>
</ul>
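<p>The LCEVC layering principle can be illustrated with a small Python sketch. This is a conceptual model, not the actual bitstream syntax: <code>base_codec_encode</code> and <code>base_codec_decode</code> are hypothetical stand-ins for any existing codec (AVC, HEVC, ...), and the enhancement layer is simply the residual between the full-resolution frame and the upscaled base reconstruction.</p>

```python
import numpy as np

def lcevc_style_encode(frame, base_codec_encode, base_codec_decode):
    """Split a frame into a base-codec stream (downscaled picture) and a
    lightweight enhancement layer (the residual restoring full resolution)."""
    small = frame[::2, ::2]                    # downscale 2x (nearest-neighbour)
    base_bitstream = base_codec_encode(small)  # any existing codec handles this
    base_recon = base_codec_decode(base_bitstream)
    upscaled = np.repeat(np.repeat(base_recon, 2, axis=0), 2, axis=1)
    residual = frame.astype(np.int16) - upscaled.astype(np.int16)
    return base_bitstream, residual

def lcevc_style_decode(base_bitstream, residual, base_codec_decode):
    """Reconstruct the full-resolution frame: upscale the base, add residual."""
    base_recon = base_codec_decode(base_bitstream)
    upscaled = np.repeat(np.repeat(base_recon, 2, axis=0), 2, axis=1)
    return (upscaled.astype(np.int16) + residual).astype(np.uint8)
```

<p>With a lossless base codec the reconstruction is exact; with a lossy one, the residual absorbs the base codec's error, which is the sense in which the enhancement layer "bridges" codec generations.</p>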

		</div>
	</div>

	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p>Since mid-2017, MPEG has been working on MPEG-I (Coded Representation of Immersive Media), which targets future immersive applications. The goal of this new standard is to enable various forms of audio-visual immersion, including panoramic video with 2D and 3D audio, with various degrees of true 3D visual perception (leaning toward 6 degrees of freedom). MPEG evaluated the responses to the Call for Proposals and started a new project on Metadata for Immersive Video, linked to the Three Degrees of Freedom Plus (3DoF+) feature. This aspect may be very relevant for the VRT project to follow.</p>
<p>Support for 360-degree video, also called omnidirectional video, has been standardized in the MPEG-I Part 2: <strong>Omnidirectional Media Format</strong> (OMAF; ISO/IEC 23090-2) and <strong>Supplemental Enhancement Information (SEI) messages for High Efficiency Video Coding</strong> (HEVC; ISO/IEC 23008-2). These standards can be used for delivering immersive visual content. However, rendering flat 360-degree video may generate visual discomfort when objects close to the viewer are rendered. The interactive parallax feature of Three Degrees of Freedom Plus (3DoF+) will provide viewers with visual content that more closely mimics natural vision, but within a limited range of viewer motion. A typical 3DoF+ use case is a user sitting on a chair (or similar position) looking at stereoscopic omnidirectional virtual reality (VR) content on a head mounted display (HMD) with the capability to move her head in any direction.</p>
<p>At its 126th meeting, MPEG received five responses to the Call for Proposals (CfP) on 3DoF+ Visual. Subjective evaluations showed that adding the interactive motion parallax to 360-degree video will be possible. Based on the subjective and objective evaluation, a new project was launched, which will be named Metadata for Immersive Video. A first version of a Working Draft (WD) and corresponding Test Model (TM) were designed to combine technical aspects from multiple responses to the call. The current schedule for the project anticipates Final Draft International Standard (FDIS) of ISO/IEC 23090-7 Immersive Metadata in July 2020.</p>
<p>Besides the MPEG 3DoF+ Visual standardization process, one can mention a collaboration between Facebook and RED Digital Cinema, started last year, that led to the first studio-ready camera system for immersive 3DoF+ storytelling, based on an end-to-end solution for 3D and 360 video capture: the <a href="https://facebook360.fb.com/2018/09/26/film-the-future-with-red-and-facebook-360/" target="_blank" rel="noopener"><strong>Manifold camera</strong></a>. Manifold is a single product that redefines immersive cinematography with an all-in-one capture and distribution framework, giving creative professionals complete ownership of their 3D video projects, from conception to curtain call. For audiences, this means total narrative immersion in anything shot on the new camera system and viewed through 3DoF VR headsets. The diagram below shows the processing pipeline of this camera system:</p>

		</div>
	</div>

	<div  class="wpb_single_image wpb_content_element vc_align_left">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="1200" height="495" src="https://vrtogether.eu/wp-content/uploads/2019/05/07_b.jpg" class="vc_single_image-img attachment-full" alt="" loading="lazy" srcset="https://vrtogether.eu/wp-content/uploads/2019/05/07_b.jpg 1200w, https://vrtogether.eu/wp-content/uploads/2019/05/07_b-300x124.jpg 300w, https://vrtogether.eu/wp-content/uploads/2019/05/07_b-768x317.jpg 768w, https://vrtogether.eu/wp-content/uploads/2019/05/07_b-1024x422.jpg 1024w, https://vrtogether.eu/wp-content/uploads/2019/05/07_b-700x289.jpg 700w, https://vrtogether.eu/wp-content/uploads/2019/05/07_b-410x169.jpg 410w, https://vrtogether.eu/wp-content/uploads/2019/05/07_b-100x41.jpg 100w, https://vrtogether.eu/wp-content/uploads/2019/05/07_b-275x113.jpg 275w" sizes="(max-width: 1200px) 100vw, 1200px" /></div>
		</figure>
	</div>

	<div  class="wpb_single_image wpb_content_element vc_align_left">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="790" height="445" src="https://vrtogether.eu/wp-content/uploads/2019/05/09.jpg" class="vc_single_image-img attachment-full" alt="" loading="lazy" srcset="https://vrtogether.eu/wp-content/uploads/2019/05/09.jpg 790w, https://vrtogether.eu/wp-content/uploads/2019/05/09-300x169.jpg 300w, https://vrtogether.eu/wp-content/uploads/2019/05/09-768x433.jpg 768w, https://vrtogether.eu/wp-content/uploads/2019/05/09-700x394.jpg 700w, https://vrtogether.eu/wp-content/uploads/2019/05/09-410x231.jpg 410w, https://vrtogether.eu/wp-content/uploads/2019/05/09-100x56.jpg 100w, https://vrtogether.eu/wp-content/uploads/2019/05/09-275x155.jpg 275w" sizes="(max-width: 790px) 100vw, 790px" /></div>
		</figure>
	</div>

	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<h4>Next MPEG meeting</h4>
<p><span style="font-weight: 400;">The next meeting (127th) will be held on July 8-12, 2019 in Gothenburg, Sweden.</span></p>

		</div>
	</div>
<div class="vc_separator wpb_content_element vc_separator_align_center vc_sep_width_100 vc_sep_pos_align_center vc_separator_no_text vc_sep_color_grey" ><span class="vc_sep_holder vc_sep_holder_l"><span  class="vc_sep_line"></span></span><span class="vc_sep_holder vc_sep_holder_r"><span  class="vc_sep_line"></span></span>
</div>
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p>Come and follow us in this VR journey with <a href="http://www.i2cat.net/en" target="_blank" rel="noopener">i2CAT</a>, <a href="https://www.cwi.nl/" target="_blank" rel="noopener">CWI</a>, <a href="https://www.tno.nl/en/" target="_blank" rel="noopener">TNO</a>, <a href="https://www.certh.gr/root.en.aspx" target="_blank" rel="noopener">CERTH</a>, <a href="http://www.artanim.ch/" target="_blank" rel="noopener">Artanim</a>, <a href="https://www.viaccess-orca.com/" target="_blank" rel="noopener">Viaccess-Orca</a>, <a href="https://www.entropystudio.net/" target="_blank" rel="noopener">Entropy Studio</a> and <a href="https://www.gpac-licensing.com/" target="_blank" rel="noopener">Motion Spell</a>.</p>

		</div>
	</div>
</div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-12"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><section class="vc_section"><div class="vc_row wpb_row vc_row-fluid vc_column-gap-10"><div class="wpb_column vc_column_container vc_col-sm-12"><div class="vc_column-inner"><div class="wpb_wrapper"><div class="vc_row wpb_row vc_inner vc_row-fluid vc_custom_1547799200116 vc_row-has-fill vc_row-o-content-middle vc_row-flex"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_center">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="75" height="50" src="https://vrtogether.eu/wp-content/uploads/2019/01/EU_flag_75px.png" class="vc_single_image-img attachment-thumbnail" alt="" loading="lazy" /></div>
		</figure>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p>This project has been funded by the European Commission as part of the H2020 program, under the grant agreement 762111.</p>

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div></div></div></div></div></section>
<p>The post <a rel="nofollow" href="https://vrtogether.eu/2019/05/31/mpeg-meeting-126-virtual-reality-has-the-wind-in-its-sails/">MPEG Meeting #126: Virtual Reality has the wind in its sails</a> appeared first on <a rel="nofollow" href="https://vrtogether.eu">VRTogether</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>VR strikes back at MPEG</title>
		<link>https://vrtogether.eu/2018/10/02/vr-strikes-back-mpeg/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=vr-strikes-back-mpeg</link>
		
		<dc:creator><![CDATA[Motion Spell]]></dc:creator>
		<pubDate>Tue, 02 Oct 2018 14:38:48 +0000</pubDate>
				<category><![CDATA[Events]]></category>
		<category><![CDATA[News]]></category>
		<category><![CDATA[Compression]]></category>
		<category><![CDATA[MPEG]]></category>
		<category><![CDATA[Point Cloud]]></category>
		<category><![CDATA[VR]]></category>
		<guid isPermaLink="false">http://vrtogether.eu/?p=727</guid>

					<description><![CDATA[<p>VRTogether project partners Motion Spell, TNO and CWI participated at the last MPEG meetings (#122 and #123) in San Diego and Ljubljana with the intention of getting brand-new feedback around Virtual Reality. It appears that VR activities blossom in many fields:  MPEG-I, OMAF, Point clouds, NBMP, MPEG-MORE. The long-term trend shows that VR is coming [&#8230;]</p>
<p>The post <a rel="nofollow" href="https://vrtogether.eu/2018/10/02/vr-strikes-back-mpeg/">VR strikes back at MPEG</a> appeared first on <a rel="nofollow" href="https://vrtogether.eu">VRTogether</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>VRTogether project partners <strong>Motion Spell, TNO and CWI</strong> participated at the last <strong>MPEG meetings</strong> (<a href="http://2018isoiec.regstep.com/home/page/index">#122</a> and <a href="http://www.kcmweb.de/conferences/MPEG123_alias/www.kcmweb.de/conferences/mpeg123/">#123</a>) in San Diego and Ljubljana with the intention of getting brand-new feedback around Virtual Reality. It appears that VR activities blossom in many fields:  MPEG-I, OMAF, Point clouds, NBMP, MPEG-MORE. The long-term trend shows that VR is coming back on the scene and will soon catch up onto the market.</p>
<p>This article focuses on Point Clouds and MPEG-MORE. The <a href="http://vrtogether.eu/2018/07/18/comeback-vr-mpeg/">first part of this article</a> covers all the other technologies.</p>
<h4>Point Cloud Compression</h4>
<p><a href="https://mpeg.chiariglione.org/standards/mpeg-i">MPEG-I</a> targets<strong> future immersive applications</strong>. Part 5 of this standard specifies <strong>Point Cloud Compression</strong> (PCC).</p>
<p>A point cloud is defined as a set of points in 3D space. Each point is identified by its Cartesian coordinates (x, y, z), referred to as spatial attributes, as well as other attributes such as a color, a normal, or a reflectance value. There are no restrictions on the attributes associated with each point.</p>
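<p>As a minimal illustration of this definition, the sketch below models a point cloud in plain Python; the class and field names are our own, not taken from the PCC specification.</p>

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

# Each point carries spatial attributes (x, y, z) plus optional
# extra attributes such as an RGB color. Names are illustrative only.

@dataclass
class Point:
    x: float
    y: float
    z: float
    color: Optional[Tuple[int, int, int]] = None  # (R, G, B)

@dataclass
class PointCloud:
    points: List[Point] = field(default_factory=list)

# A red point at the origin and a green point at (1, 2, 3)
pc = PointCloud([Point(0, 0, 0, (255, 0, 0)),
                 Point(1, 2, 3, (0, 255, 0))])
print(len(pc.points))  # 2
```

A photorealistic capture would hold millions of such points per frame, which is why compression is essential.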
<p>Point clouds allow representing <strong>volumetric signals</strong>. Because of their simplicity and versatility, they are important for emerging AR and VR applications. Point clouds are usually captured using multiple RGB plus depth sensors. A point cloud can contain millions of points in order to create a photorealistic reconstruction of an object. Compression of point clouds is essential to efficiently store and transmit volumetric data for applications such as tele-immersive video and free-viewpoint sports replays, as well as for innovative medical and robotic applications.</p>
<p>MPEG has a separate activity on point cloud compression: in April 2017 MPEG issued a Call for Proposals (CfP) on PCC technologies, seeking compression proposals in three categories:</p>
<ol>
<li>Static frames</li>
<li>Dynamic sequences</li>
<li>Dynamically acquired/fused point clouds</li>
</ol>
<p>Leading technology companies responded to the CfP, and the proposals were assessed in October 2017. In addition to objective metrics, <a href="https://mpeg.chiariglione.org/meetings/120">each proposal was also evaluated through subjective tests</a>, performed at GBTech and CWI. The winning projects were selected as “Test Models” for the next step of the standardization activity.</p>
<p>For the compression of dynamic sequences, it was found that compression performance can be significantly improved by <strong>leveraging existing video codecs after performing a 3D to 2D conversion</strong> using a suitable mapping scheme. This also allows the use of hardware acceleration of existing video codecs, which is supported by many current generation GPUs. Thus, synergies with existing hardware and software infrastructure can allow rapid deployment of new immersive experiences.</p>
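<p>A minimal sketch of this 3D-to-2D conversion idea, assuming a simple orthographic projection (the actual test model uses more elaborate patch-based mapping schemes):</p>

```python
# Orthographically project points onto the XY plane, filling a depth
# map and a texture map that an ordinary 2D video codec could then
# compress. This is a toy illustration, not the V-PCC algorithm.

INF = float("inf")

def project_to_maps(points, colors, resolution=64):
    depth = [[INF] * resolution for _ in range(resolution)]
    texture = [[(0, 0, 0)] * resolution for _ in range(resolution)]
    for (x, y, z), c in zip(points, colors):
        u, v = int(x), int(y)
        if 0 <= u < resolution and 0 <= v < resolution and z < depth[v][u]:
            depth[v][u] = z        # keep the point nearest to the camera
            texture[v][u] = c
    return depth, texture

# Two points falling into the same pixel: the nearer one (z = 3.0) wins.
pts = [(10, 20, 5.0), (10, 20, 3.0)]
cols = [(255, 0, 0), (0, 255, 0)]
d, t = project_to_maps(pts, cols)
print(d[20][10], t[20][10])  # 3.0 (0, 255, 0)
```

The resulting depth and texture images can then be fed to a hardware-accelerated 2D codec such as HEVC, which is the synergy the text describes.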
<p><img loading="lazy" class="aligncenter wp-image-728" src="http://vrtogether.eu/wp-content/uploads/2018/10/Point-Cloud-Example.jpg" alt="" width="843" height="321" srcset="https://vrtogether.eu/wp-content/uploads/2018/10/Point-Cloud-Example.jpg 1105w, https://vrtogether.eu/wp-content/uploads/2018/10/Point-Cloud-Example-300x114.jpg 300w, https://vrtogether.eu/wp-content/uploads/2018/10/Point-Cloud-Example-768x293.jpg 768w, https://vrtogether.eu/wp-content/uploads/2018/10/Point-Cloud-Example-1024x390.jpg 1024w, https://vrtogether.eu/wp-content/uploads/2018/10/Point-Cloud-Example-700x267.jpg 700w, https://vrtogether.eu/wp-content/uploads/2018/10/Point-Cloud-Example-410x156.jpg 410w, https://vrtogether.eu/wp-content/uploads/2018/10/Point-Cloud-Example-100x38.jpg 100w, https://vrtogether.eu/wp-content/uploads/2018/10/Point-Cloud-Example-275x105.jpg 275w" sizes="(max-width: 843px) 100vw, 843px" /></p>
<p>Figure 1: An example of a perspective view of a point cloud: the original version on the left, and two views of compressed versions of the same point cloud in the middle and on the right.</p>
<p>After the selection of the Test Models that combine the best performing technologies, the activity focused on the identification and investigation of methods to optimize the Test Models, by performing “Core Experiments”. Examples of Core Experiments include the comparison of different schemes for mapping the texture information from 3D to 2D, the analysis of hybrid codecs that combine 3D geometry compression techniques with traditional block-based video compression strategies, and the use of motion field coding. These Core Experiments are still ongoing.</p>
<p>At the last MPEG meetings, the PCC activity has been particularly crowded, <strong>attracting attention from many industrial partners</strong>. The main activities of the group focused on cross-checking the test models and reviewing the results of the core experiments. In addition, new datasets created by commercial companies (8i, Owlii, Samsung, Fraunhofer) were presented, as well as a proposal to merge two of the Test Models; the goal was to take advantage of 2D compression techniques (HEVC and its successors). A preliminary contribution also explored the delivery and transmission of point clouds based on this approach.</p>
<p>In the VRTogether project, CWI is providing a solution for <a href="https://ieeexplore.ieee.org/document/7434610/">lossy compression of dynamic point clouds</a>, based on the open source software for Point Cloud Compression developed at CWI (available at <a href="https://github.com/cwi-dis/cwi-pcl-codec" target="_blank" rel="noopener">https://github.com/cwi-dis/cwi-pcl-codec</a>). This solution will not be competing in the standardization race, but it serves as an <strong>open source tool to benchmark different solutions and experiment with research ideas</strong>. It is currently being integrated into the VRTogether DASH-based point cloud communication pipeline, which will allow multiple users to see each other's point cloud representations, captured in real time and rendered in the same virtual environment. Part of CWI's research within the VRTogether project will also focus on the design of new objective quality metrics for evaluating point clouds, based on the study of human perception of volumetric signals.</p>
<h4>MPEG-MORE</h4>
<p><strong>MPEG‘s Media Orchestration standard</strong> (also known as MORE: MPEG-B part 13 &#8211; <a href="https://mpeg.chiariglione.org/standards/mpeg-b/media-orchestration">https://mpeg.chiariglione.org/standards/mpeg-b/media-orchestration</a>) has been finalized by the committee, and the final edited version has been submitted to MPEG’s parent body and ISO for one more yes/no ballot followed by publication. This last step is a mere formality. It is hard to estimate when the specification will be published by ISO, since this requires some secretariat work that can take quite a few months. Work on reference content and software continues; it intends to make content available with MORE metadata so that (potential) users of the specification can understand how it works and are assisted in creating implementations.</p>
<p>In the meantime, <strong>Social VR has become more important in MPEG</strong>, and it looks like some of the requirements can be fulfilled by the MORE specification. This notably applies to the simpler forms of Social VR, where images of users are composited into a VR360 experience. This requires both temporal synchronization (multiple users should experience the same scene simultaneously) and spatial coordination (the composited images for all users need to have consistent location and size for the experience to be perceived as realistic and compelling). MORE defines the necessary metadata and protocols for this.</p>
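<p>The sketch below illustrates, in hypothetical Python terms rather than the normative MORE syntax, the two kinds of metadata such a Social VR compositor needs: a shared presentation timestamp for temporal synchronization, and a placement for spatial coordination.</p>

```python
from dataclasses import dataclass

# Illustrative names, not taken from the MORE specification.

@dataclass
class Placement:
    yaw_deg: float     # position of the user image on the 360 sphere
    pitch_deg: float
    width_deg: float   # apparent size of the composited user image

@dataclass
class UserOverlay:
    user_id: str
    presentation_ts_ms: int  # timestamp on the shared timeline
    placement: Placement

overlays = [
    UserOverlay("alice", 120040, Placement(-30.0, 0.0, 25.0)),
    UserOverlay("bob",   120040, Placement( 30.0, 0.0, 25.0)),
]

# Overlays sharing one timestamp are rendered in the same frame, so
# every participant experiences a consistent scene.
same_frame = {o.presentation_ts_ms for o in overlays}
print(len(same_frame))  # 1
```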
<h4>About MPEG</h4>
<p>MPEG is the Moving Picture Experts Group, a group from IEC and ISO which created some of the foundations of the multimedia industry: the MPEG-2 Transport Stream and the MP4 file format, as well as a series of successful codecs in both video (MPEG-2 Video, AVC/H.264) and audio (MP3, AAC). A new generation (MPEG-H) emerged in 2013 with MPEG 3D Audio, HEVC and MMT, followed by other activities in MPEG-I such as Point Cloud Compression and Media Orchestration.</p>
<p>&nbsp;</p>
<h5>Who we are</h5>
<p><a href="http://www.motionspell.com" target="_blank" rel="noopener">Motion Spell</a> is an SME specialized in audio-visual media technologies, created in 2013 in Paris, France. On a conceptual and technical level, Motion Spell will focus on the development of open transmission tools. Furthermore, Motion Spell plans to explore encoding requirements for VR in order to participate in the current standardization efforts and first implementations. Finally, we will also assist on the playback side to ensure the end-to-end workflow is covered.</p>
<p>Come and follow us in this VR journey with <a href="http://www.i2cat.net/en" target="_blank" rel="noopener">i2CAT</a>, <a href="https://www.cwi.nl/" target="_blank" rel="noopener">CWI</a>, <a href="https://www.tno.nl/en/" target="_blank" rel="noopener">TNO</a>, <a href="https://www.certh.gr/root.en.aspx" target="_blank" rel="noopener">CERTH</a>, <a href="http://www.artanim.ch/" target="_blank" rel="noopener">Artanim</a>, <a href="https://www.viaccess-orca.com/" target="_blank" rel="noopener">Viaccess-Orca</a>, <a href="http://www.entropystudio.net/" target="_blank" rel="noopener">Entropy Studio</a>.</p>
<p><img loading="lazy" class="size-full wp-image-380 alignleft" src="http://vrtogether.eu/wp-content/uploads/2018/02/eu-logo.png" alt="" width="226" height="111" srcset="https://vrtogether.eu/wp-content/uploads/2018/02/eu-logo.png 226w, https://vrtogether.eu/wp-content/uploads/2018/02/eu-logo-100x49.png 100w" sizes="(max-width: 226px) 100vw, 226px" /></p>
<p>The post <a rel="nofollow" href="https://vrtogether.eu/2018/10/02/vr-strikes-back-mpeg/">VR strikes back at MPEG</a> appeared first on <a rel="nofollow" href="https://vrtogether.eu">VRTogether</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>The comeback of VR at MPEG</title>
		<link>https://vrtogether.eu/2018/07/18/comeback-vr-mpeg/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=comeback-vr-mpeg</link>
		
		<dc:creator><![CDATA[Motion Spell]]></dc:creator>
		<pubDate>Wed, 18 Jul 2018 07:19:51 +0000</pubDate>
				<category><![CDATA[News]]></category>
		<category><![CDATA[codec]]></category>
		<category><![CDATA[MPEG]]></category>
		<category><![CDATA[standardisation]]></category>
		<category><![CDATA[standards]]></category>
		<guid isPermaLink="false">http://vrtogether.eu/?p=650</guid>

					<description><![CDATA[<p>The general feeling from the MPEG community is that Virtual Reality (VR) made a false start. The Oculus rift’s acquisition (2014) for $2bn created a premature launch of a funding bubble that exploded in early 2017. However, the long-term trend shows that VR is coming back on the scene and will soon catch up into [&#8230;]</p>
<p>The post <a rel="nofollow" href="https://vrtogether.eu/2018/07/18/comeback-vr-mpeg/">The comeback of VR at MPEG</a> appeared first on <a rel="nofollow" href="https://vrtogether.eu">VRTogether</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>The general feeling from the MPEG community is that <strong>Virtual Reality (VR) made a false start</strong>. The <a href="https://www.theguardian.com/technology/2014/jul/22/facebook-oculus-rift-acquisition-virtual-reality" target="_blank" rel="noopener noreferrer">Oculus rift’s acquisition</a> (2014) for $2bn created a premature launch of a funding bubble that exploded in early 2017. However, the long-term trend shows that VR is coming back on the scene and will soon catch up into the market. As a VRTogether project partner, Motion Spell has great expectations about VR. Our participation at the last MPEG meeting (#122) in San Diego confirmed that <strong>activities regarding VR blossom on every field</strong>: MPEG-I, OMAF, Point clouds, NBMP, MPEG-MORE.</p>
<p>&nbsp;</p>
<h4>MPEG</h4>
<p>MPEG is the Moving Picture Experts Group, a group from IEC and ISO that created some of the <strong>foundations of the multimedia industry</strong>: the MPEG-2 Transport Stream and the MP4 file format, as well as a series of successful codecs in both video (MPEG-2 Video, AVC/H.264) and audio (MP3, AAC). A new generation (MPEG-H) emerged in 2013 with MPEG 3D Audio, HEVC and MMT, alongside other activities like MPEG-I (more below).</p>
<p>The <a href="http://gpac.io/" target="_blank" rel="noopener noreferrer">GPAC</a> team and its commercial arm (GPAC Licensing), which is led by <strong>Motion Spell</strong>, are active contributors at MPEG.</p>
<p>MPEG meetings are organized as a set of thematic meeting rooms that represent <strong>different working groups</strong>. Each working group follows its way from requirements to a working draft and then to an international standard. Each MPEG meeting gathers around 500 participants from all over the world.</p>
<p>&nbsp;</p>
<h4>MPEG-I: Coded Representation of Immersive Media</h4>
<p>Since mid-2017, MPEG has been working on MPEG-I. MPEG-I targets <a href="https://mpeg.chiariglione.org/standards/mpeg-i" target="_blank" rel="noopener noreferrer">future immersive applications</a>. The goal of this new standard is to enable various <strong>forms of audio-visual immersion</strong>, including panoramic video with 2D and 3D audio, with various degrees of true 3D visual perception (leaning toward 6 degrees of freedom). The standard has already reached a level of maturity that we must<strong> take into account in VRTogether</strong>.</p>
<p>MPEG-I is a set of <strong>standards</strong> defining the future of media, which currently comprises eight parts:<br />
• Part 1: Requirements &#8211; Technical Report on Immersive Media<br />
• Part 2: OMAF &#8211; Omnidirectional Media Format<br />
• Part 3: Versatile Video Coding<br />
• Part 4: Immersive Audio Coding<br />
• Part 5: Point Cloud Compression<br />
• Part 6: Immersive Media Metrics<br />
• Part 7: Immersive Media Metadata<br />
• Part 8: NBMP &#8211; Network-Based Media Processing</p>
<p>In this article, we will focus on parts 1, 2, 3 and 8.</p>
<p>&nbsp;</p>
<h4>Architecture</h4>
<p>MPEG-I exposes a <strong>set</strong> of architectures rather than just one:<a href="http://vrtogether.eu/wp-content/uploads/2018/07/MPEG-VR.png"><img loading="lazy" class="aligncenter wp-image-652" src="http://vrtogether.eu/wp-content/uploads/2018/07/MPEG-VR.png" alt="" width="742" height="499" srcset="https://vrtogether.eu/wp-content/uploads/2018/07/MPEG-VR.png 1877w, https://vrtogether.eu/wp-content/uploads/2018/07/MPEG-VR-300x202.png 300w, https://vrtogether.eu/wp-content/uploads/2018/07/MPEG-VR-768x516.png 768w, https://vrtogether.eu/wp-content/uploads/2018/07/MPEG-VR-1024x688.png 1024w, https://vrtogether.eu/wp-content/uploads/2018/07/MPEG-VR-700x470.png 700w, https://vrtogether.eu/wp-content/uploads/2018/07/MPEG-VR-410x275.png 410w, https://vrtogether.eu/wp-content/uploads/2018/07/MPEG-VR-100x67.png 100w, https://vrtogether.eu/wp-content/uploads/2018/07/MPEG-VR-275x185.png 275w" sizes="(max-width: 742px) 100vw, 742px" /></a></p>
<h4>Part 1: Requirements</h4>
<p>MPEG-I requirements are divided into <strong>phases</strong>:</p>
<ul>
<li>Phase 1a: Captured image based multi-view encoding/decoding (currently finalizing standardization)</li>
<li>Phase 1b: Video-based multi-view encoding/decoding. Mostly 3DoF and some 3DoF+. VR and interactivity activities are likely to be split.</li>
<li>Phase 2: Video + additional data (depth, point cloud) based multi-view encoding/decoding. This phase takes into account 6DoF with limited freedom (Omnidirectional 6DoF, Windowed 6DoF) and synthesizes points of view from fixed cameras.</li>
</ul>
<p>&nbsp;</p>
<h4>Part 2: OMAF</h4>
<p>OMAF is a profile explaining <strong>how to use the MPEG tools with omnidirectional media</strong>. OMAF kicked off its activity towards a 2nd edition enabling support for 3DoF+ and social VR, with a plan to reach Committee Draft (CD) in October 2018.</p>
<p>Additionally, a test framework has been proposed which allows assessing the performance of various CMAF tools. Its main focus is on video, but MPEG’s audio subgroup has a similar framework to enable subjective testing. It would be interesting to see these two frameworks combined in one way or another.</p>
<p>OMAF implies new ISOBMFF/MP4 boxes and models. This is very interesting for VRTogether to follow, but it might be too complex to implement at this stage of the specification. In addition, OMAF targets the newest technologies, while VRTogether wants to deploy with existing ones. There is room for MPEG contributions in this area.</p>
<p>OMAF also specifies some support for timed metadata (like timed text for subtitles), which has to be followed as well. Our current workflow implies many static parameters on the capture and rendering sides that may become dynamic.</p>
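<p>To make the ISOBMFF/MP4 box structure concrete: every box starts with a 32-bit big-endian size and a 4-character type (a size of 1 signals a 64-bit largesize; 0 means the box runs to the end of the file), and OMAF's new boxes follow this same basic layout. A minimal top-level box walker can be sketched as follows; the synthetic test data is our own, not a real OMAF file.</p>

```python
import struct
import io

def iter_boxes(stream):
    """Walk top-level ISOBMFF/MP4 boxes, yielding (type, size) pairs."""
    while True:
        header = stream.read(8)
        if len(header) < 8:
            return
        size, btype = struct.unpack(">I4s", header)
        payload_offset = 8
        if size == 1:                     # 64-bit largesize follows
            size = struct.unpack(">Q", stream.read(8))[0]
            payload_offset = 16
        yield btype.decode("ascii"), size
        if size == 0:                     # box extends to end of file
            return
        stream.seek(size - payload_offset, io.SEEK_CUR)

# A tiny synthetic file: a 16-byte 'ftyp' box followed by an empty 'free' box
data = struct.pack(">I4s", 16, b"ftyp") + b"isom\x00\x00\x00\x01" \
     + struct.pack(">I4s", 8, b"free")
print(list(iter_boxes(io.BytesIO(data))))  # [('ftyp', 16), ('free', 8)]
```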
<p>&nbsp;</p>
<h4>Part 3: Versatile Video Coding</h4>
<p>This part focuses on immersive video coding, which will be the successor of HEVC. The name <strong>VVC</strong> (Versatile Video Coding) was chosen by a show of hands at the meeting. Current experiments show that the VVC codec can outperform HEVC by 40%. The release of VVC is planned for October 2020.</p>
<p>&nbsp;</p>
<h4>Part 8: Network-Based Media Processing</h4>
<p>Network-Based Media Processing (NBMP) is a framework that allows service providers and end-users to <strong>describe media processing operations</strong> that are to be performed by the network. NBMP describes the composition of network-based media processing services out of a set of network-based media processing functions and makes these NBMP services accessible through Application Programming Interfaces (APIs).</p>
<p>Motion Spell, a partner of the VRTogether project, decided to attend some <a href="https://mpeg.chiariglione.org/standards/exploration/network-based-media-processing" target="_blank" rel="noopener noreferrer">sessions on NBMP</a> during the 122nd MPEG meeting, since this new activity that allows <strong>building media workflows</strong> has generally been ignored so far. One of the main use-cases covered by the output of the previous MPEG meeting in Gwangju is the <strong>ingest of media for distribution</strong>. Unified Streaming, which co-chairs this activity, wants to standardize ingest. This is very exciting for VRTogether; Motion Spell will be interested in implementing an ingest component. For example, it could be useful to compare our low-latency ingest implementation with the status of this standardization effort by the end of the project.</p>
<p><a href="https://www.tno.nl/en/" target="_blank" rel="noopener noreferrer">TNO</a>, also a partner of the VRTogether project, proposed a contribution that puts the focus on <strong>extending scene description for 3D environment</strong> in the scope of NBMP. This contribution also exposed a tentative list of NBMP functions that could be useful for Social VR:</p>
<ul>
<li>Background removal.</li>
<li>User detection and scaling.</li>
<li>Room composition without users.</li>
<li>Room composition with users.</li>
<li>Low-latency 3DOF encoding.</li>
<li>Network-based media synchronization.</li>
<li>3D audio mixing functionality.</li>
<li>Lip-sync compensation.</li>
</ul>
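<p>The composition of such functions into a workflow can be sketched as follows. The function names and the plain-Python chaining are illustrative only: the NBMP activity describes workflows through formal documents and network APIs, not local function calls.</p>

```python
# Hypothetical Social VR processing steps, named after the list above.
# Each takes a frame (here a plain dict) and returns a processed frame.

def background_removal(frame):
    return {**frame, "background": None}

def user_detection_and_scaling(frame):
    return {**frame, "users_scaled": True}

def room_composition(frame):
    return {**frame, "composited": True}

def run_workflow(frame, functions):
    """Chain processing functions, as a network-hosted workflow would."""
    for fn in functions:
        frame = fn(frame)
    return frame

out = run_workflow({"id": 1}, [background_removal,
                               user_detection_and_scaling,
                               room_composition])
print(out["composited"])  # True
```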
<p>&nbsp;</p>
<h5>Who we are</h5>
<p><a href="http://www.motionspell.com" target="_blank" rel="noopener noreferrer">Motion Spell</a> is an SME specialized in audio-visual media technologies, created in 2013 in Paris, France. On a conceptual and technical level, Motion Spell will focus on the development of open transmission tools. Furthermore, Motion Spell plans to explore encoding requirements for VR in order to participate in the current standardization efforts and first implementations. Finally, we will also assist on the playback side to ensure the end-to-end workflow is covered.</p>
<p>Come and follow us in this VR journey with <a href="http://www.i2cat.net/en" target="_blank" rel="noopener noreferrer">i2CAT</a>, <a href="https://www.cwi.nl/" target="_blank" rel="noopener noreferrer">CWI</a>, <a href="https://www.tno.nl/en/" target="_blank" rel="noopener noreferrer">TNO</a>, <a href="https://www.certh.gr/root.en.aspx" target="_blank" rel="noopener noreferrer">CERTH</a>, <a href="http://www.artanim.ch/" target="_blank" rel="noopener noreferrer">Artanim</a>, <a href="https://www.viaccess-orca.com/" target="_blank" rel="noopener noreferrer">Viaccess-Orca</a>, <a href="http://www.entropystudio.net/" target="_blank" rel="noopener noreferrer">Entropy Studio</a>.</p>
<p><img loading="lazy" class="size-full wp-image-380 alignleft" src="http://vrtogether.eu/wp-content/uploads/2018/02/eu-logo.png" alt="" width="226" height="111" srcset="https://vrtogether.eu/wp-content/uploads/2018/02/eu-logo.png 226w, https://vrtogether.eu/wp-content/uploads/2018/02/eu-logo-100x49.png 100w" sizes="(max-width: 226px) 100vw, 226px" /></p>
<p><em>This project has been funded by the European Commission as part of the H2020 program, under the grant agreement 762111.</em></p>
<p>&nbsp;</p>
<p>The post <a rel="nofollow" href="https://vrtogether.eu/2018/07/18/comeback-vr-mpeg/">The comeback of VR at MPEG</a> appeared first on <a rel="nofollow" href="https://vrtogether.eu">VRTogether</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>VR at NAB show: the calm before the storm?</title>
		<link>https://vrtogether.eu/2018/07/13/vr-nab-show-calm-before-storm/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=vr-nab-show-calm-before-storm</link>
		
		<dc:creator><![CDATA[Motion Spell]]></dc:creator>
		<pubDate>Fri, 13 Jul 2018 09:11:30 +0000</pubDate>
				<category><![CDATA[Events]]></category>
		<category><![CDATA[Motion Spell]]></category>
		<category><![CDATA[Nab Show]]></category>
		<category><![CDATA[VR Market]]></category>
		<guid isPermaLink="false">http://vrtogether.eu/?p=601</guid>

					<description><![CDATA[<p>The 2018 annual NAB Show pulled in more than 100,000 attendees to the Las Vegas Convention Center to see more than 1800 exhibiting companies. As a VRTogether project partner, Motion Spell went to the show to probe the innovation regarding Virtual Reality (VR). The show was quite packed, and contrary to IBC, it seems like [&#8230;]</p>
<p>The post <a rel="nofollow" href="https://vrtogether.eu/2018/07/13/vr-nab-show-calm-before-storm/">VR at NAB show: the calm before the storm?</a> appeared first on <a rel="nofollow" href="https://vrtogether.eu">VRTogether</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>The 2018 annual <strong><a href="https://www.nabshow.com/">NAB Show</a></strong> pulled in more than 100,000 attendees to the Las Vegas Convention Center to see more than 1800 exhibiting companies. As a VRTogether project partner, <strong>Motion Spell</strong> went to the show to probe the innovation regarding Virtual Reality (VR). The show was quite packed, and contrary to IBC, it seems like the attendance was better than the previous year. However, the fact was that the number of booths related to <strong>VR decreased dramatically</strong> from last year as if VR seems to be out of the professional video scope in 2018. Virtual Reality has certainly made a false start, but will it come back on the scene? Are we now in the middle of the calm before the storm?</p>
<p><img loading="lazy" class="aligncenter wp-image-624" src="http://vrtogether.eu/wp-content/uploads/2018/07/Imagen1.png" alt="" width="813" height="251" srcset="https://vrtogether.eu/wp-content/uploads/2018/07/Imagen1.png 940w, https://vrtogether.eu/wp-content/uploads/2018/07/Imagen1-300x93.png 300w, https://vrtogether.eu/wp-content/uploads/2018/07/Imagen1-768x237.png 768w, https://vrtogether.eu/wp-content/uploads/2018/07/Imagen1-700x216.png 700w, https://vrtogether.eu/wp-content/uploads/2018/07/Imagen1-410x126.png 410w, https://vrtogether.eu/wp-content/uploads/2018/07/Imagen1-100x31.png 100w, https://vrtogether.eu/wp-content/uploads/2018/07/Imagen1-275x85.png 275w" sizes="(max-width: 813px) 100vw, 813px" /> <img loading="lazy" class="aligncenter wp-image-625" src="http://vrtogether.eu/wp-content/uploads/2018/07/Imagen2.jpg" alt="" width="813" height="249" srcset="https://vrtogether.eu/wp-content/uploads/2018/07/Imagen2.jpg 946w, https://vrtogether.eu/wp-content/uploads/2018/07/Imagen2-300x92.jpg 300w, https://vrtogether.eu/wp-content/uploads/2018/07/Imagen2-768x235.jpg 768w, https://vrtogether.eu/wp-content/uploads/2018/07/Imagen2-700x215.jpg 700w, https://vrtogether.eu/wp-content/uploads/2018/07/Imagen2-410x126.jpg 410w, https://vrtogether.eu/wp-content/uploads/2018/07/Imagen2-100x31.jpg 100w, https://vrtogether.eu/wp-content/uploads/2018/07/Imagen2-275x84.jpg 275w" sizes="(max-width: 813px) 100vw, 813px" /></p>
<p>&nbsp;</p>
<h4>No actual innovations but many improvements</h4>
<p>Regarding VR and 360° topics, we can mention several interesting booths where improvements have been made since last year. The Chinese company <strong>Kandao Technology</strong> showed its capture solution “VR Obsidian” with many enhancements over the previous version. We discovered an update of the Kandao live software, now able to handle 8K 360° video live, as well as their new 6-DoF volumetric solution based on the depth map generated by their 3D 360° camera. This last solution could be interesting to follow for the VRTogether project, since it allows rebuilding the 3D scene from any viewpoint, which pushes the VR experience to a higher level.</p>
<p>Regarding 360° capture, we can also mention the current innovations of the Fraunhofer Heinrich Hertz Institute (HHI) in the field of immersive imaging technologies. They presented their enhanced <strong>Omnicam-360</strong>, which enables 10K 360-degree video capture. The live encoding is tile-based and uses the Fraunhofer HHI HEVC encoder, compliant with the MPEG-OMAF Viewport-Dependent Media Profile.</p>
<p>We can also highlight the 8ball product from <strong>HEAR 360+.</strong> It is a new immersive audio technology that includes a patented omni-binaural microphone and companion software, providing a seamless workflow from on-set recording to content delivery. The 8ball solves the challenges that cinematic VR filmmakers face when attempting to create truly immersive spatial audio.</p>
<p>The general feeling is that most of the VR innovation was about <strong>unified experiences</strong>: immersion combining audio, low latency, precision, and wide fields of view.</p>
<p>This feeling converges with VRTogether&#8217;s vision: VR, AR and immersive audio are going to fundamentally change how people communicate and create an <strong>intense emotional impact</strong>.</p>
<p><img loading="lazy" class="aligncenter wp-image-633" src="http://vrtogether.eu/wp-content/uploads/2018/07/Imagen3.png" alt="" width="813" height="202" srcset="https://vrtogether.eu/wp-content/uploads/2018/07/Imagen3.png 1496w, https://vrtogether.eu/wp-content/uploads/2018/07/Imagen3-300x74.png 300w, https://vrtogether.eu/wp-content/uploads/2018/07/Imagen3-768x190.png 768w, https://vrtogether.eu/wp-content/uploads/2018/07/Imagen3-1024x254.png 1024w, https://vrtogether.eu/wp-content/uploads/2018/07/Imagen3-700x174.png 700w, https://vrtogether.eu/wp-content/uploads/2018/07/Imagen3-410x102.png 410w, https://vrtogether.eu/wp-content/uploads/2018/07/Imagen3-100x25.png 100w, https://vrtogether.eu/wp-content/uploads/2018/07/Imagen3-275x68.png 275w" sizes="(max-width: 813px) 100vw, 813px" /></p>
<h4>Future Park zone</h4>
<p>We can also report exciting things from the Futures Park zone. Two projects there, <strong>ImmersiaTV and ConvergenceTV</strong>, are linked to partners of the VRTogether project:<br />
ImmersiaTV features i2CAT, and ConvergenceTV (which won the NAB Innovation Award 2018) features <strong>Motion Spell</strong>.</p>
<p>Regarding ConvergenceTV, there was a nice demonstration of ultra-low-latency MPEG-DASH using the Motion Spell / GPAC Licensing Signals platform. Professional players were stable at around 500 ms of latency.</p>
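<p>As a purely illustrative aside (the numbers below are hypothetical, not measurements from the demo), the end-to-end latency of chunked low-latency DASH can be roughly budgeted as the player&#8217;s chunk buffer plus network delay:</p>

```python
def dash_latency_ms(chunk_ms: int, network_ms: int, buffer_chunks: int) -> int:
    """Rough end-to-end latency budget for chunked low-latency DASH:
    the player holds a few chunks of buffer, plus the network delay.
    All values are illustrative, not measured figures."""
    return buffer_chunks * chunk_ms + network_ms

# Illustrative: 100 ms chunks, a 3-chunk buffer, 200 ms network delay
print(dash_latency_ms(100, 200, 3))  # 500
```

<p>With numbers in this range, a sub-second glass-to-glass latency such as the ~500 ms mentioned above becomes plausible.</p>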
<p><img loading="lazy" class="aligncenter wp-image-640" src="http://vrtogether.eu/wp-content/uploads/2018/07/Imagen4.png" alt="" width="682" height="343" srcset="https://vrtogether.eu/wp-content/uploads/2018/07/Imagen4.png 958w, https://vrtogether.eu/wp-content/uploads/2018/07/Imagen4-300x151.png 300w, https://vrtogether.eu/wp-content/uploads/2018/07/Imagen4-768x386.png 768w, https://vrtogether.eu/wp-content/uploads/2018/07/Imagen4-700x352.png 700w, https://vrtogether.eu/wp-content/uploads/2018/07/Imagen4-410x206.png 410w, https://vrtogether.eu/wp-content/uploads/2018/07/Imagen4-100x50.png 100w, https://vrtogether.eu/wp-content/uploads/2018/07/Imagen4-275x138.png 275w" sizes="(max-width: 682px) 100vw, 682px" /></p>
<p>&nbsp;</p>
<h5>Who we are</h5>
<p><a href="http://www.motionspell.com" target="_blank" rel="noopener noreferrer">Motion Spell</a> is an SME specialized in audio-visual media technologies, created in 2013 in Paris, France. On a conceptual and technical level, Motion Spell will focus on the development of open transmission tools. Furthermore, Motion Spell plans to explore encoding requirements for VR in order to participate in the current standardization efforts and first implementations. Finally, we will also assist on the playback side to ensure the end-to-end workflow is covered.</p>
<p>Come and follow us in this VR journey with <a href="http://www.i2cat.net/en" target="_blank" rel="noopener noreferrer">i2CAT</a>, <a href="https://www.cwi.nl/" target="_blank" rel="noopener noreferrer">CWI</a>, <a href="https://www.tno.nl/en/" target="_blank" rel="noopener noreferrer">TNO</a>, <a href="https://www.certh.gr/root.en.aspx" target="_blank" rel="noopener noreferrer">CERTH</a>, <a href="http://www.artanim.ch/" target="_blank" rel="noopener noreferrer">Artanim</a>, <a href="https://www.viaccess-orca.com/" target="_blank" rel="noopener noreferrer">Viaccess-Orca</a>, <a href="http://www.entropystudio.net/" target="_blank" rel="noopener noreferrer">Entropy Studio</a>.</p>
<p><img loading="lazy" class="size-full wp-image-380 alignleft" src="http://vrtogether.eu/wp-content/uploads/2018/02/eu-logo.png" alt="" width="226" height="111" srcset="https://vrtogether.eu/wp-content/uploads/2018/02/eu-logo.png 226w, https://vrtogether.eu/wp-content/uploads/2018/02/eu-logo-100x49.png 100w" sizes="(max-width: 226px) 100vw, 226px" /></p>
<p><em>This project has been funded by the European Commission as part of the H2020 program, under the grant agreement 762111.</em></p>
<p>&nbsp;</p>
<p>The post <a rel="nofollow" href="https://vrtogether.eu/2018/07/13/vr-nab-show-calm-before-storm/">VR at NAB show: the calm before the storm?</a> appeared first on <a rel="nofollow" href="https://vrtogether.eu">VRTogether</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Delivering your new VR social experience: our commitment in VRTogether</title>
		<link>https://vrtogether.eu/2018/02/13/delivering-your-new-vr-social-experience-our-commitment-in-vrtogether/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=delivering-your-new-vr-social-experience-our-commitment-in-vrtogether</link>
		
		<dc:creator><![CDATA[Motion Spell]]></dc:creator>
		<pubDate>Tue, 13 Feb 2018 06:13:44 +0000</pubDate>
				<category><![CDATA[News]]></category>
		<guid isPermaLink="false">http://vrtogether.eu/?p=377</guid>

					<description><![CDATA[<p>Introduction Although VR has been a buzz word for the past few years, VR experience delivery is still at its infancy. There are some products starting to use tiles and other techniques to broadcast VR videos but real deployments still stream flat equirectangular videos. There are no standards able to stream and merge 3D models [&#8230;]</p>
<p>The post <a rel="nofollow" href="https://vrtogether.eu/2018/02/13/delivering-your-new-vr-social-experience-our-commitment-in-vrtogether/">Delivering your new VR social experience: our commitment in VRTogether</a> appeared first on <a rel="nofollow" href="https://vrtogether.eu">VRTogether</a>.</p>
]]></description>
										<content:encoded><![CDATA[<h5><strong>Introduction</strong></h5>
<p>Although VR has been a buzzword for the past few years, <strong>VR experience delivery is still in its infancy</strong>. Some products are starting to use <a href="http://gpac.io/2016/05/25/srd/" target="_blank" rel="noopener noreferrer">tiles</a> and other techniques to broadcast VR videos, but <strong>real deployments still stream flat <a href="https://en.wikipedia.org/wiki/Equirectangular_projection" target="_blank" rel="noopener noreferrer">equirectangular</a> videos.</strong> There are no standards able to stream and merge 3D models (e.g. &#8220;3D videos&#8221; or persons) into a <strong>unified social VR experience</strong>. <a href="http://vrtogether.eu/" target="_blank" rel="noopener noreferrer">VRTogether</a> comes to the rescue.</p>
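<p>To make the projection concrete: an equirectangular frame simply maps longitude and latitude linearly to pixel coordinates, which is why it is so easy to stream as a flat video. A minimal sketch (the function name is ours, for illustration):</p>

```python
import math

def equirect_pixel(lon: float, lat: float, width: int, height: int):
    """Map a viewing direction (longitude in [-pi, pi), latitude in
    [-pi/2, pi/2], both in radians) to pixel coordinates in an
    equirectangular frame: both axes are linear in the angles."""
    x = (lon + math.pi) / (2 * math.pi) * width
    y = (math.pi / 2 - lat) / math.pi * height
    return x, y

# The forward direction (lon=0, lat=0) lands in the middle of a 4K frame.
print(equirect_pixel(0.0, 0.0, 3840, 1920))  # (1920.0, 960.0)
```

<p>The price of this simplicity is heavily oversampled poles and wasted bitrate, which is exactly what tiled, viewport-dependent schemes try to fix.</p>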
<h5><strong>VR Together</strong></h5>
<p><img loading="lazy" class="aligncenter wp-image-378 size-full" src="http://vrtogether.eu/wp-content/uploads/2018/02/logo-vr.png" alt="" width="300" height="229" srcset="https://vrtogether.eu/wp-content/uploads/2018/02/logo-vr.png 300w, https://vrtogether.eu/wp-content/uploads/2018/02/logo-vr-100x76.png 100w, https://vrtogether.eu/wp-content/uploads/2018/02/logo-vr-275x210.png 275w" sizes="(max-width: 300px) 100vw, 300px" /></p>
<p><strong><a href="http://vrtogether.eu/" target="_blank" rel="noopener noreferrer">VRTogether</a> </strong>is a consortium about social virtual reality. Our goal is to <strong>provide a social virtual reality experience that mixes the natural and the artificial</strong> (e.g. a movie and 3D-reconstructed persons). In one of the <a href="http://vrtogether.eu/about-vr-together/pilots/" target="_blank" rel="noopener noreferrer">content samples</a> developed, users are able to reenact an experience that looks like one they could have seen in a thriller.</p>
<h5><strong>Market status</strong></h5>
<p>The world of VR has acted strangely lately. After the <strong>sudden boom</strong> (from 2014&#8217;s <a href="https://www.theguardian.com/technology/2014/jul/22/facebook-oculus-rift-acquisition-virtual-reality" target="_blank" rel="noopener noreferrer">Oculus Rift acquisition by Facebook for $2bn</a> to 2017&#8217;s reality check from the market), new <a href="https://en.wikipedia.org/wiki/Head-mounted_display" target="_blank" rel="noopener noreferrer">HMDs</a> were presented at <strong>CES 2018</strong>, solving real issues (increased field of view, better displays, less motion sickness, autonomous operation). However, while the problems are getting solved, the solutions receive little news coverage.</p>
<p>Social VR has been teased a lot during this VR frenzy. Who doesn&#8217;t remember this Facebook/Oculus conference with a room full of people wearing a Rift?</p>
<p><img loading="lazy" class="size-full wp-image-379 aligncenter" src="http://vrtogether.eu/wp-content/uploads/2018/02/mark-zuckerberg-samsung-unpacked-2016.jpg" alt="" width="930" height="618" srcset="https://vrtogether.eu/wp-content/uploads/2018/02/mark-zuckerberg-samsung-unpacked-2016.jpg 930w, https://vrtogether.eu/wp-content/uploads/2018/02/mark-zuckerberg-samsung-unpacked-2016-300x199.jpg 300w, https://vrtogether.eu/wp-content/uploads/2018/02/mark-zuckerberg-samsung-unpacked-2016-768x510.jpg 768w, https://vrtogether.eu/wp-content/uploads/2018/02/mark-zuckerberg-samsung-unpacked-2016-700x465.jpg 700w, https://vrtogether.eu/wp-content/uploads/2018/02/mark-zuckerberg-samsung-unpacked-2016-410x272.jpg 410w, https://vrtogether.eu/wp-content/uploads/2018/02/mark-zuckerberg-samsung-unpacked-2016-100x66.jpg 100w, https://vrtogether.eu/wp-content/uploads/2018/02/mark-zuckerberg-samsung-unpacked-2016-275x183.jpg 275w" sizes="(max-width: 930px) 100vw, 930px" /></p>
<p>Beyond the awe this image provoked in many people, there is an undeniable fact: <strong>social VR is one of the futures of VR.</strong></p>
<h5><strong>Social VR</strong></h5>
<p>To get a full experience of social VR, you need your friends to be represented in full 3D with a feeling of interaction with them. There are some technical catches:</p>
<ul>
<li>As of now, there are <strong>no standards</strong> for streaming 3D videos. The two main competing technologies are <a href="https://en.wikipedia.org/wiki/Polygon_mesh" target="_blank" rel="noopener noreferrer">Meshes</a> and <a href="https://en.wikipedia.org/wiki/Point_cloud" target="_blank" rel="noopener noreferrer">Point Clouds</a>, each at a different stage of standardization. Without a specification handled by a serious body (MPEG, W3C, etc.), device and browser integrations are absent. These are key points as of 2018, and we are working on them.</li>
<li><strong>Bandwidth limitation</strong>. The 3D representation of the users needs to be sent in real time, which currently requires a lot of bandwidth for a smooth experience: about 50 Mbps for a realistic representation.</li>
<li><strong>Latency</strong>. To interact convincingly with your friends, you must be able to see yourself and your friends moving instantly, without any perceptible delay. A self-representation should show almost zero delay.</li>
<li><strong>Synchronization</strong>. Users should not perceive strange artifacts (e.g. lip-sync issues, freezes, zombies, etc.).</li>
</ul>
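<p>A back-of-envelope calculation (with illustrative numbers, not project measurements) shows why bandwidth is the crux: an uncompressed point-cloud stream quickly reaches hundreds of Mbps, so heavy compression is needed to approach the ~50 Mbps figure above:</p>

```python
def raw_pointcloud_mbps(points_per_frame: int, bytes_per_point: int, fps: int) -> float:
    """Uncompressed bitrate in megabits per second for a streamed point
    cloud: points per frame * bytes per point * frames per second * 8 bits.
    All parameter values below are illustrative assumptions."""
    return points_per_frame * bytes_per_point * fps * 8 / 1e6

# Illustrative: 100k points, 9 bytes each (3x 16-bit xyz + 3x 8-bit RGB), 30 fps
raw = raw_pointcloud_mbps(100_000, 9, 30)
print(round(raw))  # 216
```

<p>Even this modest cloud yields 216 Mbps raw, roughly four times the target bitrate, before accounting for geometry precision or higher point counts.</p>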
<p>The dust is settling, the VR mania has cooled, and now we can start to get really serious about it.</p>
<h5><strong>Who we are</strong></h5>
<p><a href="http://www.motionspell.com" target="_blank" rel="noopener noreferrer">Motion Spell</a>, the company behind <a href="https://www.gpac-licensing.com/" target="_blank" rel="noopener noreferrer">GPAC Licensing</a>, will support that objective at <a href="https://www.nabshow.com/" target="_blank" rel="noopener noreferrer">NAB 2018</a> and at <a href="https://mpeg.chiariglione.org/meetings/122" target="_blank" rel="noopener noreferrer">MPEG 122 in San Diego</a>.</p>
<p>The technical objectives of the consortium are to provide <strong>more reliable technological bricks</strong> to support packaging and delivery of such experiences. As such, <a href="http://www.motionspell.com" target="_blank" rel="noopener noreferrer">Motion Spell</a> will deliver a first prototype as part of the <strong>VRTogether Pilot one</strong>, which will be available this semester.</p>
<p>Come and follow us in this VR journey with <a href="http://www.i2cat.net/en" target="_blank" rel="noopener noreferrer">i2Cat</a>, <a href="https://www.cwi.nl/" target="_blank" rel="noopener noreferrer">CWI</a>, <a href="https://www.tno.nl/en/" target="_blank" rel="noopener noreferrer">TNO</a>, <a href="http://futurelighthouse.com/" target="_blank" rel="noopener noreferrer">Future Lighthouse</a>, <a href="https://www.certh.gr/root.en.aspx" target="_blank" rel="noopener noreferrer">CERTH</a>, <a href="http://www.artanim.ch/" target="_blank" rel="noopener noreferrer">Artanim</a>, <a href="https://www.viaccess-orca.com/" target="_blank" rel="noopener noreferrer">Viaccess Orca</a>, <a href="http://www.entropystudio.net/" target="_blank" rel="noopener noreferrer">Entropy Studio</a>.</p>
<p><img loading="lazy" class="size-full wp-image-380 alignleft" src="http://vrtogether.eu/wp-content/uploads/2018/02/eu-logo.png" alt="" width="226" height="111" srcset="https://vrtogether.eu/wp-content/uploads/2018/02/eu-logo.png 226w, https://vrtogether.eu/wp-content/uploads/2018/02/eu-logo-100x49.png 100w" sizes="(max-width: 226px) 100vw, 226px" /></p>
<p><em>This project has been funded by the European Commission as part of the H2020 program, under the grant agreement 762111.</em></p>
<p>VRTogether: <a href="http://vrtogether.eu/" target="_blank" rel="noopener noreferrer">website</a>, <a href="https://twitter.com/vrtogether_eu" target="_blank" rel="noopener noreferrer">Twitter</a>.</p>
<p><em>Text and pictures: Rodolphe Fouquet – <a href="https://www.gpac-licensing.com/" target="_blank" rel="noopener noreferrer">Motion Spell</a></em></p>
<p>The post <a rel="nofollow" href="https://vrtogether.eu/2018/02/13/delivering-your-new-vr-social-experience-our-commitment-in-vrtogether/">Delivering your new VR social experience: our commitment in VRTogether</a> appeared first on <a rel="nofollow" href="https://vrtogether.eu">VRTogether</a>.</p>
]]></content:encoded>
					
		
		
			</item>
	</channel>
</rss>
