<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>standardisation &#8211; VRTogether</title>
	<atom:link href="https://vrtogether.eu/tag/standardisation/feed/" rel="self" type="application/rss+xml" />
	<link>https://vrtogether.eu</link>
	<description>An end-to-end system for the production and delivery of photorealistic and social virtual reality experiences</description>
	<lastBuildDate>Wed, 29 Jul 2020 18:58:35 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	

<image>
	<url>https://vrtogether.eu/wp-content/uploads/2018/04/cropped-iconavr-32x32.png</url>
	<title>standardisation &#8211; VRTogether</title>
	<link>https://vrtogether.eu</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>CWI Standardization Efforts in VQEG (and ITU)</title>
		<link>https://vrtogether.eu/2018/09/19/cwi-standardization-efforts-in-vqeg-and-itu/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=cwi-standardization-efforts-in-vqeg-and-itu</link>
		
		<dc:creator><![CDATA[CWI]]></dc:creator>
		<pubDate>Wed, 19 Sep 2018 11:58:59 +0000</pubDate>
				<category><![CDATA[News]]></category>
		<category><![CDATA[CWI]]></category>
		<category><![CDATA[ITU]]></category>
		<category><![CDATA[standardisation]]></category>
		<category><![CDATA[VQEG]]></category>
		<guid isPermaLink="false">http://vrtogether.eu/?p=720</guid>

					<description><![CDATA[<p>With the recent advances in capture and display technologies, VR and AR applications are in the spotlight again. These applications involve new kinds of visual signals, such as omnidirectional images and video, and volumetric signals, such as meshes and point clouds. Additionally, they imply a truly interactive and immersive user experience: the end user can [&#8230;]</p>
<p>The post <a rel="nofollow" href="https://vrtogether.eu/2018/09/19/cwi-standardization-efforts-in-vqeg-and-itu/">CWI Standardization Efforts in VQEG (and ITU)</a> appeared first on <a rel="nofollow" href="https://vrtogether.eu">VRTogether</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>With the recent advances in capture and display technologies, <strong>VR and AR applications are in the spotlight again</strong>. These applications involve new kinds of visual signals, such as omnidirectional images and video, and volumetric signals, such as meshes and point clouds. Additionally, they imply a truly interactive and immersive user experience: the end user can navigate the scene with three or six degrees of freedom (3DoF or 6DoF), depending on the scenario.</p>
<p>Assessing the quality of the signals and the user’s quality of experience for VR and AR applications opens up many <strong>new research challenges</strong> concerning human perception and interaction. It is therefore no surprise that standardisation and expert groups are looking into the problem of quality assessment of immersive media.</p>
<p><strong>CWI has recently started to actively participate in the activities of the Immersive Media Group</strong> (IMG) of the Video Quality Experts Group (<a href="https://www.its.bldrdoc.gov/vqeg/vqeg-home.aspx">VQEG</a>). VQEG provides a forum, via email lists and face-to-face meetings, for video quality assessment experts to exchange information and work together on common goals. The general motivation of VQEG is <strong>to advance the field of video quality assessment</strong> by investigating new and advanced subjective and objective methods for assessing quality. VQEG activities, such as validation tests, are documented in reports and submitted to the relevant ITU Study Groups (e.g., ITU-T SG9, ITU-T SG12, ITU-R WP6C) and other SDOs as appropriate. Several VQEG studies have resulted in ITU Recommendations.</p>
<p>IMG is a VQEG group currently looking at the <strong>quality assessment of immersive media</strong> used in virtual and augmented reality applications. CWI is involved in the group’s current activity, which focuses on defining a joint test plan for a subjective test campaign on the quality of 360-degree content. The group has also established a liaison with <a href="https://www.itu.int/en/ITU-T/studygroups/2017-2020/12/Pages/q13.aspx">ITU-T Question 13</a> on quality of experience (QoE), quality of service (QoS), and performance requirements and assessment methods for multimedia.</p>
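<p>As a rough illustration of what such a subjective campaign produces (this is not the IMG test plan itself, and the ratings below are made up), per-stimulus viewer scores on a 5-point scale are typically aggregated into a Mean Opinion Score (MOS) with a confidence interval:</p>

```python
# Illustrative sketch only: aggregate hypothetical viewer ratings into a
# Mean Opinion Score (MOS) and a ~95% confidence-interval half-width.
# See ITU-T P.910 for actual subjective test methodology.
from math import sqrt
from statistics import mean, stdev

def mos_with_ci(ratings, z=1.96):
    """Return the MOS and a z-scaled confidence-interval half-width."""
    m = mean(ratings)
    half_width = z * stdev(ratings) / sqrt(len(ratings))
    return m, half_width

# Hypothetical ratings from 8 viewers for one 360-degree sequence:
m, ci = mos_with_ci([4, 5, 3, 4, 4, 5, 4, 3])  # MOS = 4.0
```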
<p>The next face-to-face meeting of the VQEG IMG is scheduled for November 12 to 16 and will be <strong>hosted by Google</strong> in Mountain View, CA, USA. CWI is planning to participate in the meeting and present the current activities concerning quality assessment of point cloud signals and user’s QoE in social VR applications.</p>
<h5>Who we are</h5>
<p><a href="https://www.cwi.nl/" target="_blank" rel="noopener noreferrer">CWI</a> is the national research institute for mathematics and computer science of the Dutch National Science Foundation (NWO). CWI performs frontier research in mathematics and computer science and transfers new knowledge in these fields to society in general and trade and industry in particular.</p>
<p>Come and follow us in this VR journey with <a href="http://www.i2cat.net/en" target="_blank" rel="noopener noreferrer">i2Cat</a>, <a href="https://www.tno.nl/en/" target="_blank" rel="noopener noreferrer">TNO</a>, <a href="http://motionspell.com/" target="_blank" rel="noopener noreferrer">Motion Spell</a>, <a href="https://www.certh.gr/root.en.aspx" target="_blank" rel="noopener noreferrer">CERTH</a>, <a href="http://www.artanim.ch/" target="_blank" rel="noopener noreferrer">Artanim</a>, <a href="https://www.viaccess-orca.com/" target="_blank" rel="noopener noreferrer">Viaccess Orca</a>, <a href="http://www.entropystudio.net/" target="_blank" rel="noopener noreferrer">Entropy Studio</a>.</p>
<p><img loading="lazy" class="size-full wp-image-380 alignleft" src="http://vrtogether.eu/wp-content/uploads/2018/02/eu-logo.png" alt="" width="226" height="111" srcset="https://vrtogether.eu/wp-content/uploads/2018/02/eu-logo.png 226w, https://vrtogether.eu/wp-content/uploads/2018/02/eu-logo-100x49.png 100w" sizes="(max-width: 226px) 100vw, 226px" /></p>
<p><em>This project has been funded by the European Commission as part of the H2020 program, under the grant agreement 762111.</em></p>
<p>&nbsp;</p>
<p>The post <a rel="nofollow" href="https://vrtogether.eu/2018/09/19/cwi-standardization-efforts-in-vqeg-and-itu/">CWI Standardization Efforts in VQEG (and ITU)</a> appeared first on <a rel="nofollow" href="https://vrtogether.eu">VRTogether</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>The comeback of VR at MPEG</title>
		<link>https://vrtogether.eu/2018/07/18/comeback-vr-mpeg/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=comeback-vr-mpeg</link>
		
		<dc:creator><![CDATA[Motion Spell]]></dc:creator>
		<pubDate>Wed, 18 Jul 2018 07:19:51 +0000</pubDate>
				<category><![CDATA[News]]></category>
		<category><![CDATA[codec]]></category>
		<category><![CDATA[MPEG]]></category>
		<category><![CDATA[standardisation]]></category>
		<category><![CDATA[standards]]></category>
		<guid isPermaLink="false">http://vrtogether.eu/?p=650</guid>

					<description><![CDATA[<p>The general feeling in the MPEG community is that Virtual Reality (VR) made a false start. The $2bn acquisition of Oculus Rift (2014) prematurely inflated a funding bubble that burst in early 2017. However, the long-term trend shows that VR is coming back on the scene and will soon catch up into [&#8230;]</p>
<p>The post <a rel="nofollow" href="https://vrtogether.eu/2018/07/18/comeback-vr-mpeg/">The comeback of VR at MPEG</a> appeared first on <a rel="nofollow" href="https://vrtogether.eu">VRTogether</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>The general feeling in the MPEG community is that <strong>Virtual Reality (VR) made a false start</strong>. The <a href="https://www.theguardian.com/technology/2014/jul/22/facebook-oculus-rift-acquisition-virtual-reality" target="_blank" rel="noopener noreferrer">$2bn acquisition of Oculus Rift</a> (2014) prematurely inflated a funding bubble that burst in early 2017. However, the long-term trend shows that VR is coming back on the scene and will soon catch up in the market. As a VRTogether project partner, Motion Spell has great expectations for VR. Our participation in the last MPEG meeting (#122) in San Diego confirmed that <strong>VR activities are blossoming in every field</strong>: MPEG-I, OMAF, point clouds, NBMP, MPEG-MORE.</p>
<p>&nbsp;</p>
<h4>MPEG</h4>
<p>MPEG is the Moving Picture Experts Group, a joint working group of ISO and IEC that created some of the <strong>foundations of the multimedia industry</strong>: MPEG-2 TS and the MP4 file format, and a series of successful codecs in both video (MPEG-2 Video, AVC/H.264) and audio (MP3, AAC). A new generation (MPEG-H) emerged in 2013 with MPEG-H 3D Audio, HEVC and MMT, followed by other activities like MPEG-I (more below).</p>
<p>The <a href="http://gpac.io/" target="_blank" rel="noopener noreferrer">GPAC</a> team and its commercial arm (GPAC Licensing), which is led by <strong>Motion Spell</strong>, are active contributors at MPEG.</p>
<p>MPEG meetings are organized as a set of thematic meeting rooms representing <strong>different working groups</strong>. Each working group proceeds from requirements to a working draft and then to an international standard. Each MPEG meeting gathers around 500 participants from all over the world.</p>
<p>&nbsp;</p>
<h4>MPEG-I: Coded Representation of Immersive Media</h4>
<p>In mid-2017, MPEG started to work on MPEG-I, which targets <a href="https://mpeg.chiariglione.org/standards/mpeg-i" target="_blank" rel="noopener noreferrer">future immersive applications</a>. The goal of this new standard is to enable various <strong>forms of audio-visual immersion</strong>, including panoramic video with 2D and 3D audio and various degrees of true 3D visual perception (leaning toward 6 degrees of freedom). The standard has already reached a state of maturity that forces us to <strong>take it into account in VRTogether</strong>.</p>
<p>MPEG-I is a set of <strong>standards</strong> defining the future of media, which currently comprises eight parts:<br />
• Part 1: Requirements &#8211; Technical Report on Immersive Media<br />
• Part 2: OMAF &#8211; Omnidirectional Media Format<br />
• Part 3: Versatile Video Coding<br />
• Part 4: Immersive Audio Coding<br />
• Part 5: Point Cloud Compression<br />
• Part 6: Immersive Media Metrics<br />
• Part 7: Immersive Media Metadata<br />
• Part 8: NBMP &#8211; Network-Based Media Processing</p>
<p>In this article, we will focus on parts 1, 2, 3 and 8.</p>
<p>&nbsp;</p>
<h4>Architecture</h4>
<p>MPEG-I exposes a <strong>set</strong> of architectures rather than just one:<a href="http://vrtogether.eu/wp-content/uploads/2018/07/MPEG-VR.png"><img loading="lazy" class="aligncenter wp-image-652" src="http://vrtogether.eu/wp-content/uploads/2018/07/MPEG-VR.png" alt="" width="742" height="499" srcset="https://vrtogether.eu/wp-content/uploads/2018/07/MPEG-VR.png 1877w, https://vrtogether.eu/wp-content/uploads/2018/07/MPEG-VR-300x202.png 300w, https://vrtogether.eu/wp-content/uploads/2018/07/MPEG-VR-768x516.png 768w, https://vrtogether.eu/wp-content/uploads/2018/07/MPEG-VR-1024x688.png 1024w, https://vrtogether.eu/wp-content/uploads/2018/07/MPEG-VR-700x470.png 700w, https://vrtogether.eu/wp-content/uploads/2018/07/MPEG-VR-410x275.png 410w, https://vrtogether.eu/wp-content/uploads/2018/07/MPEG-VR-100x67.png 100w, https://vrtogether.eu/wp-content/uploads/2018/07/MPEG-VR-275x185.png 275w" sizes="(max-width: 742px) 100vw, 742px" /></a></p>
<h4>Part 1: Requirements</h4>
<p>MPEG-I requirements are divided into <strong>phases</strong>:</p>
<ul>
<li>Phase 1a: Captured-image-based multi-view encoding/decoding (standardization currently being finalized)</li>
<li>Phase 1b: Video-based multi-view encoding/decoding. Mostly 3DoF and some 3DoF+. The VR and interactivity activities are likely to be split.</li>
<li>Phase 2: Multi-view encoding/decoding based on video plus additional data (depth, point clouds). This phase takes 6DoF with limited freedom into account (omnidirectional 6DoF, windowed 6DoF) and allows synthesizing points of view from fixed cameras.</li>
</ul>
<p>&nbsp;</p>
<h4>Part 2: OMAF</h4>
<p>OMAF is a profile that explains <strong>how to use the MPEG tools with omnidirectional media</strong>. OMAF has kicked off work towards a 2nd edition enabling support for 3DoF+ and social VR, with the plan of going to Committee Draft (CD) in October 2018.</p>
<p>Additionally, a test framework has been proposed that allows assessing the performance of various CMAF tools. Its main focus is video, but MPEG’s audio subgroup has a similar framework to enable subjective testing. It would be interesting to see these two frameworks combined in one way or another.</p>
<p>OMAF implies new ISOBMFF/MP4 boxes and models. This is very interesting for VRTogether to follow, but it might be too complex to implement at this stage of the specification. In addition, OMAF targets the newest technologies, while VRTogether wants to deploy with existing ones. There is room for MPEG contributions in this area.</p>
<p>OMAF also specifies some support for timed metadata (like timed text for subtitles), which has to be followed as well. Our current workflow implies many static parameters on the capture or rendering sides that may become dynamic.</p>
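<p>A hypothetical sketch of what a static parameter becoming dynamic means in practice (illustrative names only, not OMAF’s actual box syntax): a rendering parameter stored as timed samples that the player queries at playback time:</p>

```python
# Hypothetical sketch, not OMAF box syntax: a parameter that used to be
# static (e.g. a viewing orientation) becomes a timed-metadata track,
# i.e. (timestamp, value) samples queried by the renderer at playback.
import bisect

samples = [(0.0, {"yaw": 0}), (2.0, {"yaw": 30}), (5.0, {"yaw": 90})]

def value_at(t, samples):
    # Step interpolation: take the last sample whose timestamp is <= t.
    times = [ts for ts, _ in samples]
    i = bisect.bisect_right(times, t) - 1
    return samples[max(i, 0)][1]

value_at(3.5, samples)  # -> {"yaw": 30}
```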
<p>&nbsp;</p>
<h4>Part 3: Versatile Video Coding</h4>
<p>This part focuses on immersive video coding and will be the successor of HEVC. The name <strong>VVC</strong> (Versatile Video Coding) was chosen by a show of hands at the meeting. Current experiments show that the VVC codec can outperform HEVC by 40%. The release of VVC is planned for October 2020.</p>
<p>&nbsp;</p>
<h4>Part 8: Network-Based Media Processing</h4>
<p>Network-Based Media Processing (NBMP) is a framework that allows service providers and end-users to <strong>describe media processing operations</strong> that are to be performed by the network. NBMP describes the composition of network-based media processing services out of a set of network-based media processing functions and makes these NBMP services accessible through Application Programming Interfaces (APIs).</p>
<p>Motion Spell, a partner of the VRTogether project, decided to attend several <a href="https://mpeg.chiariglione.org/standards/exploration/network-based-media-processing" target="_blank" rel="noopener noreferrer">sessions on NBMP</a> during MPEG meeting #122, since this new activity, which allows <strong>building media workflows</strong>, has generally been ignored so far. One of the main use cases in the output of the previous MPEG meeting in Gwangju was the <strong>ingest of media for distribution</strong>. Unified Streaming, which co-chaired the sessions, wants to standardize ingest. This is very exciting for VRTogether: Motion Spell is interested in implementing an ingest component. For example, it could be useful to compare our low-latency ingest implementation with the status of this standardization effort by the end of the project.</p>
<p><a href="https://www.tno.nl/en/" target="_blank" rel="noopener noreferrer">TNO</a>, also a partner of the VRTogether project, proposed a contribution that focuses on <strong>extending scene description for 3D environments</strong> within the scope of NBMP. The contribution also presented a tentative list of NBMP functions that could be useful for social VR:</p>
<ul>
<li>Background removal.</li>
<li>User detection and scaling.</li>
<li>Room composition without users.</li>
<li>Room composition with users.</li>
<li>Low-latency 3DOF encoding.</li>
<li>Network-based media synchronization.</li>
<li>3D audio mixing functionality.</li>
<li>Lip-sync compensation.</li>
</ul>
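<p>To make the idea concrete, here is a hypothetical sketch (function and field names are placeholders, not the draft’s actual API) of how such functions could be chained into a network-side workflow:</p>

```python
# Hypothetical sketch of the NBMP idea: a media workflow composed from
# independent network-side processing functions. All names below are
# illustrative placeholders, not the NBMP draft's actual API.

def background_removal(frame):
    return {**frame, "background": None}

def user_scaling(frame, scale=1.0):
    return {**frame, "user_scale": scale}

def room_composition(frame, with_users=True):
    return {**frame, "composed": True, "with_users": with_users}

# A workflow: an ordered list of (function, parameters) pairs that the
# network applies to each incoming media unit.
workflow = [
    (background_removal, {}),
    (user_scaling, {"scale": 0.75}),
    (room_composition, {"with_users": True}),
]

def run_workflow(frame, workflow):
    for func, params in workflow:
        frame = func(frame, **params)
    return frame

result = run_workflow({"id": 1, "background": "office"}, workflow)
```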
<p>&nbsp;</p>
<h5>Who we are</h5>
<p><a href="http://www.motionspell.com" target="_blank" rel="noopener noreferrer">Motion Spell</a> is an SME specialized in audio-visual media technologies, created in 2013 in Paris, France. On a conceptual and technical level, Motion Spell will focus on the development of open transmission tools. Furthermore, Motion Spell plans to explore encoding requirements for VR in order to participate in the current standardization efforts and first implementations. Finally, we will also assist on the playback side to ensure the end-to-end workflow is covered.</p>
<p>Come and follow us in this VR journey with <a href="http://www.i2cat.net/en" target="_blank" rel="noopener noreferrer">i2CAT</a>, <a href="https://www.cwi.nl/" target="_blank" rel="noopener noreferrer">CWI</a>, <a href="https://www.tno.nl/en/" target="_blank" rel="noopener noreferrer">TNO</a>, <a href="https://www.certh.gr/root.en.aspx" target="_blank" rel="noopener noreferrer">CERTH</a>, <a href="http://www.artanim.ch/" target="_blank" rel="noopener noreferrer">Artanim</a>, <a href="https://www.viaccess-orca.com/" target="_blank" rel="noopener noreferrer">Viaccess-Orca</a>, <a href="http://www.entropystudio.net/" target="_blank" rel="noopener noreferrer">Entropy Studio</a>.</p>
<p><img loading="lazy" class="size-full wp-image-380 alignleft" src="http://vrtogether.eu/wp-content/uploads/2018/02/eu-logo.png" alt="" width="226" height="111" srcset="https://vrtogether.eu/wp-content/uploads/2018/02/eu-logo.png 226w, https://vrtogether.eu/wp-content/uploads/2018/02/eu-logo-100x49.png 100w" sizes="(max-width: 226px) 100vw, 226px" /></p>
<p><em>This project has been funded by the European Commission as part of the H2020 program, under the grant agreement 762111.</em></p>
<p>&nbsp;</p>
<p>The post <a rel="nofollow" href="https://vrtogether.eu/2018/07/18/comeback-vr-mpeg/">The comeback of VR at MPEG</a> appeared first on <a rel="nofollow" href="https://vrtogether.eu">VRTogether</a>.</p>
]]></content:encoded>
					
		
		
			</item>
	</channel>
</rss>
