<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>VR &#8211; VRTogether</title>
	<atom:link href="https://vrtogether.eu/tag/vr/feed/" rel="self" type="application/rss+xml" />
	<link>https://vrtogether.eu</link>
	<description>An end-to-end system for the production and delivery of photorealistic and social virtual reality experiences</description>
	<lastBuildDate>Wed, 29 Jul 2020 18:58:29 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	

<image>
	<url>https://vrtogether.eu/wp-content/uploads/2018/04/cropped-iconavr-32x32.png</url>
	<title>VR &#8211; VRTogether</title>
	<link>https://vrtogether.eu</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>VRTogether wins the Best Demo Award at MMM2019</title>
		<link>https://vrtogether.eu/2019/01/18/vrtogether-wins-the-best-demo-award-at-mmm2019/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=vrtogether-wins-the-best-demo-award-at-mmm2019</link>
		
		<dc:creator><![CDATA[CERTH]]></dc:creator>
		<pubDate>Fri, 18 Jan 2019 08:20:55 +0000</pubDate>
				<category><![CDATA[Events]]></category>
		<category><![CDATA[News]]></category>
		<category><![CDATA[Award]]></category>
		<category><![CDATA[Conference]]></category>
		<category><![CDATA[TVM]]></category>
		<category><![CDATA[VR]]></category>
		<guid isPermaLink="false">http://vrtogether.eu/?p=788</guid>

					<description><![CDATA[<p>The post <a rel="nofollow" href="https://vrtogether.eu/2019/01/18/vrtogether-wins-the-best-demo-award-at-mmm2019/">VRTogether wins the Best Demo Award at MMM2019</a> appeared first on <a rel="nofollow" href="https://vrtogether.eu">VRTogether</a>.</p>
]]></description>
										<content:encoded><![CDATA[<div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-12"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p>VRTogether project partner <a href="https://www.certh.gr/root.en.aspx" target="_blank" rel="noopener">CERTH</a> participated in the 25th <b>International MultiMedia Modeling Conference (MMM) </b><span style="font-weight: 400;">with the SpaceWars demo</span><b>. </b><span style="font-weight: 400;">The MMM</span> <span style="font-weight: 400;">conference is a leading international forum for researchers and industry practitioners to share new ideas, original research results and practical development experiences from all multimedia-related areas. The MMM 2019 program was organized into several regular oral sessions, five special sessions, two poster and demo sessions, one industry session, the Video Browser Showdown session, one workshop, three invited keynote talks and two tutorials.</span></p>
<p>On the 4th day of the conference, CERTH’s (VCL) team presented a poster that depicts the <b>Time-Varying Mesh</b><span style="font-weight: 400;">-based (TVM) reconstruction pipeline as used in </span><b>VRTogether</b><span style="font-weight: 400;">, as well as a game application called “</span><b>SpaceWars</b><span style="font-weight: 400;">” that leverages the real-time reconstruction of the users.</span></p>
<p><iframe width="1170" height="658" src="https://www.youtube.com/embed/nK7pC41YjZY?feature=oembed" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe></p>
<p>SpaceWars is a tele-immersive game that places multiple players in the same virtual arena on top of futuristic hovercrafts, where they engage each other in a Capture-the-Flag style match. SpaceWars utilizes the latest version of the VRTogether platform developed at CERTH (VCL) and was built to stress-test the technology with respect to real-time interactions between remote users, under the challenging responsiveness requirements of a multiplayer game.</p>
<p>The current version of VRTogether’s TVM-based system consists of only four Microsoft Kinect sensors surrounding the user. The platform is portable and can be easily deployed and operated as a local capturing station, and it is low-cost in terms of equipment, relying on consumer-grade vision sensors. Additionally, it utilizes an easy-to-use calibration scheme based on a custom-built structure made of commercially available materials.</p>
<p>The SpaceWars system also includes a VR spectator mode, allowing users who are not playing to “live” the experience inside the virtual environment and watch the ongoing game.</p>
<p>During the demo session, all attendees had the opportunity to vote for the best presented demo. Subsequently, the committee convened taking into account the aforementioned votes. SpaceWars was voted as the best presented demo of the conference, winning the <b>Best Demo Award</b> along with the corresponding prize from Springer.</p>
<p>The award is available at the <b>Visual Computing Lab</b> official site (<a href="http://vcl.iti.gr/new/best-demo-award-mmm-2019/">click here</a>).</p>

		</div>
	</div>
</div></div></div></div><section class="vc_section"><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-12"><div class="vc_column-inner"><div class="wpb_wrapper"><div class="vc_separator wpb_content_element vc_separator_align_center vc_sep_width_100 vc_sep_pos_align_center vc_separator_no_text vc_sep_color_grey" ><span class="vc_sep_holder vc_sep_holder_l"><span  class="vc_sep_line"></span></span><span class="vc_sep_holder vc_sep_holder_r"><span  class="vc_sep_line"></span></span>
</div>
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p><em>Text and figures: Spyridon Thermos, Anargyros Chatzitofis &#8211; CERTH</em></p>
<p>Come and follow us in this VR journey with <a href="http://www.i2cat.net/en" target="_blank" rel="noopener">i2CAT</a>, <a href="https://www.cwi.nl/" target="_blank" rel="noopener">CWI</a>, <a href="https://www.tno.nl/en/" target="_blank" rel="noopener">TNO</a>, <a href="https://www.certh.gr/root.en.aspx" target="_blank" rel="noopener">CERTH</a>, <a href="http://www.artanim.ch/" target="_blank" rel="noopener">Artanim</a>, <a href="https://www.viaccess-orca.com/" target="_blank" rel="noopener">Viaccess-Orca</a>, <a href="https://www.entropystudio.net/" target="_blank" rel="noopener">Entropy Studio</a>, <a href="https://www.gpac-licensing.com/" target="_blank" rel="noopener">Motion Spell</a>.</p>

		</div>
	</div>
</div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-12"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div></section><section class="vc_section"><div class="vc_row wpb_row vc_row-fluid vc_column-gap-10"><div class="wpb_column vc_column_container vc_col-sm-12"><div class="vc_column-inner"><div class="wpb_wrapper"><div class="vc_row wpb_row vc_inner vc_row-fluid vc_custom_1547799200116 vc_row-has-fill vc_row-o-content-middle vc_row-flex"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_center">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="75" height="50" src="https://vrtogether.eu/wp-content/uploads/2019/01/EU_flag_75px.png" class="vc_single_image-img attachment-thumbnail" alt="" loading="lazy" /></div>
		</figure>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p>This project has been funded by the European Commission as part of the H2020 program, under the grant agreement 762111.</p>

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div></div></div></div></div></section>
<p>The post <a rel="nofollow" href="https://vrtogether.eu/2019/01/18/vrtogether-wins-the-best-demo-award-at-mmm2019/">VRTogether wins the Best Demo Award at MMM2019</a> appeared first on <a rel="nofollow" href="https://vrtogether.eu">VRTogether</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>VR strikes back at MPEG</title>
		<link>https://vrtogether.eu/2018/10/02/vr-strikes-back-mpeg/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=vr-strikes-back-mpeg</link>
		
		<dc:creator><![CDATA[Motion Spell]]></dc:creator>
		<pubDate>Tue, 02 Oct 2018 14:38:48 +0000</pubDate>
				<category><![CDATA[Events]]></category>
		<category><![CDATA[News]]></category>
		<category><![CDATA[Compression]]></category>
		<category><![CDATA[MPEG]]></category>
		<category><![CDATA[Point Cloud]]></category>
		<category><![CDATA[VR]]></category>
		<guid isPermaLink="false">http://vrtogether.eu/?p=727</guid>

					<description><![CDATA[<p>VRTogether project partners Motion Spell, TNO and CWI participated at the last MPEG meetings (#122 and #123) in San Diego and Ljubljana with the intention of getting brand-new feedback around Virtual Reality. It appears that VR activities blossom in many fields:  MPEG-I, OMAF, Point clouds, NBMP, MPEG-MORE. The long-term trend shows that VR is coming [&#8230;]</p>
<p>The post <a rel="nofollow" href="https://vrtogether.eu/2018/10/02/vr-strikes-back-mpeg/">VR strikes back at MPEG</a> appeared first on <a rel="nofollow" href="https://vrtogether.eu">VRTogether</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>VRTogether project partners <strong>Motion Spell, TNO and CWI</strong> participated at the last <strong>MPEG meetings</strong> (<a href="http://2018isoiec.regstep.com/home/page/index">#122</a> and <a href="http://www.kcmweb.de/conferences/MPEG123_alias/www.kcmweb.de/conferences/mpeg123/">#123</a>) in San Diego and Ljubljana with the intention of getting brand-new feedback around Virtual Reality. It appears that VR activities blossom in many fields: MPEG-I, OMAF, Point clouds, NBMP, MPEG-MORE. The long-term trend shows that VR is coming back on the scene and will soon catch on in the market.</p>
<p>This article focuses on Point Clouds and MPEG-MORE. The <a href="http://vrtogether.eu/2018/07/18/comeback-vr-mpeg/">first part of this article</a> covers all the other technologies.</p>
<h4>Point Cloud Compression</h4>
<p><a href="https://mpeg.chiariglione.org/standards/mpeg-i">MPEG-I</a> targets<strong> future immersive applications</strong>. Part 5 of this standard specifies <strong>Point Cloud Compression</strong> (PCC).</p>
<p>A point cloud is defined as a set of points in 3D space. Each point is identified by its Cartesian coordinates (x,y,z), referred to as spatial attributes, as well as by other attributes, such as a color, a normal, a reflectance value, etc. There are no restrictions on the attributes associated with each point.</p>
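<p>As a rough sketch of this data model (the attribute names below are illustrative, not taken from the MPEG specification), a point and a cloud could be represented as follows:</p>

```python
from dataclasses import dataclass

@dataclass
class Point:
    """One point of a point cloud: spatial attributes (x, y, z)
    plus optional per-point attributes."""
    x: float
    y: float
    z: float
    color: tuple = (0, 0, 0)          # RGB colour (illustrative attribute)
    normal: tuple = (0.0, 0.0, 1.0)   # surface normal (illustrative)
    reflectance: float = 0.0          # reflectance value (illustrative)

# A point cloud is simply an unordered collection of such points;
# unlike a mesh, there is no connectivity information between them.
cloud = [Point(0.1, 0.2, 0.3, color=(255, 0, 0)),
         Point(0.4, 0.5, 0.6)]
```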
<p>Point clouds make it possible to represent <strong>volumetric signals</strong>. Because of their simplicity and versatility, they are important for emerging AR and VR applications. Point clouds are usually captured using multiple RGB-plus-depth sensors, and a single cloud can contain millions of points in order to create a photorealistic reconstruction of an object. Compressing point clouds is therefore essential to efficiently store and transmit volumetric data for applications such as tele-immersive video and free-viewpoint sports replays, as well as for innovative medical and robotic applications.</p>
<p>MPEG has a separate activity on point cloud compression: in April 2017 MPEG issued a Call for Proposals (CfP) on PCC technologies, seeking compression proposals in three categories:</p>
<ol>
<li>Static frames</li>
<li>Dynamic sequences</li>
<li>Dynamically acquired/fused point clouds</li>
</ol>
<p>Leading technology companies responded to the CfP, and the proposals were assessed in October 2017. In addition to objective metrics, <a href="https://mpeg.chiariglione.org/meetings/120">each proposal was also evaluated through subjective tests</a>, performed at GBTech and CWI. The winning projects were selected as “Test Models” for the next step of the standardization activity.</p>
<p>For the compression of dynamic sequences, it was found that compression performance can be significantly improved by <strong>leveraging existing video codecs after performing a 3D to 2D conversion</strong> using a suitable mapping scheme. This also allows the use of hardware acceleration of existing video codecs, which is supported by many current generation GPUs. Thus, synergies with existing hardware and software infrastructure can allow rapid deployment of new immersive experiences.</p>
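<p>The following minimal sketch illustrates the idea of such a 3D-to-2D conversion. It uses a naive orthographic projection onto a single depth map and texture map; the actual Test Model uses a far more elaborate patch-based decomposition, so treat this only as an illustration of the principle that, once geometry and texture live in images, an ordinary video codec can compress them:</p>

```python
import numpy as np

def project_to_2d(points, colors, resolution=64):
    """Naive orthographic 3D-to-2D mapping: quantize (x, y) into pixel
    coordinates, store z in a depth map and the color in a texture map.
    Real PCC uses patch decomposition; this only illustrates the idea."""
    depth = np.zeros((resolution, resolution), dtype=np.float32)
    texture = np.zeros((resolution, resolution, 3), dtype=np.uint8)
    for (x, y, z), c in zip(points, colors):
        u = int(x * (resolution - 1))   # assumes coordinates in [0, 1]
        v = int(y * (resolution - 1))
        if z >= depth[v, u]:            # keep only the front-most point
            depth[v, u] = z
            texture[v, u] = c
    return depth, texture

points = [(0.5, 0.5, 0.9), (0.5, 0.5, 0.2), (0.0, 0.0, 0.3)]
colors = [(255, 0, 0), (0, 255, 0), (0, 0, 255)]
depth, tex = project_to_2d(points, colors)
```

The depth and texture images would then be fed to a standard 2D video encoder, which is where the hardware acceleration mentioned above comes in.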
<p><img loading="lazy" class="aligncenter wp-image-728" src="http://vrtogether.eu/wp-content/uploads/2018/10/Point-Cloud-Example.jpg" alt="" width="843" height="321" srcset="https://vrtogether.eu/wp-content/uploads/2018/10/Point-Cloud-Example.jpg 1105w, https://vrtogether.eu/wp-content/uploads/2018/10/Point-Cloud-Example-300x114.jpg 300w, https://vrtogether.eu/wp-content/uploads/2018/10/Point-Cloud-Example-768x293.jpg 768w, https://vrtogether.eu/wp-content/uploads/2018/10/Point-Cloud-Example-1024x390.jpg 1024w, https://vrtogether.eu/wp-content/uploads/2018/10/Point-Cloud-Example-700x267.jpg 700w, https://vrtogether.eu/wp-content/uploads/2018/10/Point-Cloud-Example-410x156.jpg 410w, https://vrtogether.eu/wp-content/uploads/2018/10/Point-Cloud-Example-100x38.jpg 100w, https://vrtogether.eu/wp-content/uploads/2018/10/Point-Cloud-Example-275x105.jpg 275w" sizes="(max-width: 843px) 100vw, 843px" /></p>
<p>Figure 1: An example of a perspective view of a point cloud: the original version on the left, and two views of compressed versions of the same point cloud in the middle and on the right.</p>
<p>After the selection of the Test Models that combine the best performing technologies, the activity focused on the identification and investigation of methods to optimize the Test Models, by performing “Core Experiments”. Examples of Core Experiments include the comparison of different schemes for mapping the texture information from 3D to 2D, the analysis of hybrid codecs that combine 3D geometry compression techniques with traditional block-based video compression strategies, and the use of motion field coding. These Core Experiments are still ongoing.</p>
<p>At the last MPEG meetings, the PCC activity was particularly busy, <strong>attracting attention from many industrial partners</strong>. The main activities of the group focused on cross-checking test models and reviewing the results of the core experiments. In addition, some new datasets created by commercial companies (8i, Owlii, Samsung, Fraunhofer) were presented, along with a proposal to merge two of the Test Models; the goal was to take advantage of 2D compression techniques (HEVC and its successors). A preliminary contribution also explored the delivery and transmission of point clouds based on this approach.</p>
<p>In the VRTogether project, CWI is providing a solution for <a href="https://ieeexplore.ieee.org/document/7434610/">lossy compression of dynamic point clouds</a>, based on the open source software for Point Cloud Compression developed at CWI (available at <a href="https://github.com/cwi-dis/cwi-pcl-codec" target="_blank" rel="noopener">https://github.com/cwi-dis/cwi-pcl-codec</a>). This solution will not be competing in the standardization race, but it serves as an <strong>open source tool to benchmark different solutions and experiment with research ideas</strong>. Currently, it is being integrated into the VRTogether DASH-based point cloud communication pipeline that will allow multiple users to see each other’s point cloud representations, captured in real time and rendered in the same virtual environment. Part of CWI’s research within the VRTogether project will also focus on the design of new objective quality metrics for evaluating point clouds, based on the study of human perception of volumetric signals.</p>
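<p>For context on such objective quality metrics, a common baseline is the symmetric point-to-point error between a reference cloud and its compressed version. The function below is our own simplified, brute-force sketch (not CWI&#8217;s codec or benchmark code); practical benchmarks use k-d trees for the nearest-neighbour search and report a PSNR derived from this error:</p>

```python
import numpy as np

def point_to_point_error(ref, deg):
    """Symmetric point-to-point error between two point clouds,
    given as (N, 3) arrays of coordinates. For each point in one cloud,
    take the squared distance to its nearest neighbour in the other;
    the metric is the worse (max) of the two directional mean errors."""
    def one_way(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(axis=2)
        return d2.min(axis=1).mean()
    return max(one_way(ref, deg), one_way(deg, ref))

ref = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
deg = np.array([[0.0, 0.0, 0.1], [1.0, 0.0, 0.0]])
err = point_to_point_error(ref, deg)
```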
<h4>MPEG-MORE</h4>
<p><strong>MPEG’s Media Orchestration standard</strong> (also known as MORE: MPEG-B part 13 &#8211; <a href="https://mpeg.chiariglione.org/standards/mpeg-b/media-orchestration">https://mpeg.chiariglione.org/standards/mpeg-b/media-orchestration</a>) has been finalized by the committee, and the final edited version has been submitted to MPEG’s parent body and ISO for one more yes/no ballot followed by publication. This last step is a mere formality. It is hard to estimate when the specification will be published by ISO, since that requires some secretariat work and can take quite a few months. The work on Reference Content and Software continues; it intends to make content available with MORE metadata so that (potential) users of the MORE specification can understand how it works and are assisted in creating implementations.</p>
<p>In the meantime, <strong>Social VR has become more important in MPEG</strong>, and it looks like some of the requirements can be fulfilled by the MORE specification. This notably applies to the simpler forms of Social VR, where images of users are composited into a VR360 experience. This requires both temporal synchronization (multiple users should experience the same scene simultaneously) and spatial coordination (the composited images for all users need to have consistent location and size for the experience to be perceived as realistic and compelling). MORE defines the necessary metadata and protocols for this.</p>
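<p>To make the two requirements concrete (the structure below is our own illustration, not the actual MPEG-B part 13 syntax), orchestration metadata for one composited user might carry a common-clock capture timestamp for temporal synchronization and angular placement on the 360 sphere for spatial coordination:</p>

```python
from dataclasses import dataclass

@dataclass
class UserOverlay:
    """Illustrative (hypothetical, not MPEG-B part 13 syntax) metadata
    for compositing one user's image into a shared VR360 experience."""
    user_id: str
    capture_timestamp: float  # seconds on a common clock (temporal sync)
    yaw_deg: float            # placement on the 360 sphere
    pitch_deg: float
    width_deg: float          # angular size, so every client renders
    height_deg: float         # the overlay at a consistent scale

def in_sync(overlays, tolerance_s=0.040):
    """All composited frames should refer to (nearly) the same instant."""
    times = [o.capture_timestamp for o in overlays]
    return max(times) - min(times) <= tolerance_s

a = UserOverlay("alice", 12.000, yaw_deg=-30, pitch_deg=0, width_deg=20, height_deg=35)
b = UserOverlay("bob",   12.025, yaw_deg=30,  pitch_deg=0, width_deg=20, height_deg=35)
```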
<h4>About MPEG</h4>
<p>MPEG is the Moving Picture Experts Group, a group from IEC and ISO which created some of the foundations of the multimedia industry: the MPEG-2 Transport Stream and the MP4 file format, plus a series of successful codecs in both video (MPEG-2 Video, AVC/H.264) and audio (MP3, AAC). A new generation (MPEG-H) emerged in 2013 with MPEG 3D Audio, HEVC and MMT, alongside other activities in MPEG-I such as Point Cloud compression and MPEG Orchestration.</p>
<p>&nbsp;</p>
<h5>Who we are</h5>
<p><a href="http://www.motionspell.com" target="_blank" rel="noopener">Motion Spell</a> is an SME specialized in audio-visual media technologies, created in 2013 in Paris, France. On a conceptual and technical level, Motion Spell will focus on the development of open transmission tools. Furthermore, Motion Spell plans to explore encoding requirements for VR in order to participate in the current standardization efforts and first implementations. Finally, we will also assist on the playback side to ensure the end-to-end workflow is covered.</p>
<p>Come and follow us in this VR journey with <a href="http://www.i2cat.net/en" target="_blank" rel="noopener">i2CAT</a>, <a href="https://www.cwi.nl/" target="_blank" rel="noopener">CWI</a>, <a href="https://www.tno.nl/en/" target="_blank" rel="noopener">TNO</a>, <a href="https://www.certh.gr/root.en.aspx" target="_blank" rel="noopener">CERTH</a>, <a href="http://www.artanim.ch/" target="_blank" rel="noopener">Artanim</a>, <a href="https://www.viaccess-orca.com/" target="_blank" rel="noopener">Viaccess-Orca</a>, <a href="http://www.entropystudio.net/" target="_blank" rel="noopener">Entropy Studio</a>.</p>
<p><img loading="lazy" class="size-full wp-image-380 alignleft" src="http://vrtogether.eu/wp-content/uploads/2018/02/eu-logo.png" alt="" width="226" height="111" srcset="https://vrtogether.eu/wp-content/uploads/2018/02/eu-logo.png 226w, https://vrtogether.eu/wp-content/uploads/2018/02/eu-logo-100x49.png 100w" sizes="(max-width: 226px) 100vw, 226px" /></p>
<p>The post <a rel="nofollow" href="https://vrtogether.eu/2018/10/02/vr-strikes-back-mpeg/">VR strikes back at MPEG</a> appeared first on <a rel="nofollow" href="https://vrtogether.eu">VRTogether</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Your body feels good</title>
		<link>https://vrtogether.eu/2018/08/16/your-body-feels-good/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=your-body-feels-good</link>
		
		<dc:creator><![CDATA[Artanim]]></dc:creator>
		<pubDate>Thu, 16 Aug 2018 08:42:40 +0000</pubDate>
				<category><![CDATA[News]]></category>
		<category><![CDATA[Avatar]]></category>
		<category><![CDATA[Experiment]]></category>
		<category><![CDATA[Social VR]]></category>
		<category><![CDATA[VR]]></category>
		<guid isPermaLink="false">http://vrtogether.eu/?p=685</guid>

					<description><![CDATA[<p>Do you feel in control of the body that you see? This is an important question in virtual reality (VR) as it highly impacts the user’s sensation of presence and embodiment of an avatar representation while immersed in a virtual environment. To better understand this aspect, VRTogether partner Artanim performed an experiment to assess [&#8230;]</p>
<p>The post <a rel="nofollow" href="https://vrtogether.eu/2018/08/16/your-body-feels-good/">Your body feels good</a> appeared first on <a rel="nofollow" href="https://vrtogether.eu">VRTogether</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Do you feel in control of the body that you see? This is an important question in virtual reality (VR) as it highly impacts the <strong>user’s sensation of presence and embodiment of an avatar representation</strong> while immersed in a virtual environment. To better understand this aspect, VRTogether partner <a href="http://www.artanim.ch/"><strong>Artanim</strong></a> performed an experiment to assess the relative impact of different levels of body-animation fidelity on presence.</p>
<p>In this <strong>experiment</strong>, users are equipped with a motion capture suit and reflective markers to track their movements in real time with a Vicon optical motion capture system. They also wear Manus VR gloves for finger tracking and an Oculus HMD. In each trial, the face (eye gaze and mouth), the fingers and the avatar&#8217;s upper and lower body are animated with different degrees of fidelity: no animation, procedural animation or motion capture. Each time, users have to execute a number of tasks (walk, grab an object, speak in front of a mirror) and evaluate whether they feel in control of their body. Users start with the simplest setting and, according to their judged priority, improve features of the avatar animation until they are satisfied with the experience of control.</p>
<p>Using the order in which users improve the movement features, we can determine which animation features they value most. With this experiment, we want to weigh the relative importance of animation features against the costs of adoption (monetary and effort), in order to provide software and usage guidelines for live animation of 3D rigged character meshes based on affordable hardware. This outcome will help us better define <strong>what makes a compelling social VR experience</strong>.</p>
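<p>The adaptive procedure can be sketched as follows (function and feature names are ours, not Artanim&#8217;s protocol): starting from the simplest setting, the user repeatedly picks one animation feature to upgrade until satisfied, and the order of upgrades is what gets recorded:</p>

```python
# Illustrative sketch of the adaptive upgrade procedure; names are
# hypothetical, not taken from the actual experiment software.
FEATURES = ["upper body", "lower body", "fingers", "face"]
LEVELS = ["none", "procedural", "motion capture"]

def run_trial(choose_next, satisfied):
    """choose_next(state) -> which feature the user upgrades next;
    satisfied(state) -> whether the user accepts the current fidelity.
    Returns the recorded order of upgrades as (feature, level) pairs."""
    state = {f: 0 for f in FEATURES}   # index into LEVELS, all start at "none"
    order = []
    while not satisfied(state):
        f = choose_next(state)
        if state[f] < len(LEVELS) - 1:
            state[f] += 1
            order.append((f, LEVELS[state[f]]))
    return order

# Example: a (hypothetical) user who always prioritizes the upper body
# and is satisfied once it reaches motion-capture fidelity.
order = run_trial(choose_next=lambda s: "upper body",
                  satisfied=lambda s: s["upper body"] == 2)
```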
<p><img loading="lazy" class="wp-image-690 alignleft" src="http://vrtogether.eu/wp-content/uploads/2018/08/VR-Together.jpg" alt="" width="558" height="594" srcset="https://vrtogether.eu/wp-content/uploads/2018/08/VR-Together.jpg 1200w, https://vrtogether.eu/wp-content/uploads/2018/08/VR-Together-282x300.jpg 282w, https://vrtogether.eu/wp-content/uploads/2018/08/VR-Together-768x818.jpg 768w, https://vrtogether.eu/wp-content/uploads/2018/08/VR-Together-962x1024.jpg 962w, https://vrtogether.eu/wp-content/uploads/2018/08/VR-Together-700x746.jpg 700w, https://vrtogether.eu/wp-content/uploads/2018/08/VR-Together-410x437.jpg 410w, https://vrtogether.eu/wp-content/uploads/2018/08/VR-Together-100x107.jpg 100w, https://vrtogether.eu/wp-content/uploads/2018/08/VR-Together-275x293.jpg 275w" sizes="(max-width: 558px) 100vw, 558px" /><img loading="lazy" class="alignleft wp-image-688" src="http://vrtogether.eu/wp-content/uploads/2018/08/VR-Together3.jpg" alt="" width="378" height="283" srcset="https://vrtogether.eu/wp-content/uploads/2018/08/VR-Together3.jpg 1200w, https://vrtogether.eu/wp-content/uploads/2018/08/VR-Together3-300x225.jpg 300w, https://vrtogether.eu/wp-content/uploads/2018/08/VR-Together3-768x576.jpg 768w, https://vrtogether.eu/wp-content/uploads/2018/08/VR-Together3-1024x768.jpg 1024w, https://vrtogether.eu/wp-content/uploads/2018/08/VR-Together3-700x525.jpg 700w, https://vrtogether.eu/wp-content/uploads/2018/08/VR-Together3-410x308.jpg 410w, https://vrtogether.eu/wp-content/uploads/2018/08/VR-Together3-100x75.jpg 100w, https://vrtogether.eu/wp-content/uploads/2018/08/VR-Together3-275x206.jpg 275w" sizes="(max-width: 378px) 100vw, 378px" /></p>
<p><img loading="lazy" class="wp-image-689 alignnone" src="http://vrtogether.eu/wp-content/uploads/2018/08/VR-Together4.jpg" alt="" width="377" height="283" srcset="https://vrtogether.eu/wp-content/uploads/2018/08/VR-Together4.jpg 1200w, https://vrtogether.eu/wp-content/uploads/2018/08/VR-Together4-300x225.jpg 300w, https://vrtogether.eu/wp-content/uploads/2018/08/VR-Together4-768x576.jpg 768w, https://vrtogether.eu/wp-content/uploads/2018/08/VR-Together4-1024x768.jpg 1024w, https://vrtogether.eu/wp-content/uploads/2018/08/VR-Together4-700x525.jpg 700w, https://vrtogether.eu/wp-content/uploads/2018/08/VR-Together4-410x308.jpg 410w, https://vrtogether.eu/wp-content/uploads/2018/08/VR-Together4-100x75.jpg 100w, https://vrtogether.eu/wp-content/uploads/2018/08/VR-Together4-275x206.jpg 275w" sizes="(max-width: 377px) 100vw, 377px" /></p>
<p>&nbsp;</p>
<h5>Who we are</h5>
<p><a href="http://www.artanim.ch/" target="_blank" rel="noopener noreferrer">Artanim</a> is a non-profit foundation founded in 2011 and located in Geneva, Switzerland. The foundation carries out research activities in the field of computer graphics according to two strategic axes of research linked to motion capture: virtual reality, mainly the creation and animation of digital avatars and the development of interactive VR and AR applications; and medical research (joint biomechanics, orthopaedics, sports medicine).</p>
<p>Come and follow us in this VR journey with <a href="http://www.motionspell.com/" target="_blank" rel="noopener noreferrer">Motion Spell</a>, <a href="http://www.i2cat.net">i2CAT</a>, <a href="https://www.cwi.nl/" target="_blank" rel="noopener noreferrer">CWI</a>, <a href="https://www.tno.nl/en/" target="_blank" rel="noopener noreferrer">TNO</a>, <a href="https://www.certh.gr/root.en.aspx" target="_blank" rel="noopener noreferrer">CERTH</a>, <a href="https://www.viaccess-orca.com/" target="_blank" rel="noopener noreferrer">Viaccess-Orca</a>, <a href="http://www.entropystudio.net/" target="_blank" rel="noopener noreferrer">Entropy Studio</a><img loading="lazy" class="size-full wp-image-380 alignleft" src="http://vrtogether.eu/wp-content/uploads/2018/02/eu-logo.png" alt="" width="226" height="111" srcset="https://vrtogether.eu/wp-content/uploads/2018/02/eu-logo.png 226w, https://vrtogether.eu/wp-content/uploads/2018/02/eu-logo-100x49.png 100w" sizes="(max-width: 226px) 100vw, 226px" /></p>
<p><em>This project has been funded by the European Commission as part of the H2020 program, under the grant agreement 762111.</em></p>
<p>&nbsp;</p>
<p><em>Text and pictures: </em><em>Henrique Galvan Debarba, Caecilia Charbonnier </em><em>&#8211; <a href="http://www.artanim.ch/" target="_blank" rel="noopener noreferrer">Artanim</a></em></p>
<p>The post <a rel="nofollow" href="https://vrtogether.eu/2018/08/16/your-body-feels-good/">Your body feels good</a> appeared first on <a rel="nofollow" href="https://vrtogether.eu">VRTogether</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>VRTogether has launched a market research study covering VR trends and technologies</title>
		<link>https://vrtogether.eu/2018/08/09/market-study-vr-trends-technologies/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=market-study-vr-trends-technologies</link>
		
		<dc:creator><![CDATA[Viaccess-Orca]]></dc:creator>
		<pubDate>Thu, 09 Aug 2018 12:12:41 +0000</pubDate>
				<category><![CDATA[News]]></category>
		<category><![CDATA[Market]]></category>
		<category><![CDATA[Study]]></category>
		<category><![CDATA[Trends]]></category>
		<category><![CDATA[Virtual Reality]]></category>
		<category><![CDATA[VR]]></category>
		<guid isPermaLink="false">http://vrtogether.eu/?p=678</guid>

					<description><![CDATA[<p>VRTogether’s members are currently analysing the market to provide a wide vision of the current market and the expected evolution of immersive audiovisual products. The objectives of this study are to: Shed light on changing behaviours and associated expectations in audio-visual consumption. Assess the market potential of the solutions developed within VRTogether group both [&#8230;]</p>
<p>The post <a rel="nofollow" href="https://vrtogether.eu/2018/08/09/market-study-vr-trends-technologies/">VRTogether has launched a market research study covering VR trends and technologies</a> appeared first on <a rel="nofollow" href="https://vrtogether.eu">VRTogether</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>VRTogether’s members are currently analysing the market to provide a wide vision of the <strong>current market and the expected evolution</strong> of immersive audiovisual products. The objectives of this study are to:</p>
<ul>
<li>Shed light on changing behaviours and associated expectations in audio-visual consumption.</li>
<li>Assess the market potential of the solutions developed within VRTogether group both regarding enhancing existing formats and creating new types of content.</li>
<li>Identify technological trends, similar potential solutions as well as new partners.</li>
<li>Identify the targets (B2C, B2B, &#8230;) of the solution and how they could successfully be addressed.</li>
</ul>
<p>The scope of the study includes head-mounted displays (HMDs), capture systems, rendering engines and the VR software expected to be used by various industries.</p>
<p>The study provides an estimation of forecasted revenues, an overview of current usages, a VR SWOT (strengths, weaknesses, opportunities and threats) analysis, and the VR applications being used or trialled today in the different industries. Since the emergence of virtual reality content, the user experience has been largely a solitary one. As a consequence, some developers have begun to include<strong> a social dimension in their content</strong> across various industries: games, social networks, sports, music &amp; arts, education, tourism and of course business applications. The VR market remains relatively niche, but continued moderate growth in headset adoption is expected over the next few years as hardware prices drop, technology improves and content becomes more appealing.</p>
<p>Furthermore, the <strong>VR value chain</strong> is presented in the study and the associated technologies are benchmarked to highlight the best suited to the VRTogether project. The analysis will lead to the proposal of a value proposition associated with a best-of-breed ecosystem. The study will also provide the targeted customers (businesses and consumers) and the possible business models.</p>
<p>We will keep you informed of the publication of the document!</p>
<p>&nbsp;</p>
<h5>Who we are</h5>
<p>As a leading global provider of content protection, delivery, and discovery solutions, <a href="https://www.viaccess-orca.com/" target="_blank" rel="noopener noreferrer">Viaccess-Orca</a> is shaping the ultimate content experience. VO contributes to several work packages of the VRTogether project: we focus on platform design and architecture, and we lead Work Package 5 on Innovation, Dissemination and Exploitation. We will also contribute to evaluating the solution against products available on the market.</p>
<p>Come and follow us in this VR journey with <a href="http://www.i2cat.net/en" target="_blank" rel="noopener noreferrer">i2CAT</a>, <a href="https://www.cwi.nl/" target="_blank" rel="noopener noreferrer">CWI</a>, <a href="https://www.tno.nl/en/" target="_blank" rel="noopener noreferrer">TNO</a>, <a href="https://www.certh.gr/root.en.aspx" target="_blank" rel="noopener noreferrer">CERTH</a>, <a href="http://www.artanim.ch/" target="_blank" rel="noopener noreferrer">Artanim</a>, <a href="http://www.motionspell.com" target="_blank" rel="noopener noreferrer">Motion Spell</a>, <a href="http://www.entropystudio.net/" target="_blank" rel="noopener noreferrer">Entropy Studio</a>.</p>
<p><img loading="lazy" class="size-full wp-image-380 alignleft" src="http://vrtogether.eu/wp-content/uploads/2018/02/eu-logo.png" alt="" width="226" height="111" srcset="https://vrtogether.eu/wp-content/uploads/2018/02/eu-logo.png 226w, https://vrtogether.eu/wp-content/uploads/2018/02/eu-logo-100x49.png 100w" sizes="(max-width: 226px) 100vw, 226px" /></p>
<p><em>This project has been funded by the European Commission as part of the H2020 program, under the grant agreement 762111.</em></p>
<p>&nbsp;</p>
<p>The post <a rel="nofollow" href="https://vrtogether.eu/2018/08/09/market-study-vr-trends-technologies/">VRTogether has launched a market research study covering VR trends and technologies</a> appeared first on <a rel="nofollow" href="https://vrtogether.eu">VRTogether</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>VRTogether and the future of Social VR</title>
		<link>https://vrtogether.eu/2018/08/03/vrtogether-and-the-future-of-social-vr/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=vrtogether-and-the-future-of-social-vr</link>
		
		<dc:creator><![CDATA[i2CAT]]></dc:creator>
		<pubDate>Fri, 03 Aug 2018 10:28:00 +0000</pubDate>
				<category><![CDATA[News]]></category>
		<category><![CDATA[Oculus]]></category>
		<category><![CDATA[Oculus Venues]]></category>
		<category><![CDATA[Social VR]]></category>
		<category><![CDATA[Virtual Reality]]></category>
		<category><![CDATA[VR]]></category>
		<category><![CDATA[VR Market]]></category>
		<category><![CDATA[VRTogether]]></category>
		<guid isPermaLink="false">http://vrtogether.eu/?p=669</guid>

					<description><![CDATA[<p>The virtual reality market is gaining more traction by the day: large players in the field are developing innovative solutions to maximise immersion, and much of this momentum is fuelled by the social media giant Facebook, owner of one of the most powerful companies in the field, Oculus. Its Chief Scientist, Michael Abrash, recently published an [&#8230;]</p>
<p>The post <a rel="nofollow" href="https://vrtogether.eu/2018/08/03/vrtogether-and-the-future-of-social-vr/">VRTogether and the future of Social VR</a> appeared first on <a rel="nofollow" href="https://vrtogether.eu">VRTogether</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>The virtual reality market is gaining more traction by the day: large players in the field are developing innovative solutions to maximise immersion, and much of this momentum is fuelled by the social media giant Facebook, owner of one of the most powerful companies in the field, Oculus. Its Chief Scientist, Michael Abrash, recently published an <a href="https://www.oculus.com/blog/vrs-grand-challenge-michael-abrash-on-the-future-of-human-interaction/" target="_blank" rel="noopener noreferrer">interesting blog post</a> describing the biggest challenges for VR in the years to come.</p>
<p>Among the issues analysed, he addresses the topic of inference, giving examples in which our brain’s assumptions are far from reality and showing how VR can use this in its favour to design the next level of immersive experiences. He also covers open issues such as eye tracking, hand tracking, focus in VR displays and, essentially, the importance of coordinating real human movements within the virtual space.</p>
<p>Looking a bit further afield, we found a <a href="https://www.forbes.com/sites/johnkoetsier/2018/04/30/virtual-reality-77-of-vr-users-want-more-social-engagement-67-use-weekly-28-use-daily/#6360193e18fc/" target="_blank" rel="noopener noreferrer">market research study</a> indicating how people intend to use VR technologies. It shows high expected VR usage among users who own the necessary hardware and, most importantly, underlines how much social engagement matters in VR.</p>
<figure id="attachment_675" aria-describedby="caption-attachment-675" style="width: 814px" class="wp-caption aligncenter"><img loading="lazy" class="wp-image-675 size-full" src="http://vrtogether.eu/wp-content/uploads/2018/08/Oculus-venues.jpg" alt="" width="814" height="424" srcset="https://vrtogether.eu/wp-content/uploads/2018/08/Oculus-venues.jpg 814w, https://vrtogether.eu/wp-content/uploads/2018/08/Oculus-venues-300x156.jpg 300w, https://vrtogether.eu/wp-content/uploads/2018/08/Oculus-venues-768x400.jpg 768w, https://vrtogether.eu/wp-content/uploads/2018/08/Oculus-venues-700x365.jpg 700w, https://vrtogether.eu/wp-content/uploads/2018/08/Oculus-venues-410x214.jpg 410w, https://vrtogether.eu/wp-content/uploads/2018/08/Oculus-venues-100x52.jpg 100w, https://vrtogether.eu/wp-content/uploads/2018/08/Oculus-venues-275x143.jpg 275w" sizes="(max-width: 814px) 100vw, 814px" /><figcaption id="caption-attachment-675" class="wp-caption-text">Source: <a href="https://techcrunch.com/2018/05/01/new-oculus-venues-app-organizes-live-vr-events-under-one-roof/">Techcrunch</a></figcaption></figure>
<p>We recently had a great experience trying out the platform Oculus is experimenting with, <a href="https://www.wired.com/story/oculus-venues/" target="_blank" rel="noopener noreferrer">Oculus Venues</a>, which showed <a href="https://www.roadtovr.com/world-cup-2018-oculus-venues/" target="_blank" rel="noopener noreferrer">selected games</a> from the World Cup to viewers based in the USA. After joining the experience and starting the application, we found ourselves represented by an avatar and seated in a virtual section of the football stadium where the game was taking place. Beyond the football game and the score, the most interesting part was the other people around us. It was possible to interact verbally and to use the Rift controllers to move the avatar’s hands. It was striking to see people joining such an experience given that the events aired at 05:00 a.m., a time usually reserved for sleep. Communicating with the other end-users and talking about the social side of this VR experience was very easy, and much of the excitement was shared across the crowd of participants.</p>
<figure id="attachment_676" aria-describedby="caption-attachment-676" style="width: 823px" class="wp-caption aligncenter"><img loading="lazy" class="wp-image-676" src="http://vrtogether.eu/wp-content/uploads/2018/08/VRTogether-Pilot-1-Scenes-designs.jpg" alt="" width="823" height="450" srcset="https://vrtogether.eu/wp-content/uploads/2018/08/VRTogether-Pilot-1-Scenes-designs.jpg 1299w, https://vrtogether.eu/wp-content/uploads/2018/08/VRTogether-Pilot-1-Scenes-designs-300x164.jpg 300w, https://vrtogether.eu/wp-content/uploads/2018/08/VRTogether-Pilot-1-Scenes-designs-768x420.jpg 768w, https://vrtogether.eu/wp-content/uploads/2018/08/VRTogether-Pilot-1-Scenes-designs-1024x560.jpg 1024w, https://vrtogether.eu/wp-content/uploads/2018/08/VRTogether-Pilot-1-Scenes-designs-700x383.jpg 700w, https://vrtogether.eu/wp-content/uploads/2018/08/VRTogether-Pilot-1-Scenes-designs-410x224.jpg 410w, https://vrtogether.eu/wp-content/uploads/2018/08/VRTogether-Pilot-1-Scenes-designs-100x55.jpg 100w, https://vrtogether.eu/wp-content/uploads/2018/08/VRTogether-Pilot-1-Scenes-designs-275x150.jpg 275w" sizes="(max-width: 823px) 100vw, 823px" /><figcaption id="caption-attachment-676" class="wp-caption-text">Source: VRTogether Pilot 1 Scene designs</figcaption></figure>
<p>VR-Together addresses this kind of social experience at its core by assembling an innovative end-to-end pipeline for the delivery of photorealistic immersive content to multiple users, going beyond the state-of-the-art experiences mentioned above. In Pilot 1, end-users will be represented by their real-life point clouds, time-varying meshes or 2D images, rather than the avatars used in Oculus Venues. End-users will be able to interact and communicate with each other in real time, with a special functionality for HMD removal, within a virtual interrogation room (3D scene), while also being able to interact with the environment itself (in Pilot 3). VR-Together will also conduct experiments to evaluate the level of immersion, the feeling of togetherness and the various other aspects that define a “truly” social VR experience. An overview of the project, its objectives and envisioned scenarios can be found in the publication presented during the VR workshop at ACM TVX 2018<a href="#_ftn1" name="_ftnref6"><sup>[1]</sup></a>. Below is a list of the metrics VR-Together will use to evaluate the VR experience in development.</p>
<table width="900">
<tbody>
<tr>
<td width="305"><strong>Metrics</strong></td>
<td width="596"><strong>Description</strong></td>
</tr>
<tr>
<td width="305"><em>Technical performance (objective)</em></td>
<td width="596"><em>delays, bandwidth, media sync, traffic overhead, CPU load, media quality, etc.</em></td>
</tr>
<tr>
<td width="305"><em>User experience (objective)</em></td>
<td width="596"><em>gaze, head direction, physiological signals, speech activity, motion data, interaction with the environment and between users, etc.</em></td>
</tr>
<tr>
<td width="305"><em>User experience (subjective)</em></td>
<td width="596"><em>questionnaires, interviews, observations, etc.</em></td>
</tr>
<tr>
<td width="305"><em>Added value (objective/subjective)</em></td>
<td width="596"><em>questionnaires, interviews, etc.</em></td>
</tr>
</tbody>
</table>
<p>We are excited and proud to present our Pilot 1 experience in September 2018. Make sure to follow our Twitter account to find out about our next demonstration!</p>
<p>Stay tuned and see you soon!</p>
<p><a href="#_ftnref1" name="_ftn6"><sup>[1]</sup></a> M. Montagud, J. A. Núñez, T. Karavellas, I. Jurado, S. Fernández, “Convergence between TV and VR: Enabling Truly Immersive and Social Experiences”, Workshop on Virtual Reality, co-located with ACM TVX 2018, Seoul (South Korea), June 2018</p>
<p>&nbsp;</p>
<h5>Who we are</h5>
<p>The <a href="http://www.i2cat.net/en" target="_blank" rel="noopener noreferrer">i2CAT Foundation</a> is a non-profit research and technology centre that promotes R&amp;D activities in the fields of the Internet and advanced digital technologies. It has pioneered an innovation model based on collaboration between companies, public administrations, academia and users. i2CAT coordinates the VRTogether project.</p>
<p>Come and follow us in this VR journey with <a href="http://www.motionspell.com/" target="_blank" rel="noopener noreferrer">Motion Spell</a>, <a href="https://www.cwi.nl/" target="_blank" rel="noopener noreferrer">CWI</a>, <a href="https://www.tno.nl/en/" target="_blank" rel="noopener noreferrer">TNO</a>, <a href="https://www.certh.gr/root.en.aspx" target="_blank" rel="noopener noreferrer">CERTH</a>, <a href="http://www.artanim.ch/" target="_blank" rel="noopener noreferrer">Artanim</a>, <a href="https://www.viaccess-orca.com/" target="_blank" rel="noopener noreferrer">Viaccess-Orca</a>, <a href="http://www.entropystudio.net/" target="_blank" rel="noopener noreferrer">Entropy Studio</a><img loading="lazy" class="size-full wp-image-380 alignleft" src="http://vrtogether.eu/wp-content/uploads/2018/02/eu-logo.png" alt="" width="226" height="111" srcset="https://vrtogether.eu/wp-content/uploads/2018/02/eu-logo.png 226w, https://vrtogether.eu/wp-content/uploads/2018/02/eu-logo-100x49.png 100w" sizes="(max-width: 226px) 100vw, 226px" /></p>
<p><em>This project has been funded by the European Commission as part of the H2020 program, under the grant agreement 762111.</em></p>
<p>&nbsp;</p>
<p>&nbsp;</p>
<p><em>Text: Themistoklis Karavellas &#8211; <a href="http://www.i2cat.net/en">i2CAT</a></em></p>
<p><a href="#_ftnref1" name="_ftn1"></a></p>
<p>The post <a rel="nofollow" href="https://vrtogether.eu/2018/08/03/vrtogether-and-the-future-of-social-vr/">VRTogether and the future of Social VR</a> appeared first on <a rel="nofollow" href="https://vrtogether.eu">VRTogether</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>From Amsterdam to Seoul: disseminating the project around the globe</title>
		<link>https://vrtogether.eu/2018/05/08/amsterdam-seoul-disseminating-project-around-globe/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=amsterdam-seoul-disseminating-project-around-globe</link>
		
		<dc:creator><![CDATA[VRTogether]]></dc:creator>
		<pubDate>Tue, 08 May 2018 12:50:22 +0000</pubDate>
				<category><![CDATA[Events]]></category>
		<category><![CDATA[ACM TVX]]></category>
		<category><![CDATA[MMSys]]></category>
		<category><![CDATA[papers]]></category>
		<category><![CDATA[VR]]></category>
		<guid isPermaLink="false">http://vrtogether.eu/?p=527</guid>

					<description><![CDATA[<p>Two papers from the VRTogether project have been accepted: one is for a demo at the ACM Multimedia Systems Conference 2018 (MMSys), which will be held in Amsterdam from 12 to 16 June; the other is a WiP Paper for the ACM International Conference on Interactive Experiences for Television and Online Video (ACM TVX2018), which [&#8230;]</p>
<p>The post <a rel="nofollow" href="https://vrtogether.eu/2018/05/08/amsterdam-seoul-disseminating-project-around-globe/">From Amsterdam to Seoul: disseminating the project around the globe</a> appeared first on <a rel="nofollow" href="https://vrtogether.eu">VRTogether</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Two papers from the VRTogether project have been accepted: one is for a demo at the ACM Multimedia Systems Conference 2018 (<a href="http://www.mmsys2018.org/"><strong>MMSys</strong></a>), which will be held in Amsterdam from 12 to 16 June; the other is a WiP Paper for the ACM International Conference on Interactive Experiences for Television and Online Video (<a href="https://tvx.acm.org/2018/"><strong>ACM TVX2018</strong></a>), which will take place on 25-28 June in Seoul.</p>
<p>The first paper is titled “<strong>Virtual Reality Conferencing: Multi-user immersive VR experiences on the web</strong>”. The VRTogether project tackles the apparent discrepancy between the physical isolation of wearing a head-mounted display and the human need for sharing experiences. The project has developed a VR framework that allows social VR experiences to be created and consumed with off-the-shelf hardware. The demo focused on a communication use case in which up to three people, seated around a round table, could communicate within VR. To achieve this, users were recorded and, before transmission, their background was replaced with a uniform green colour; this colour was then removed, leaving a transparent image showing just the user. In this experience, the three people seated around the table share the same view (the other users appear on the opposite side of the table), and a video plays on the table top. The project team held a one-day trial, collecting feedback from 54 participants who communicated and watched a video with the system.</p>
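<p>The green-screen step described in the demo is a standard chroma-key operation. The following is a minimal illustrative sketch of the idea (not the project’s actual pipeline code), assuming NumPy and 8-bit RGB frames: pixels where green clearly dominates red and blue are made transparent.</p>

```python
import numpy as np

def chroma_key(rgb: np.ndarray, tol: int = 60) -> np.ndarray:
    """Convert a green-screen RGB frame (H, W, 3, uint8) to RGBA,
    making pixels close to pure green fully transparent."""
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    # Treat a pixel as backdrop when green exceeds both other channels by tol.
    mask = (g - np.maximum(r, b)) > tol
    # Append a fully opaque alpha channel, then clear it where the backdrop was.
    alpha = np.full(rgb.shape[:2], 255, dtype=np.uint8)
    rgba = np.dstack([rgb, alpha])
    rgba[mask, 3] = 0
    return rgba

# A 1x2 test frame: one green-screen pixel, one skin-tone pixel.
frame = np.array([[[0, 255, 0], [200, 150, 120]]], dtype=np.uint8)
out = chroma_key(frame)
print(out[0, 0, 3], out[0, 1, 3])  # 0 255
```

In a real-time pipeline such as the one the demo describes, this keying would run per frame before encoding, so only the transparent-background user image is transmitted and composited into the shared scene.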
<p>The second paper is entitled “<strong>Experiencing Virtual Reality Together: Social VR Use Case Study</strong>”. In Social Virtual Reality, users are typically represented as artificial avatars. While this may be beneficial for some use cases, it may not be appropriate for many communication settings, such as business meetings or sharing experiences with family or friends. It is still unclear which use cases suit the different ways users can be represented, so more research is needed to better understand Social VR requirements. As a first step towards closing this gap, the VRTogether team conducted a study in which participants tried a photorealistic Social VR experience followed by a questionnaire and an informal discussion. In the VR environment, users sit beside each other on a couch in a 360-degree 2D VR environment and watch a 2D video, while being able to hear and see each other as photorealistic video streams. The main contribution of this paper is its study of Social VR use cases.</p>
<p>Both MMSys and ACM TVX2018 are significant events on the media research calendar. They represent a great opportunity to disseminate VRTogether’s objectives and advances and to get to know other ongoing projects in this field.</p>
<figure id="attachment_528" aria-describedby="caption-attachment-528" style="width: 1024px" class="wp-caption aligncenter"><img loading="lazy" class="wp-image-528 size-large" src="http://vrtogether.eu/wp-content/uploads/2018/05/Papers-MMSys-and-TVX2018-1024x344.png" alt="" width="1024" height="344" srcset="https://vrtogether.eu/wp-content/uploads/2018/05/Papers-MMSys-and-TVX2018-1024x344.png 1024w, https://vrtogether.eu/wp-content/uploads/2018/05/Papers-MMSys-and-TVX2018-300x101.png 300w, https://vrtogether.eu/wp-content/uploads/2018/05/Papers-MMSys-and-TVX2018-768x258.png 768w, https://vrtogether.eu/wp-content/uploads/2018/05/Papers-MMSys-and-TVX2018-700x235.png 700w, https://vrtogether.eu/wp-content/uploads/2018/05/Papers-MMSys-and-TVX2018-410x138.png 410w, https://vrtogether.eu/wp-content/uploads/2018/05/Papers-MMSys-and-TVX2018-100x34.png 100w, https://vrtogether.eu/wp-content/uploads/2018/05/Papers-MMSys-and-TVX2018-275x92.png 275w, https://vrtogether.eu/wp-content/uploads/2018/05/Papers-MMSys-and-TVX2018.png 1936w" sizes="(max-width: 1024px) 100vw, 1024px" /><figcaption id="caption-attachment-528" class="wp-caption-text">Left: VR User View, from the paper “Virtual Reality Conferencing: Multi-user immersive VR experiences on the web”. Right: Example view inside VR, showing the other user and a movie projection space, from the paper “Experiencing Virtual Reality Together: Social VR Use Case Study”</figcaption></figure>
<p>The post <a rel="nofollow" href="https://vrtogether.eu/2018/05/08/amsterdam-seoul-disseminating-project-around-globe/">From Amsterdam to Seoul: disseminating the project around the globe</a> appeared first on <a rel="nofollow" href="https://vrtogether.eu">VRTogether</a>.</p>
]]></content:encoded>
					
		
		
			</item>
	</channel>
</rss>
