<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>CWI &#8211; VRTogether</title>
	<atom:link href="https://vrtogether.eu/author/cwi/feed/" rel="self" type="application/rss+xml" />
	<link>https://vrtogether.eu</link>
	<description>An end-to-end system for the production and delivery of photorealistic and social virtual reality experiences</description>
	<lastBuildDate>Tue, 15 Sep 2020 13:12:46 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	

<image>
	<url>https://vrtogether.eu/wp-content/uploads/2018/04/cropped-iconavr-32x32.png</url>
	<title>CWI &#8211; VRTogether</title>
	<link>https://vrtogether.eu</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>Mentoring Students in Social VR (Part II)</title>
		<link>https://vrtogether.eu/2020/09/15/mentoring-students-in-social-vr-part-ii/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=mentoring-students-in-social-vr-part-ii</link>
		
		<dc:creator><![CDATA[CWI]]></dc:creator>
		<pubDate>Tue, 15 Sep 2020 08:59:37 +0000</pubDate>
				<category><![CDATA[News]]></category>
		<guid isPermaLink="false">https://vrtogether.eu/?p=2514</guid>

					<description><![CDATA[<p>The post <a rel="nofollow" href="https://vrtogether.eu/2020/09/15/mentoring-students-in-social-vr-part-ii/">Mentoring Students in Social VR (Part II)</a> appeared first on <a rel="nofollow" href="https://vrtogether.eu">VRTogether</a>.</p>
]]></description>
										<content:encoded><![CDATA[<div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-12"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p><em>For more details, see previous post “<a href="https://vrtogether.eu/2019/09/09/mentoring-students-in-social-vr/">Mentoring Students in Social VR</a>”</em></p>
<p>The VRTogether project conducts ground-breaking research on Social VR, which invites multiple users to share a virtual space and interact with each other’s virtual representations. Apart from the main research activities, the project mentors master’s students in this area, forming <strong>a new generation of graduates that can, in the future, reshape the media landscape</strong>. <a href="https://vrtogether.eu/consortium/cwi/">CWI</a> has been actively offering master’s theses around the core topics of the project since 2018.</p>
<p>In 2018, two students graduated at TU Delft with the theses “Multi-Camera Registration for VR: A Flexible, Feature-based Approach” (Qian Qinzhuan) and “User Experience in Social Virtual Reality” (Yiping Kong). The latter resulted in a top-quality paper, “Measuring and understanding photo sharing experiences in social Virtual Reality”, presented at ACM CHI 2019. In 2019, another two students graduated: Guo Chen (TU Delft) with the thesis “Designing and Evaluating a Social VR Clinic for Knee Replacement Surgery”, and Jelmer Mulder (VU Amsterdam) with the thesis “Temporal Interpolation of Dynamic Point Clouds using Convolutional Neural Networks”. Guo’s work was published as an ACM CHI 2020 late-breaking work, and an improved demo of it was presented at ACM IMX 2020, where it received the <a href="https://vrtogether.eu/2020/06/23/exploitation-activities-from-vrtogether-awarded-at-acm-imx-2020/">best demo award</a>. Jelmer’s work was published as an invited paper at the 2019 IEEE Conference on Artificial Intelligence and Virtual Reality (AIVR).</p>
<p>In 2020, again two students graduated: <strong>Yanni Mei</strong> at <strong>TU Delft</strong> with the thesis “<strong>Cake VR: Design a Social VR Tool for Remote Co-Design of Customized Cakes</strong>” and <strong>Ignacio Reimat</strong> at <strong>Universitat Politecnica de Catalunya</strong> with the thesis “<strong>Temporal Interpolation of Human Point Clouds Using Neural Networks and Body Part Segmentation</strong>”. Both students graduated with a score of 9 (out of 10). We are currently working on publishing the students’ work at prestigious conferences.</p>
<p>Yanni’s thesis explored novel use cases in the pastry design domain for Social VR, developing and evaluating a prototype of a social VR tool for clients to co-design cakes with pastry chefs. Exploring such use cases can help better understanding the exploitation opportunities for social VR. Ignacio’s thesis investigated how body part segmentation can aid in performing temporal interpolation for dynamic point clouds. Body part segmentation can be used to improve the accuracy of the neural network in dealing with parts moving at different speed. Moreover, the same principle can be applied in improving compression efficiency for point cloud transmission, by adopting a different level of detail based on salient parts.</p>
<p>Yanni Mei successfully defended her master’s thesis on August 20<sup>th</sup>, 2020. The motivation behind this project was to support clients with limited design skills and pastry knowledge in communicating with pastry chefs and co-designing their dream cakes for special celebrations. The Cake VR tool provides 3D visualization and facilitates intuitive gestural manipulation of the size and decoration of the virtual cakes, and can be used for both on-site and remote communication (Figure 1 and Figure 2). The project started with a series of ethnographic studies with five pastry chefs and four clients who had experience in purchasing customized cakes. Then, based on the requirements gathered from the ethnographic studies, we designed and implemented a social VR prototype, which allows two users to collaboratively design cakes in a shared virtual space while wearing head-mounted displays (HMDs). We also performed a user validation test with six clients and three pastry chefs to see to what extent the Cake VR prototype meets the requirements. The prototype successfully meets 7 of the 10 requirements, and clients are able to design cakes and communicate the size, decoration and celebration theme of the cakes with pastry chefs.</p>
<p><figure id="attachment_2517" aria-describedby="caption-attachment-2517" style="width: 599px" class="wp-caption aligncenter"><a href="https://vrtogether.eu/wp-content/uploads/2020/09/Fig1.png"><img loading="lazy" class="wp-image-2517" src="https://vrtogether.eu/wp-content/uploads/2020/09/Fig1.png" alt="" width="599" height="389" srcset="https://vrtogether.eu/wp-content/uploads/2020/09/Fig1.png 1800w, https://vrtogether.eu/wp-content/uploads/2020/09/Fig1-300x195.png 300w, https://vrtogether.eu/wp-content/uploads/2020/09/Fig1-1024x664.png 1024w, https://vrtogether.eu/wp-content/uploads/2020/09/Fig1-768x498.png 768w, https://vrtogether.eu/wp-content/uploads/2020/09/Fig1-1536x997.png 1536w, https://vrtogether.eu/wp-content/uploads/2020/09/Fig1-700x454.png 700w, https://vrtogether.eu/wp-content/uploads/2020/09/Fig1-410x266.png 410w, https://vrtogether.eu/wp-content/uploads/2020/09/Fig1-100x65.png 100w, https://vrtogether.eu/wp-content/uploads/2020/09/Fig1-275x178.png 275w" sizes="(max-width: 599px) 100vw, 599px" /></a><figcaption id="caption-attachment-2517" class="wp-caption-text">Figure 1. The system setup of the Cake VR prototype</figcaption></figure><br />
<figure id="attachment_2518" aria-describedby="caption-attachment-2518" style="width: 600px" class="wp-caption aligncenter"><a href="https://vrtogether.eu/wp-content/uploads/2020/09/Fig2.png"><img loading="lazy" class="wp-image-2518" src="https://vrtogether.eu/wp-content/uploads/2020/09/Fig2.png" alt="" width="600" height="336" srcset="https://vrtogether.eu/wp-content/uploads/2020/09/Fig2.png 2643w, https://vrtogether.eu/wp-content/uploads/2020/09/Fig2-300x168.png 300w, https://vrtogether.eu/wp-content/uploads/2020/09/Fig2-1024x574.png 1024w, https://vrtogether.eu/wp-content/uploads/2020/09/Fig2-768x430.png 768w, https://vrtogether.eu/wp-content/uploads/2020/09/Fig2-1536x861.png 1536w, https://vrtogether.eu/wp-content/uploads/2020/09/Fig2-2048x1148.png 2048w, https://vrtogether.eu/wp-content/uploads/2020/09/Fig2-700x392.png 700w, https://vrtogether.eu/wp-content/uploads/2020/09/Fig2-410x230.png 410w, https://vrtogether.eu/wp-content/uploads/2020/09/Fig2-100x56.png 100w, https://vrtogether.eu/wp-content/uploads/2020/09/Fig2-275x154.png 275w" sizes="(max-width: 600px) 100vw, 600px" /></a><figcaption id="caption-attachment-2518" class="wp-caption-text">Figure 2. The storyline of the Cake VR prototype: (a) both users (i.e., chef and client) enter the virtual café); (b) the client and the chef upload the reference cake pictures to the system; (c) the client and the chef co-design the cake, making decisions about the size, texture, color and decoration of the cake; and (d) the users switch the virtual café to the wedding location, to see if the cake design fits the wedding theme and the wedding environment.</figcaption></figure></p>
<p>Ignacio Reimat successfully defended his thesis on June 30<sup>th</sup> 2020. The motivation behind the project was to reduce the bandwidth requirements for the transmission of dynamic point clouds. Reducing the temporal resolution of the sequence to be transmitted results in notable bitrate gains; however, it comes at the expense of perceived quality. Performing temporal interpolation at the receiver side makes it possible to cope with bandwidth constraints while maintaining the appearance of smooth motion. In his thesis, Ignacio provided two main contributions: a) the creation of point cloud content with annotated body parts, derived from a publicly available 3D mesh dataset, and b) an architecture capable of performing temporal interpolation of point cloud content representing body parts. Results show that applying body part segmentation and predicting the interpolation of individual body parts can improve the accuracy of point cloud temporal interpolation systems.</p>
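<p><em>To make the idea concrete, below is a minimal, hypothetical Python sketch of per-body-part temporal interpolation (a simplified illustration, not the architecture from the thesis): each labelled part of two captured frames is matched and blended independently, with a plain nearest-neighbour match standing in for the learned prediction. All names and the matching strategy are illustrative assumptions.</em></p>
<pre><code># Hypothetical sketch: interpolate an in-between point cloud frame per body part,
# so that parts moving at different speeds are handled independently.
import numpy as np
from scipy.spatial import cKDTree

def interpolate_part(part_a, part_b, alpha=0.5):
    """Match each point of part A to its nearest point in part B and blend positions."""
    _, idx = cKDTree(part_b).query(part_a)
    return (1.0 - alpha) * part_a + alpha * part_b[idx]

def interpolate_frame(points_a, labels_a, points_b, labels_b, alpha=0.5):
    """Build the in-between frame by interpolating each labelled body part separately."""
    parts = []
    for label in np.unique(labels_a):
        a = points_a[labels_a == label]
        b = points_b[labels_b == label]
        if len(a) and len(b):
            parts.append(interpolate_part(a, b, alpha))
    return np.concatenate(parts, axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pts_a = rng.normal(size=(1000, 3)).astype(np.float32)    # frame A
    labels = rng.integers(0, 4, size=1000)                    # 4 hypothetical body parts
    offsets = np.array([[0.0, 0.1, 0.0], [0.3, 0.0, 0.0],
                        [0.0, 0.0, 0.2], [0.05, 0.05, 0.0]], dtype=np.float32)
    pts_b = pts_a + offsets[labels]                           # each part moves at its own speed
    print(interpolate_frame(pts_a, labels, pts_b, labels).shape)  # (1000, 3)
</code></pre>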
<p><figure id="attachment_2515" aria-describedby="caption-attachment-2515" style="width: 600px" class="wp-caption aligncenter"><a href="https://vrtogether.eu/wp-content/uploads/2020/09/Fig3.png"><img loading="lazy" class="wp-image-2515" src="https://vrtogether.eu/wp-content/uploads/2020/09/Fig3.png" alt="" width="600" height="496" srcset="https://vrtogether.eu/wp-content/uploads/2020/09/Fig3.png 984w, https://vrtogether.eu/wp-content/uploads/2020/09/Fig3-300x248.png 300w, https://vrtogether.eu/wp-content/uploads/2020/09/Fig3-768x635.png 768w, https://vrtogether.eu/wp-content/uploads/2020/09/Fig3-700x579.png 700w, https://vrtogether.eu/wp-content/uploads/2020/09/Fig3-410x339.png 410w, https://vrtogether.eu/wp-content/uploads/2020/09/Fig3-100x83.png 100w, https://vrtogether.eu/wp-content/uploads/2020/09/Fig3-275x227.png 275w" sizes="(max-width: 600px) 100vw, 600px" /></a><figcaption id="caption-attachment-2515" class="wp-caption-text">Figure 3. Visual comparison of the prediction of the chest, using network trained on full body (left), and network trained on chest part (middle), with respect to the ground truth (right).</figcaption></figure><br />
<figure id="attachment_2516" aria-describedby="caption-attachment-2516" style="width: 600px" class="wp-caption aligncenter"><a href="https://vrtogether.eu/wp-content/uploads/2020/09/Fig4.png"><img loading="lazy" class="wp-image-2516" src="https://vrtogether.eu/wp-content/uploads/2020/09/Fig4.png" alt="" width="600" height="421" srcset="https://vrtogether.eu/wp-content/uploads/2020/09/Fig4.png 1121w, https://vrtogether.eu/wp-content/uploads/2020/09/Fig4-300x211.png 300w, https://vrtogether.eu/wp-content/uploads/2020/09/Fig4-1024x719.png 1024w, https://vrtogether.eu/wp-content/uploads/2020/09/Fig4-768x539.png 768w, https://vrtogether.eu/wp-content/uploads/2020/09/Fig4-700x491.png 700w, https://vrtogether.eu/wp-content/uploads/2020/09/Fig4-410x288.png 410w, https://vrtogether.eu/wp-content/uploads/2020/09/Fig4-100x70.png 100w, https://vrtogether.eu/wp-content/uploads/2020/09/Fig4-275x193.png 275w" sizes="(max-width: 600px) 100vw, 600px" /></a><figcaption id="caption-attachment-2516" class="wp-caption-text">Figure 4. Visual comparison of the prediction of the head, using network trained on full body (left), and network trained on head part (middle), with respect to the ground truth (right).</figcaption></figure></p>
<hr />
<p>Yanni Mei, Design a SocialVR Tool for the Remote Co-Design of Customized Cakes (2020, TU Delft): <a href="https://repository.tudelft.nl/islandora/object/uuid%3A78a1147b-e97b-418f-a5e6-3ce944df4f49" target="_blank" rel="noopener noreferrer">https://repository.tudelft.nl/islandora/object/uuid%3A78a1147b-e97b-418f-a5e6-3ce944df4f49</a></p>
<p>Nacho Reimat, Temporal Interpolation of Human Point Clouds Using Neural Networks and Body Part Segmentation (2020, Universitat Politecnica de Catalunya): <a href="https://www.dis.cwi.nl/downloads/Masters-2020-Nacho.pdf" target="_blank" rel="noopener noreferrer">https://www.dis.cwi.nl/downloads/Masters-2020-Nacho.pdf</a></p>

		</div>
	</div>
</div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-12"><div class="vc_column-inner"><div class="wpb_wrapper"><div class="vc_row wpb_row vc_inner vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-12"><div class="vc_column-inner"><div class="wpb_wrapper"><div class="vc_separator wpb_content_element vc_separator_align_center vc_sep_width_100 vc_sep_pos_align_center vc_separator_no_text vc_sep_color_grey" ><span class="vc_sep_holder vc_sep_holder_l"><span  class="vc_sep_line"></span></span><span class="vc_sep_holder vc_sep_holder_r"><span  class="vc_sep_line"></span></span>
</div></div></div></div></div>
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p>Author: <a href="https://vrtogether.eu/consortium/cwi/">CWI</a></p>
<p>Come and follow us in this VR journey with <a href="https://vrtogether.eu/consortium/i2cat/">i2CAT</a>, <a href="https://vrtogether.eu/consortium/cwi/">CWI</a>, <a href="https://vrtogether.eu/consortium/tno/">TNO</a>, <a href="https://vrtogether.eu/consortium/certh/">CERTH</a>, <a href="https://vrtogether.eu/consortium/artanim/">Artanim</a>, <a href="https://vrtogether.eu/consortium/viaccess-orca/">Viaccess-Orca</a>, <a href="https://vrtogether.eu/consortium/the_mo/">TheMo</a> and <a href="https://vrtogether.eu/consortium/motion-spell/">Motion Spell</a>.</p>

		</div>
	</div>
</div></div></div></div><section class="vc_section"><div class="vc_row wpb_row vc_row-fluid vc_column-gap-10"><div class="wpb_column vc_column_container vc_col-sm-12"><div class="vc_column-inner"><div class="wpb_wrapper"><div class="vc_row wpb_row vc_inner vc_row-fluid vc_custom_1547799200116 vc_row-has-fill vc_row-o-content-middle vc_row-flex"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_center">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="75" height="50" src="https://vrtogether.eu/wp-content/uploads/2019/01/EU_flag_75px.png" class="vc_single_image-img attachment-thumbnail" alt="" loading="lazy" /></div>
		</figure>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p>This project has been funded by the European Commission as part of the H2020 program, under the grant agreement 762111.</p>

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div></div></div></div></div></section>
<p>The post <a rel="nofollow" href="https://vrtogether.eu/2020/09/15/mentoring-students-in-social-vr-part-ii/">Mentoring Students in Social VR (Part II)</a> appeared first on <a rel="nofollow" href="https://vrtogether.eu">VRTogether</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Exploitation activities from VRTogether awarded at ACM IMX 2020</title>
		<link>https://vrtogether.eu/2020/06/23/exploitation-activities-from-vrtogether-awarded-at-acm-imx-2020/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=exploitation-activities-from-vrtogether-awarded-at-acm-imx-2020</link>
		
		<dc:creator><![CDATA[CWI]]></dc:creator>
		<pubDate>Tue, 23 Jun 2020 07:20:16 +0000</pubDate>
				<category><![CDATA[Events]]></category>
		<category><![CDATA[News]]></category>
		<guid isPermaLink="false">https://vrtogether.eu/?p=2315</guid>

					<description><![CDATA[<p>The post <a rel="nofollow" href="https://vrtogether.eu/2020/06/23/exploitation-activities-from-vrtogether-awarded-at-acm-imx-2020/">Exploitation activities from VRTogether awarded at ACM IMX 2020</a> appeared first on <a rel="nofollow" href="https://vrtogether.eu">VRTogether</a>.</p>
]]></description>
										<content:encoded><![CDATA[<div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-12"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p>VRTogether project partner <a href="https://vrtogether.eu/consortium/cwi/"><strong>CWI</strong></a> participated in the <strong>2020 ACM International Conference on Interactive Media Experiences</strong> (<a href="https://imx.acm.org/2020/" target="_blank" rel="noopener noreferrer">ACM IMX 2020</a>), held virtually from June 17-19, 2020. On the first day of the conference, CWI’s team presented the social VR clinic, one of the<strong> VRTogether exploitation activities targeting the healthcare domain</strong>. The demo is entitled “<strong><a href="https://figshare.com/articles/Demo1_-_Tong_Xue_Jie_Li_Guo_Chen_Pablo_Cesar_et_al_pdf/12479204/2" target="_blank" rel="noopener noreferrer">A Social VR Clinic for Knee Arthritis Patients with Haptics</a></strong>” and was presented using a social virtual reality (VR) platform called Mozilla Hubs. This demo is co-authored by Tong Xue (CWI, BIT), Jie Li (CWI), Guo Chen (IBM Research, Beijing) and Pablo Cesar (CWI). The demo presentation was held in a virtual dome surrounded by virtual Barcelona cultural heritage.</p>
<p>The demo showcases <strong>a social VR clinic that allows patients to consult a nurse represented as a virtual avatar</strong>. It offers a &#8220;walk-in&#8221; virtual surgery room, enables patients to interact with animated virtual 3D artifacts, and trains the patient to use an injection tool while wearing a pair of mechanical VR gloves that provide haptic feedback (in collaboration with the Dutch SME <a href="https://www.senseglove.com" target="_blank" rel="noopener noreferrer">SenseGlove</a>). The demo shows the potential of social VR as a new tool to help patients receive remote personalized medical care. The virtual demo presentation showed a video of the prototype and two posters explaining the research process. The whole session was interactive and immersive, with many questions raised by the audience. The conference organizing committee and the audience found the demo well executed and gave positive feedback on the planned future work.</p>
<p><strong>This demo was eventually awarded the Best Demo award at IMX 2020</strong>. IMX is the leading international conference for presentation and discussion of research into interactive media experiences. The conference brought together international researchers and practitioners from a wide range of disciplines, ranging from human-computer interaction, multimedia engineering and design to media studies, media psychology and sociology.</p>

		</div>
	</div>
<div class="vc_row wpb_row vc_inner vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_center">
		
		<figure class="wpb_wrapper vc_figure">
			<a href="https://vrtogether.eu/wp-content/uploads/2020/06/image3-e1592896726566-1024x571.png" target="_self" class="vc_single_image-wrapper   vc_box_border_grey"><img width="1999" height="1114" src="https://vrtogether.eu/wp-content/uploads/2020/06/image3-e1592896726566.png" class="vc_single_image-img attachment-full" alt="" loading="lazy" srcset="https://vrtogether.eu/wp-content/uploads/2020/06/image3-e1592896726566.png 1999w, https://vrtogether.eu/wp-content/uploads/2020/06/image3-e1592896726566-300x167.png 300w, https://vrtogether.eu/wp-content/uploads/2020/06/image3-e1592896726566-1024x571.png 1024w, https://vrtogether.eu/wp-content/uploads/2020/06/image3-e1592896726566-768x428.png 768w, https://vrtogether.eu/wp-content/uploads/2020/06/image3-e1592896726566-1536x856.png 1536w, https://vrtogether.eu/wp-content/uploads/2020/06/image3-e1592896726566-700x390.png 700w, https://vrtogether.eu/wp-content/uploads/2020/06/image3-e1592896726566-410x228.png 410w, https://vrtogether.eu/wp-content/uploads/2020/06/image3-e1592896726566-100x56.png 100w, https://vrtogether.eu/wp-content/uploads/2020/06/image3-e1592896726566-275x153.png 275w" sizes="(max-width: 1999px) 100vw, 1999px" /></a><figcaption class="vc_figure-caption">Figure 2. The best demo award certificate</figcaption>
		</figure>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_video_widget wpb_content_element vc_clearfix   vc_video-aspect-ratio-169 vc_video-el-width-100 vc_video-align-left" >
		<div class="wpb_wrapper">
			
			<div class="wpb_video_wrapper"><iframe title="IMX2020 (demo): A Social VR Clinic for Knee Arthritis Patients with Haptics" width="1170" height="658" src="https://www.youtube.com/embed/c89E98SQRqk?feature=oembed" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe></div>
		</div>
	</div>
</div></div></div></div>
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
<p>The social VR clinic demo is one of the VRTogether project exploitation activities explored by CWI. Apart from personalized healthcare, where we are also exploring how 3D medical imaging can help in creating photorealistic models for the remote treatment of patients, other domains include:</p>
<ul>
<li><strong>Creative industries</strong>: interactive immersive museum experiences in social VR for both remote and on-site visitors, together with the <a href="https://www.beeldengeluid.nl/en" target="_blank" rel="noopener noreferrer">Netherlands Institute for Sound and Vision</a></li>
<li><strong>Retail sector</strong>: collaborative design and prototyping in social VR for the retail sector, like bakeries and other food services</li>
</ul>
<p>We will keep you updated with the future development of the social VR use cases.</p>

		</div>
	</div>
</div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-12"><div class="vc_column-inner"><div class="wpb_wrapper"><div class="vc_row wpb_row vc_inner vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-12"><div class="vc_column-inner"><div class="wpb_wrapper"><div class="vc_separator wpb_content_element vc_separator_align_center vc_sep_width_100 vc_sep_pos_align_center vc_separator_no_text vc_sep_color_grey" ><span class="vc_sep_holder vc_sep_holder_l"><span  class="vc_sep_line"></span></span><span class="vc_sep_holder vc_sep_holder_r"><span  class="vc_sep_line"></span></span>
</div></div></div></div></div>
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p>Come and follow us in this VR journey with <a href="https://vrtogether.eu/consortium/i2cat/">i2CAT</a>, <a href="https://vrtogether.eu/consortium/cwi/">CWI</a>, <a href="https://vrtogether.eu/consortium/tno/">TNO</a>, <a href="https://vrtogether.eu/consortium/certh/">CERTH</a>, <a href="https://vrtogether.eu/consortium/artanim/">Artanim</a>, <a href="https://vrtogether.eu/consortium/viaccess-orca/">Viaccess-Orca</a>, <a href="https://vrtogether.eu/consortium/the_mo/">TheMo</a> and <a href="https://vrtogether.eu/consortium/motion-spell/">Motion Spell</a>.</p>

		</div>
	</div>
</div></div></div></div><section class="vc_section"><div class="vc_row wpb_row vc_row-fluid vc_column-gap-10"><div class="wpb_column vc_column_container vc_col-sm-12"><div class="vc_column-inner"><div class="wpb_wrapper"><div class="vc_row wpb_row vc_inner vc_row-fluid vc_custom_1547799200116 vc_row-has-fill vc_row-o-content-middle vc_row-flex"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_center">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="75" height="50" src="https://vrtogether.eu/wp-content/uploads/2019/01/EU_flag_75px.png" class="vc_single_image-img attachment-thumbnail" alt="" loading="lazy" /></div>
		</figure>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p>This project has been funded by the European Commission as part of the H2020 program, under the grant agreement 762111.</p>

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div></div></div></div></div></section>
<p>The post <a rel="nofollow" href="https://vrtogether.eu/2020/06/23/exploitation-activities-from-vrtogether-awarded-at-acm-imx-2020/">Exploitation activities from VRTogether awarded at ACM IMX 2020</a> appeared first on <a rel="nofollow" href="https://vrtogether.eu">VRTogether</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Social VR for remote scientific events? VRTogether co-organizes a CHI2020 Workshop on Social VR in Mozilla Hubs during COVID-19</title>
		<link>https://vrtogether.eu/2020/05/19/social-vr-workshop-chi2020/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=social-vr-workshop-chi2020</link>
		
		<dc:creator><![CDATA[CWI]]></dc:creator>
		<pubDate>Tue, 19 May 2020 05:18:16 +0000</pubDate>
				<category><![CDATA[Events]]></category>
		<category><![CDATA[News]]></category>
		<guid isPermaLink="false">https://vrtogether.eu/?p=2210</guid>

					<description><![CDATA[<p>The post <a rel="nofollow" href="https://vrtogether.eu/2020/05/19/social-vr-workshop-chi2020/">Social VR for remote scientific events? VRTogether co-organizes a CHI2020 Workshop on Social VR in Mozilla Hubs during COVID-19</a> appeared first on <a rel="nofollow" href="https://vrtogether.eu">VRTogether</a>.</p>
]]></description>
										<content:encoded><![CDATA[<div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-12"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
<p>Jie Li and Pablo Cesar, from the Distributed Interactive Systems group (<a href="https://www.dis.cwi.nl/">DIS</a>) at Centrum Wiskunde &amp; Informatica (<a href="https://www.cwi.nl/">CWI</a>), have successfully organized the ACM CHI2020 <a href="https://www.socialvr-ws.com/">Social VR Workshop</a> on a social VR platform called <a href="https://hubs.mozilla.com/#/">Mozilla Hubs</a>. The workshop was co-organized with David A. Shamma (FXPAL Laboratory), Vinoba Vinayagamoorthy (BBC R&amp;D), Raz Schwartz (Facebook AR/VR), and Wijnand Ijsselsteijn (Eindhoven University of Technology). Due to the COVID-19 pandemic, the event, scheduled for April 25 at the Hawaii Convention Center, was cancelled. However, the co-organizers decided to make use of the quarantine as a unique opportunity to explore how a social VR platform could be used to run the workshop. The workshop, “Social VR: A New Medium for Remote Communication &amp; Collaboration”, was sponsored by the EU H2020 VRTogether project, and received technical support from Mozilla and the FXPAL laboratory.</p>
<figure id="attachment_2227" aria-describedby="caption-attachment-2227" style="width: 250px" class="wp-caption alignleft"><a href="https://vrtogether.eu/wp-content/uploads/2020/05/Figure1-lr.jpg"><img loading="lazy" class="wp-image-2227" src="https://vrtogether.eu/wp-content/uploads/2020/05/Figure1-lr.jpg" alt="" width="250" height="290" /></a><figcaption id="caption-attachment-2227" class="wp-caption-text">Figure 1. One of the many Social Media posts</figcaption></figure>
<p>The workshop attracted considerable attention in the ACM SIGCHI community and on social media. For example,<a href="https://twitter.com/JieLi3/status/1252336496314650624?s=20" target="_blank" rel="noopener noreferrer"> one of the Twitter posts</a> received over 13,000 impressions and 1,200 engagements. The organizers also received hundreds of requests to participate in the workshop. In the end, the organizers reviewed and selected 20 participants worldwide based on their high-quality position papers (available at <a href="https://www.socialvr-ws.com/">https://www.socialvr-ws.com</a>). The time zones of the participants ranged from GMT-7 (e.g., California, US) to GMT+9 (Tokyo, Japan), but they all managed to meet in this virtual workshop.</p>
<p>The 5-hour virtual workshop was intense, fun and engaging. The virtual workshop space had a main hall with two big screens on which presentation slides or videos could be displayed. Linked with the main hall (Figure 2), there were three breakout virtual rooms for group discussions on assigned topics: user representation &amp; ethics, evaluation methods, and interaction techniques. The breakout rooms kept the audio of each group discussion separate, so the groups did not disturb each other. The participants selected their own custom avatars to represent themselves.</p>
<figure id="attachment_2217" aria-describedby="caption-attachment-2217" style="width: 450px" class="wp-caption alignright"><a href="https://vrtogether.eu/wp-content/uploads/2020/05/Figure2-1.png"><img loading="lazy" class="wp-image-2217" src="https://vrtogether.eu/wp-content/uploads/2020/05/Figure2-1-1024x576.png" alt="" width="450" height="253" /></a><figcaption id="caption-attachment-2217" class="wp-caption-text">Figure 2. The main hall of the virtual workshop. The cartoon avatar in the middle is the organizer Jie Li, who took this virtual selfie at the main hall.</figcaption></figure>
<p>The organizers designed an interactive and engaging workshop program, which started with a keynote talk from Professor Blair MacIntyre about his experience of organizing the IEEEVR 2020 conference in Mozilla Hubs (Figure 3a). Then, the big screen showed the slides prepared by the participants to introduce themselves and their position papers, and each participant was invited to “fly” next to the big screen to give a 2-minute pitch about their paper. Next, they were divided into three discussion groups based on their preferences for the three social VR topics pre-defined by the organizers (Figure 3b). Finally, the organizers brought all the participants back to the main hall to present the discussion results.</p>
<p>The whole workshop went smoothly. Many participants reported that the activities of the virtual workshop were as engaging as those of a real, physical workshop. It was easy and natural for them to follow the keynote talks and to contribute to the discussions. Even during the breaks, many participants chose to stay and explore the virtual space (e.g., dive into the virtual sea, see Figure 3c). After obtaining consent from the participants, the organizers collected relevant data about the position, movement, and interaction of the participants in the virtual environment and with each other. Currently, the organizers are continuing the exploration of the possibilities offered by social VR platforms through an online survey and semi-structured interviews with the participants.</p>
<figure id="attachment_2221" aria-describedby="caption-attachment-2221" style="width: 1024px" class="wp-caption aligncenter"><a href="https://vrtogether.eu/wp-content/uploads/2020/05/Figure3-lr.jpg"><img loading="lazy" class="wp-image-2221 size-large" src="https://vrtogether.eu/wp-content/uploads/2020/05/Figure3-lr-1024x575.jpg" alt="" width="1024" height="575" /></a><figcaption id="caption-attachment-2221" class="wp-caption-text">Figure 3. (a) Professor Blair MacIntyre was giving a keynote talk about his experience organizing IEEEVR 2020 conference in Mozilla Hubs; (b) One of the discussion groups of the workshop; (c) Participants were exploring the virtual environment during the breaks; (d) The final group photo after the workshop ended.</figcaption></figure>
<p>The workshop ended with fruitful discussions about the future of social VR technology. The participants and the organizers foresee the full potential of social VR as a new medium to connect people across the world and to engage them in communicating and collaborating in a virtual space for hours. Many participants shared their engaging and meaningful virtual conferencing experience on social media, and expressed their appreciation to the organizers through emails. A full report on how well social VR platforms may support remote scientific events will follow.</p>

		</div>
	</div>
</div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-12"><div class="vc_column-inner"><div class="wpb_wrapper"><div class="vc_row wpb_row vc_inner vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-12"><div class="vc_column-inner"><div class="wpb_wrapper"><div class="vc_separator wpb_content_element vc_separator_align_center vc_sep_width_100 vc_sep_pos_align_center vc_separator_no_text vc_sep_color_grey" ><span class="vc_sep_holder vc_sep_holder_l"><span  class="vc_sep_line"></span></span><span class="vc_sep_holder vc_sep_holder_r"><span  class="vc_sep_line"></span></span>
</div></div></div></div></div>
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p>Author: <a href="https://vrtogether.eu/consortium/cwi/">CWI</a></p>
<p>Come and follow us in this VR journey with <a href="https://vrtogether.eu/consortium/i2cat/">i2CAT</a>, <a href="https://vrtogether.eu/consortium/cwi/">CWI</a>, <a href="https://vrtogether.eu/consortium/tno/">TNO</a>, <a href="https://vrtogether.eu/consortium/certh/">CERTH</a>, <a href="https://vrtogether.eu/consortium/artanim/">Artanim</a>, <a href="https://vrtogether.eu/consortium/viaccess-orca/">Viaccess-Orca</a>, <a href="https://vrtogether.eu/consortium/the_mo/">TheMo</a> and <a href="https://vrtogether.eu/consortium/motion-spell/">Motion Spell</a>.</p>

		</div>
	</div>
</div></div></div></div><section class="vc_section"><div class="vc_row wpb_row vc_row-fluid vc_column-gap-10"><div class="wpb_column vc_column_container vc_col-sm-12"><div class="vc_column-inner"><div class="wpb_wrapper"><div class="vc_row wpb_row vc_inner vc_row-fluid vc_custom_1547799200116 vc_row-has-fill vc_row-o-content-middle vc_row-flex"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_center">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="75" height="50" src="https://vrtogether.eu/wp-content/uploads/2019/01/EU_flag_75px.png" class="vc_single_image-img attachment-thumbnail" alt="" loading="lazy" /></div>
		</figure>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p>This project has been funded by the European Commission as part of the H2020 program, under the grant agreement 762111.</p>

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div></div></div></div></div></section>
<p>The post <a rel="nofollow" href="https://vrtogether.eu/2020/05/19/social-vr-workshop-chi2020/">Social VR for remote scientific events? VRTogether co-organizes a CHI2020 Workshop on Social VR in Mozilla Hubs during COVID-19</a> appeared first on <a rel="nofollow" href="https://vrtogether.eu">VRTogether</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Mentoring Students in Social VR</title>
		<link>https://vrtogether.eu/2019/09/09/mentoring-students-in-social-vr/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=mentoring-students-in-social-vr</link>
		
		<dc:creator><![CDATA[CWI]]></dc:creator>
		<pubDate>Mon, 09 Sep 2019 11:31:42 +0000</pubDate>
				<category><![CDATA[News]]></category>
		<guid isPermaLink="false">https://vrtogether.eu/?p=1729</guid>

					<description><![CDATA[<p>The post <a rel="nofollow" href="https://vrtogether.eu/2019/09/09/mentoring-students-in-social-vr/">Mentoring Students in Social VR</a> appeared first on <a rel="nofollow" href="https://vrtogether.eu">VRTogether</a>.</p>
]]></description>
										<content:encoded><![CDATA[<div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-12"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
<p>The VRTogether project conducts ground-breaking research on Social VR. In addition, the project mentors master’s students in this area, forming a new generation of graduates that can, in the future, reshape the media landscape. CWI has been active, offering master’s theses around the core topics of the project. Last year, two students graduated at TU Delft with the theses “Multi-Camera Registration for VR: A Flexible, Feature-based Approach” (Qian Qinzhuan) and “User Experience in Social Virtual Reality” (Yiping Kong). The latter resulted in a top-quality paper, “Measuring and understanding photo sharing experiences in social Virtual Reality”, presented at <a href="https://chi2019.acm.org/" target="_blank" rel="noopener">ACM CHI 2019</a>.</p>
<p><strong>This year, again two students have graduated, acquiring deep knowledge about the fundamentals of Social VR</strong>. Guo Chen (TU Delft) has written the thesis “<a href="http://resolver.tudelft.nl/uuid:9255ff17-a051-4914-906a-91da6277880d" target="_blank" rel="noopener">Designing and Evaluating a Social VR Clinic for Knee Replacement Surgery</a>” and Jelmer Mulder (VU Amsterdam) the thesis “<a href="https://github.com/jelmr/pc_temporal_interpolation" target="_blank" rel="noopener">Temporal Interpolation of Dynamic Point Clouds using Convolutional Neural Networks</a>”. The first explored novel use cases in the healthcare domain for Social VR, developing and evaluating a prototype of a Social VR clinic; such exploration helps to better understand the exploitation opportunities. The second proposed a temporal interpolation architecture capable of increasing, at the receiving side, the temporal resolution of dynamic point clouds. The results have consequences for the core architecture of the system, since they may allow users to be offered the same quality of experience while using less bandwidth for delivery.</p>
<p><strong>Guo Chen</strong> successfully defended her master’s thesis, which explores<strong> how Social VR could be used in a hospital setting</strong>. The motivation behind this project was to support patients with limited physical mobility in travelling to the hospital fewer times while still communicating well with doctors and nurses. Patients with knee osteoarthritis were the target group of this work. The final goal was to build a Social VR clinic that simulates the real consultation room and facilities in the hospital, in which patients can interact with the doctors or nurses using visualized information, such as surgery preparation procedures, 3D anatomy models and a tour of the surgery room. A series of ethnographic studies at the <a href="https://reinierdegraaf.nl/" target="_blank" rel="noopener"><em>Reinier de Graaf</em></a> hospital in Delft led to a better understanding of the complete patient journey, and to the identification of the requirements for the design and prototyping of a Social VR solution. It supports the three main identified activities within the patient journey: visualization of the intervention process, &#8220;walking into&#8221; a 3D virtual surgery room to &#8220;meet&#8221; the medical staff and to get familiar with the equipment, and interacting with an animated virtual 3D knee anatomy model and a virtual knee prosthesis model to see what the differences are before and after the surgery. Twelve users were recruited to evaluate the Social VR clinic; they indicated that the Social VR consultation is comparable with a face-to-face consultation, but does not require patients with knee problems to travel to the hospital.</p>

		</div>
	</div>

	<div  class="wpb_single_image wpb_content_element vc_align_center  vc_custom_1568028650522">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="1022" height="630" src="https://vrtogether.eu/wp-content/uploads/2019/09/image1.png" class="vc_single_image-img attachment-full" alt="" loading="lazy" srcset="https://vrtogether.eu/wp-content/uploads/2019/09/image1.png 1022w, https://vrtogether.eu/wp-content/uploads/2019/09/image1-300x185.png 300w, https://vrtogether.eu/wp-content/uploads/2019/09/image1-768x473.png 768w, https://vrtogether.eu/wp-content/uploads/2019/09/image1-700x432.png 700w, https://vrtogether.eu/wp-content/uploads/2019/09/image1-410x253.png 410w, https://vrtogether.eu/wp-content/uploads/2019/09/image1-100x62.png 100w, https://vrtogether.eu/wp-content/uploads/2019/09/image1-275x170.png 275w" sizes="(max-width: 1022px) 100vw, 1022px" /></div>
		</figure>
	</div>

	<div  class="wpb_single_image wpb_content_element vc_align_center">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="1999" height="320" src="https://vrtogether.eu/wp-content/uploads/2019/09/image3.jpg" class="vc_single_image-img attachment-full" alt="" loading="lazy" srcset="https://vrtogether.eu/wp-content/uploads/2019/09/image3.jpg 1999w, https://vrtogether.eu/wp-content/uploads/2019/09/image3-300x48.jpg 300w, https://vrtogether.eu/wp-content/uploads/2019/09/image3-768x123.jpg 768w, https://vrtogether.eu/wp-content/uploads/2019/09/image3-1024x164.jpg 1024w, https://vrtogether.eu/wp-content/uploads/2019/09/image3-700x112.jpg 700w, https://vrtogether.eu/wp-content/uploads/2019/09/image3-410x66.jpg 410w, https://vrtogether.eu/wp-content/uploads/2019/09/image3-100x16.jpg 100w, https://vrtogether.eu/wp-content/uploads/2019/09/image3-275x44.jpg 275w" sizes="(max-width: 1999px) 100vw, 1999px" /></div>
		</figure>
	</div>

	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
<p><strong>Jelmer Mulder</strong> also successfully defended his master’s thesis, which explores <strong>how machine learning techniques can help in optimizing the distribution of dense point clouds</strong>. Point clouds are a data structure that models volumetric visual data as a set of individual points in space, and they are used in VRTogether for highly realistic representation of users in a Social VR session. But point clouds are voluminous, and thus require high bandwidth to transmit. In practice this means that concessions have to be made in either spatial or temporal resolution. In his thesis he proposes an architecture capable of increasing the temporal resolution of dynamic point clouds. With this technique, dynamic point clouds can be transmitted at a lower temporal resolution, after which a higher temporal resolution is obtained by performing the interpolation on the receiving side. The interpolation architecture works by first downsampling the point clouds to a lower spatial resolution, then estimating scene flow, and finally upsampling the result back to the original spatial resolution. To improve the smoothness of the interpolation result, a novel technique called neighbour snapping is applied, and to estimate the scene flow, a newly designed neural network architecture is used. The architecture was evaluated with objective metrics and through a small-scale user study. Existing objective quality metrics for point clouds are known to correlate poorly with user perception, and the findings confirm this: the metrics correlate poorly with each other and with the results from the user study. The user study shows that, on average, participants prefer the temporally interpolated sequences generated by the architecture over the current state of the art and over sequences that have not been interpolated.</p>
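<p><em>As a rough illustration of the pipeline described above (downsample, estimate scene flow, upsample), the Python sketch below synthesizes a frame half-way between two captured frames. The nearest-neighbour flow estimate and the flow propagation are crude stand-ins for the neural network and the neighbour-snapping step from the thesis; all names are illustrative assumptions.</em></p>
<pre><code># Hypothetical skeleton: downsample both frames, estimate scene flow on the sparse
# clouds, propagate the flow to the dense cloud, then move frame A half-way along it.
import numpy as np
from scipy.spatial import cKDTree

def downsample(points, step=4):
    """Keep every step-th point (placeholder for proper voxel-grid downsampling)."""
    return points[::step]

def estimate_scene_flow(sparse_a, sparse_b):
    """Displacement of each sparse point in frame A towards its nearest point in frame B."""
    _, idx = cKDTree(sparse_b).query(sparse_a)
    return sparse_b[idx] - sparse_a

def upsample_flow(sparse_a, flow, dense_a):
    """Assign each dense point the flow of its nearest sparse neighbour."""
    _, idx = cKDTree(sparse_a).query(dense_a)
    return flow[idx]

def interpolate_midpoint(dense_a, dense_b, step=4):
    sparse_a, sparse_b = downsample(dense_a, step), downsample(dense_b, step)
    flow = estimate_scene_flow(sparse_a, sparse_b)
    return dense_a + 0.5 * upsample_flow(sparse_a, flow, dense_a)  # frame half-way between A and B

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    frame_a = rng.normal(size=(2000, 3)).astype(np.float32)
    frame_b = frame_a + np.array([0.2, 0.0, 0.1], dtype=np.float32)  # simple rigid shift
    print(interpolate_midpoint(frame_a, frame_b).shape)              # (2000, 3)
</code></pre>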

		</div>
	</div>

	<div  class="wpb_single_image wpb_content_element vc_align_left">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="1200" height="577" src="https://vrtogether.eu/wp-content/uploads/2019/09/img.jpg" class="vc_single_image-img attachment-full" alt="" loading="lazy" srcset="https://vrtogether.eu/wp-content/uploads/2019/09/img.jpg 1200w, https://vrtogether.eu/wp-content/uploads/2019/09/img-300x144.jpg 300w, https://vrtogether.eu/wp-content/uploads/2019/09/img-768x369.jpg 768w, https://vrtogether.eu/wp-content/uploads/2019/09/img-1024x492.jpg 1024w, https://vrtogether.eu/wp-content/uploads/2019/09/img-700x337.jpg 700w, https://vrtogether.eu/wp-content/uploads/2019/09/img-410x197.jpg 410w, https://vrtogether.eu/wp-content/uploads/2019/09/img-100x48.jpg 100w, https://vrtogether.eu/wp-content/uploads/2019/09/img-275x132.jpg 275w" sizes="(max-width: 1200px) 100vw, 1200px" /></div>
		</figure>
	</div>

	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p>Congratulations to the recently graduated students!</p>

		</div>
	</div>
</div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-12"><div class="vc_column-inner"><div class="wpb_wrapper"><div class="vc_separator wpb_content_element vc_separator_align_center vc_sep_width_100 vc_sep_pos_align_center vc_separator_no_text vc_sep_color_grey" ><span class="vc_sep_holder vc_sep_holder_l"><span  class="vc_sep_line"></span></span><span class="vc_sep_holder vc_sep_holder_r"><span  class="vc_sep_line"></span></span>
</div>
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p><i><span style="font-weight: 400;">Text: </span></i>Jie Li and Pablo Cesar — <a href="https://www.cwi.nl/" target="_blank" rel="noopener">CWI</a></p>
<p>Come and follow us in this VR journey with <a href="http://www.i2cat.net/en" target="_blank" rel="noopener">i2CAT</a>, <a href="https://www.cwi.nl/" target="_blank" rel="noopener">CWI</a>, <a href="https://www.tno.nl/en/" target="_blank" rel="noopener">TNO</a>, <a href="https://www.certh.gr/root.en.aspx" target="_blank" rel="noopener">CERTH</a>, <a href="http://www.artanim.ch/" target="_blank" rel="noopener">Artanim</a>, <a href="https://www.viaccess-orca.com/" target="_blank" rel="noopener">Viaccess-Orca</a>, <a href="https://www.entropystudio.net/" target="_blank" rel="noopener">Entropy Studio</a> and <a href="https://www.gpac-licensing.com/" target="_blank" rel="noopener">Motion Spell</a>.</p>

		</div>
	</div>
</div></div></div></div><section class="vc_section"><div class="vc_row wpb_row vc_row-fluid vc_column-gap-10"><div class="wpb_column vc_column_container vc_col-sm-12"><div class="vc_column-inner"><div class="wpb_wrapper"><div class="vc_row wpb_row vc_inner vc_row-fluid vc_custom_1547799200116 vc_row-has-fill vc_row-o-content-middle vc_row-flex"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_center">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="75" height="50" src="https://vrtogether.eu/wp-content/uploads/2019/01/EU_flag_75px.png" class="vc_single_image-img attachment-thumbnail" alt="" loading="lazy" /></div>
		</figure>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p>This project has been funded by the European Commission as part of the H2020 program, under the grant agreement 762111.</p>

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div></div></div></div></div></section><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div>
<p>The post <a rel="nofollow" href="https://vrtogether.eu/2019/09/09/mentoring-students-in-social-vr/">Mentoring Students in Social VR</a> appeared first on <a rel="nofollow" href="https://vrtogether.eu">VRTogether</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>VR-Together at IEEE VR 2019 and ACM CHI 2019 International conferences</title>
		<link>https://vrtogether.eu/2019/04/12/ieee-vr-acm-chi-2019-conferences/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=ieee-vr-acm-chi-2019-conferences</link>
		
		<dc:creator><![CDATA[CWI]]></dc:creator>
		<pubDate>Fri, 12 Apr 2019 06:10:12 +0000</pubDate>
				<category><![CDATA[Events]]></category>
		<category><![CDATA[News]]></category>
		<guid isPermaLink="false">https://vrtogether.eu/?p=1280</guid>

					<description><![CDATA[<p>The post <a rel="nofollow" href="https://vrtogether.eu/2019/04/12/ieee-vr-acm-chi-2019-conferences/">VR-Together at IEEE VR 2019 and ACM CHI 2019 International conferences</a> appeared first on <a rel="nofollow" href="https://vrtogether.eu">VRTogether</a>.</p>
]]></description>
										<content:encoded><![CDATA[<div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
<p>VR-Together was present at <a href="http://ieeevr.org/2019/" target="_blank" rel="noopener">IEEE VR 2019</a>, the 26th IEEE Conference on Virtual Reality (March 23rd-27th, 2019) in Osaka, Japan, and will be present at <a href="http://chi2019.acm.org" target="_blank" rel="noopener">ACM CHI 2019</a>, the ACM Conference on Human Factors in Computing Systems (May 4th-9th) in Glasgow, United Kingdom. The goal is to present the<strong> successful results on user experience and quality of experience for social VR</strong>. In particular, the protocol for users’ QoE assessment in social VR applications, developed in the framework of the project, and the user studies performed by CWI during the first year of the project will be presented.</p>
<p>At <strong>IEEE VR</strong>, CWI researcher Jie Li presented the poster “<strong>Watching videos together in social Virtual Reality: an experimental study on user’s QoE</strong>”, showcasing the application of the methodology developed by CWI for users’ QoE assessment in social VR in a comparative study in which users watched videos together on three social VR platforms. Find out more about the outcome of our experiment by reading our IEEE VR short paper <a href="https://ir.cwi.nl/pub/28580/28580.pdf" target="_blank" rel="noopener">here</a>.</p>

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper"><div class="wpb_images_carousel wpb_content_element vc_clearfix"><div class="wpb_wrapper"><div id="vc_images-carousel-1-1617783245" data-ride="vc_carousel" data-wrap="false" style="width: 100%;" data-interval="5000" data-auto-height="yes" data-mode="horizontal" data-partial="false" data-per-view="1" data-hide-on-end="false" class="vc_slide vc_images_carousel"><ol class="vc_carousel-indicators"><li data-target="#vc_images-carousel-1-1617783245" data-slide-to="0"></li><li data-target="#vc_images-carousel-1-1617783245" data-slide-to="1"></li><li data-target="#vc_images-carousel-1-1617783245" data-slide-to="2"></li></ol><div class="vc_carousel-inner"><div class="vc_carousel-slideline"><div class="vc_carousel-slideline-inner"><div class="vc_item"><div class="vc_inner"><a class="prettyphoto" href="https://vrtogether.eu/wp-content/uploads/2019/04/ieeeVR1.jpg" data-rel="prettyPhoto[rel-1280-1495401943]"><img class="" src="https://vrtogether.eu/wp-content/uploads/2019/04/ieeeVR1-600x467.jpg" width="600" height="467" alt="ieeeVR1" title="ieeeVR1" /></a></div></div><div class="vc_item"><div class="vc_inner"><a class="prettyphoto" href="https://vrtogether.eu/wp-content/uploads/2019/04/ieeeVR3.jpg" data-rel="prettyPhoto[rel-1280-1495401943]"><img class="" src="https://vrtogether.eu/wp-content/uploads/2019/04/ieeeVR3-600x467.jpg" width="600" height="467" alt="ieeeVR3" title="ieeeVR3" /></a></div></div><div class="vc_item"><div class="vc_inner"><a class="prettyphoto" href="https://vrtogether.eu/wp-content/uploads/2019/04/ieeeVR2.jpg" data-rel="prettyPhoto[rel-1280-1495401943]"><img class="" src="https://vrtogether.eu/wp-content/uploads/2019/04/ieeeVR2-600x467.jpg" width="600" height="467" alt="ieeeVR2" title="ieeeVR2" /></a></div></div></div></div></div></div></div></div></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-12"><div class="vc_column-inner"><div class="wpb_wrapper"><div class="vc_separator wpb_content_element vc_separator_align_center vc_sep_width_100 vc_sep_pos_align_center vc_separator_no_text vc_sep_color_grey" ><span class="vc_sep_holder vc_sep_holder_l"><span  class="vc_sep_line"></span></span><span class="vc_sep_holder vc_sep_holder_r"><span  class="vc_sep_line"></span></span>
</div><div class="vc_row wpb_row vc_inner vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
<p>At <strong>ACM CHI</strong>, CWI will give <a href="https://chi2019.acm.org/web-program.php?sessionId=93da56b563f28d390d94b91a0751c0e3428526a89da9b8bb07fa97f8f0fae042&amp;publicationId=pn7164" target="_blank" rel="noopener">a talk</a> (Thursday, May 09, 11:00 am) on the methodology used to define the subjective protocol for users’ QoE assessment in social VR applications, and its application to a photo-sharing experience using social VR systems, as described in the paper “<strong>Measuring and Understanding Photo Sharing Experiences in Social Virtual Reality</strong>”. Find out more about our protocol by watching <a href="https://www.youtube.com/watch?v=mvP-bMdMaCk" target="_blank" rel="noopener">the video</a> summarizing our contribution and by reading our <a href="https://abdoelali.com/pdfs/chi2019-socialvr.pdf" target="_blank" rel="noopener">ACM CHI paper</a>.</p>
<p>The results presented in these two high-quality venues address one of the objectives of the project: to develop appropriate Quality of Experience (QoE) metrics and evaluation methods to quantify the quality of these new social VR experiences. During the first year, the project has created a new protocol and set of metrics, and the associated data analysis toolset, for evaluating social VR with end-users. The impact of this result may go beyond the project, since it can become the de facto <strong>standard manner for evaluating a new genre of experiences: social VR</strong>. The protocol and metrics include both quantitative and qualitative aspects, such as a new questionnaire combining presence, immersion, and togetherness; a set of objective metrics based on the behavior of the user, focusing on speech analysis, neck rotation, body movement, etc.; and performance metrics for profiling system aspects. This novel evaluation method has been used for the evaluation of the pilot in October 2018, and has been iteratively developed and validated through a human-centered process, including two experiments (with around 100 users in total).</p>
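<p><em>To give a flavour of what such objective behavioural metrics can look like in practice (a hedged sketch, not the project’s actual analysis toolset), the short Python example below computes two of them from a hypothetical tracking log: the accumulated neck (yaw) rotation and the fraction of time a participant was speaking.</em></p>
<pre><code># Hypothetical behavioural metrics from logged head-tracking and voice-activity data.
# The log format (per-frame yaw angles in degrees, boolean voice-activity flags) is an
# assumption made purely for illustration.
import numpy as np

def total_neck_rotation(yaw_degrees):
    """Accumulated absolute yaw change over a session (radians),
    a proxy for how much a participant looked around."""
    yaw = np.unwrap(np.radians(yaw_degrees))   # avoid jumps at the 360-degree wrap-around
    return float(np.sum(np.abs(np.diff(yaw))))

def speaking_ratio(voice_activity):
    """Fraction of session frames during which a participant was speaking."""
    return float(np.asarray(voice_activity, dtype=bool).mean())

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    yaw_trace = np.cumsum(rng.normal(0.0, 2.0, size=3000))  # simulated head yaw, in degrees
    vad_trace = rng.random(3000) > 0.6                       # simulated voice-activity flags
    print(total_neck_rotation(yaw_trace), speaking_ratio(vad_trace))
</code></pre>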

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_left">
		
		<figure class="wpb_wrapper vc_figure">
			<a href="https://chi2019.acm.org/web-program.php?sessionId=93da56b563f28d390d94b91a0751c0e3428526a89da9b8bb07fa97f8f0fae042&amp;publicationId=pn7164" target="_blank" class="vc_single_image-wrapper   vc_box_border_grey"><img width="1600" height="314" src="https://vrtogether.eu/wp-content/uploads/2019/04/chi2019_logo_final.jpg" class="vc_single_image-img attachment-full" alt="" loading="lazy" srcset="https://vrtogether.eu/wp-content/uploads/2019/04/chi2019_logo_final.jpg 1600w, https://vrtogether.eu/wp-content/uploads/2019/04/chi2019_logo_final-300x59.jpg 300w, https://vrtogether.eu/wp-content/uploads/2019/04/chi2019_logo_final-768x151.jpg 768w, https://vrtogether.eu/wp-content/uploads/2019/04/chi2019_logo_final-1024x201.jpg 1024w, https://vrtogether.eu/wp-content/uploads/2019/04/chi2019_logo_final-700x137.jpg 700w, https://vrtogether.eu/wp-content/uploads/2019/04/chi2019_logo_final-410x80.jpg 410w, https://vrtogether.eu/wp-content/uploads/2019/04/chi2019_logo_final-100x20.jpg 100w, https://vrtogether.eu/wp-content/uploads/2019/04/chi2019_logo_final-275x54.jpg 275w" sizes="(max-width: 1600px) 100vw, 1600px" /></a>
		</figure>
	</div>

	<div class="wpb_video_widget wpb_content_element vc_clearfix   vc_video-aspect-ratio-169 vc_video-el-width-100 vc_video-align-left" >
		<div class="wpb_wrapper">
			
			<div class="wpb_video_wrapper"><iframe width="1170" height="658" src="https://www.youtube.com/embed/mvP-bMdMaCk?feature=oembed" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe></div>
		</div>
	</div>
</div></div></div></div><div class="vc_separator wpb_content_element vc_separator_align_center vc_sep_width_100 vc_sep_pos_align_center vc_separator_no_text vc_sep_color_grey" ><span class="vc_sep_holder vc_sep_holder_l"><span  class="vc_sep_line"></span></span><span class="vc_sep_holder vc_sep_holder_r"><span  class="vc_sep_line"></span></span>
</div>
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p>Come and follow us in this VR journey with <a href="http://www.i2cat.net/en" target="_blank" rel="noopener">i2CAT</a>, <a href="https://www.cwi.nl/" target="_blank" rel="noopener">CWI</a>, <a href="https://www.tno.nl/en/" target="_blank" rel="noopener">TNO</a>, <a href="https://www.certh.gr/root.en.aspx" target="_blank" rel="noopener">CERTH</a>, <a href="http://www.artanim.ch/" target="_blank" rel="noopener">Artanim</a>, <a href="https://www.viaccess-orca.com/" target="_blank" rel="noopener">Viaccess-Orca</a>, <a href="https://www.entropystudio.net/" target="_blank" rel="noopener">Entropy Studio</a> and <a href="https://www.gpac-licensing.com/" target="_blank" rel="noopener">Motion Spell</a>.</p>

		</div>
	</div>
</div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-12"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><section class="vc_section"><div class="vc_row wpb_row vc_row-fluid vc_column-gap-10"><div class="wpb_column vc_column_container vc_col-sm-12"><div class="vc_column-inner"><div class="wpb_wrapper"><div class="vc_row wpb_row vc_inner vc_row-fluid vc_custom_1547799200116 vc_row-has-fill vc_row-o-content-middle vc_row-flex"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_center">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="75" height="50" src="https://vrtogether.eu/wp-content/uploads/2019/01/EU_flag_75px.png" class="vc_single_image-img attachment-thumbnail" alt="" loading="lazy" /></div>
		</figure>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p>This project has been funded by the European Commission as part of the H2020 program, under the grant agreement 762111.</p>

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div></div></div></div></div></section>
<p>The post <a rel="nofollow" href="https://vrtogether.eu/2019/04/12/ieee-vr-acm-chi-2019-conferences/">VR-Together at IEEE VR 2019 and ACM CHI 2019 International conferences</a> appeared first on <a rel="nofollow" href="https://vrtogether.eu">VRTogether</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Understanding and Measuring Social VR Experiences</title>
		<link>https://vrtogether.eu/2018/12/12/understanding-and-measuring-social-vr-experiences/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=understanding-and-measuring-social-vr-experiences</link>
		
		<dc:creator><![CDATA[CWI]]></dc:creator>
		<pubDate>Wed, 12 Dec 2018 15:49:20 +0000</pubDate>
				<category><![CDATA[News]]></category>
		<guid isPermaLink="false">http://vrtogether.eu/?p=754</guid>

					<description><![CDATA[<p>By Jie Li, Francesca de Simone, and Pablo Cesar, CWI VRTogether partner CWI has created a new protocol and set of metrics for evaluating social VR experiences with end-users. The protocol and metrics include both quantitative and qualitative aspects, such as a new questionnaire combining presence, immersion, and togetherness; and a set of objective metrics [&#8230;]</p>
<p>The post <a rel="nofollow" href="https://vrtogether.eu/2018/12/12/understanding-and-measuring-social-vr-experiences/">Understanding and Measuring Social VR Experiences</a> appeared first on <a rel="nofollow" href="https://vrtogether.eu">VRTogether</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p><em>By Jie Li, Francesca de Simone, and Pablo Cesar, CWI</em></p>
<p>VRTogether partner CWI has created a new protocol and set of metrics for evaluating social VR experiences with end-users. The protocol and metrics include both quantitative and qualitative aspects, such as a new questionnaire combining presence, immersion, and togetherness; and a set of objective metrics to analyse the behaviour of the user, focusing on speech analysis, head rotation, body movement, etc. This novel evaluation method has been iteratively developed and validated through a human-centred process, including two experiments (with around 100 users in total).</p>
<p>The first study was based on photo sharing in Social VR. We ran context mapping (N=10), an expert creative session (N=6), and an online experience clustering questionnaire (N=20), which resulted in a generalizable Social VR questionnaire that can measure three dimensions of experience: Quality of Interaction (QoI), Social Meaning (SM) and Presence/Immersion (PI). We then ran a controlled, within-subject study (N=26 pairs) to compare photo sharing under F2F, Skype, and Facebook Spaces (see Figure 1). Using semi-structured interviews, audio analysis, and our Social VR questionnaire, we found that Social VR can closely approximate F2F sharing. This experiment contributes empirical findings on the differences in experience across different digital communication media. It also evaluates the new Social VR questionnaire, showing that the questionnaire items measuring the three dimensions of experience (QoI, SM, PI) are properly constructed.</p>
<figure id="attachment_755" aria-describedby="caption-attachment-755" style="width: 610px" class="wp-caption aligncenter"><img loading="lazy" class="wp-image-755 size-full" src="http://vrtogether.eu/wp-content/uploads/2018/12/Figure1.png" alt="" width="610" height="600" srcset="https://vrtogether.eu/wp-content/uploads/2018/12/Figure1.png 610w, https://vrtogether.eu/wp-content/uploads/2018/12/Figure1-300x295.png 300w, https://vrtogether.eu/wp-content/uploads/2018/12/Figure1-410x403.png 410w, https://vrtogether.eu/wp-content/uploads/2018/12/Figure1-100x98.png 100w, https://vrtogether.eu/wp-content/uploads/2018/12/Figure1-275x270.png 275w" sizes="(max-width: 610px) 100vw, 610px" /><figcaption id="caption-attachment-755" class="wp-caption-text">Figure 1: Illustrations of the three experimental conditions: (a) Face-to-face (b) Skype (c) Facebook Spaces.</figcaption></figure>
<p>The second study aimed to develop and test both subjective and objective methodologies to evaluate and compare Social VR systems. This study (N=16 pairs) also followed a within-subjects design. This time, the case study was watching movie trailers together. The experiment included three conditions: Facebook Spaces (with avatar representations of the users), the VRTogether web-based system (with 2D real-time video representations of the users), and a face-to-face condition (see Figure 2). The subjective assessment was the same as in the first study, using the Social VR questionnaire and semi-structured interviews. This time we also collected data on verbal interactions, visual patterns, and body movements, based on logs of the participants’ head rotation, captures of the HMD viewport and audio channel, and webcam recordings of the users’ bodies. In other words, we collected objective data on the following (a minimal code sketch of such measures appears after the list):</p>
<ol>
<li>how much time participants spent looking at and speaking to each other;</li>
<li>how much participants moved their body and head.</li>
</ol>
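<p><em>As a rough illustration of how such measures can be derived, the sketch below computes speaking time, total head movement, and time spent oriented towards the partner from hypothetical per-frame logs. The log format, frame rate, and thresholds are assumptions for illustration, not the actual analysis pipeline used in the study.</em></p>
<pre><code># Minimal sketch: simple behavioural measures from hypothetical per-frame logs
# (voice-activity flags and head-rotation samples in degrees).
import math

FRAME_RATE = 30.0  # assumed log samples per second

def speaking_time(vad_flags):
    """Seconds with voice activity, given per-frame booleans."""
    return sum(vad_flags) / FRAME_RATE

def head_movement(yaw_deg, pitch_deg):
    """Total angular head movement in degrees, accumulated frame to frame."""
    total = 0.0
    for i in range(1, len(yaw_deg)):
        total += math.hypot(yaw_deg[i] - yaw_deg[i - 1],
                            pitch_deg[i] - pitch_deg[i - 1])
    return total

def looking_at_partner(yaw_deg, partner_yaw_deg=90.0, tolerance_deg=30.0):
    """Seconds with the head oriented towards the partner's assumed direction."""
    frames = sum(abs(y - partner_yaw_deg) <= tolerance_deg for y in yaw_deg)
    return frames / FRAME_RATE

# Example with made-up logs (2 seconds at 30 frames per second).
vad = [True] * 40 + [False] * 20
yaw = [80.0 + 0.2 * i for i in range(60)]
pitch = [0.0] * 60
print(speaking_time(vad), round(head_movement(yaw, pitch), 1), looking_at_partner(yaw))
</code></pre>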
<figure id="attachment_756" aria-describedby="caption-attachment-756" style="width: 684px" class="wp-caption aligncenter"><img loading="lazy" class="size-full wp-image-756" src="http://vrtogether.eu/wp-content/uploads/2018/12/Figure2.png" alt="" width="684" height="600" srcset="https://vrtogether.eu/wp-content/uploads/2018/12/Figure2.png 684w, https://vrtogether.eu/wp-content/uploads/2018/12/Figure2-300x263.png 300w, https://vrtogether.eu/wp-content/uploads/2018/12/Figure2-410x360.png 410w, https://vrtogether.eu/wp-content/uploads/2018/12/Figure2-100x88.png 100w, https://vrtogether.eu/wp-content/uploads/2018/12/Figure2-275x241.png 275w" sizes="(max-width: 684px) 100vw, 684px" /><figcaption id="caption-attachment-756" class="wp-caption-text">Figure 2. Experiment setup: Face-to-face versus Social VR conditions.</figcaption></figure>
<p>We found that, in terms of QoI and SM, the VRTogether system and the face-to-face condition were rated significantly higher than Facebook Spaces. For PI, no statistically significant differences were found between the two Social VR systems.</p>
<p>From the interviews, 47% of the participants pointed out that the avatar representation in Facebook Spaces has low realism, saying they preferred the 2D video representation offered by the VRTogether system. Some participants complained about the missing eyes (37.5%) or missing parts of their self-representation (28.1%) in the VRTogether system, while others reported no problems with the lack of eye contact (40.6%). Roughly a fifth (21%) of the participants found the controllers in Facebook Spaces annoying. Half of the participants (50%) expressed a preference for the VRTogether system for watching movies together. Moreover, participants were generally satisfied with the virtual environment: 37.5% of them reported a sense of “being there” in it. Regarding future improvements, suggestions centred on the ergonomics of the HMD (28.1%), better user representation (21%), and a wider field of view (12.5%).</p>
<p>These two studies are our first step towards the development of an experimental protocol for a new medium: social VR. We will keep you updated!</p>
<p>The post <a rel="nofollow" href="https://vrtogether.eu/2018/12/12/understanding-and-measuring-social-vr-experiences/">Understanding and Measuring Social VR Experiences</a> appeared first on <a rel="nofollow" href="https://vrtogether.eu">VRTogether</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>CWI Standardization Efforts in VQEG (and ITU)</title>
		<link>https://vrtogether.eu/2018/09/19/cwi-standardization-efforts-in-vqeg-and-itu/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=cwi-standardization-efforts-in-vqeg-and-itu</link>
		
		<dc:creator><![CDATA[CWI]]></dc:creator>
		<pubDate>Wed, 19 Sep 2018 11:58:59 +0000</pubDate>
				<category><![CDATA[News]]></category>
		<category><![CDATA[CWI]]></category>
		<category><![CDATA[ITU]]></category>
		<category><![CDATA[standardisation]]></category>
		<category><![CDATA[VQEG]]></category>
		<guid isPermaLink="false">http://vrtogether.eu/?p=720</guid>

					<description><![CDATA[<p>With the recent advances in capture and display technologies, VR and AR applications are in the spotlight again. These applications involve new kinds of visual signals, such as omnidirectional images and video, and volumetric signals, such as meshes and point clouds. Additionally, they imply a truly interactive and immersive user experience: the end user can [&#8230;]</p>
<p>The post <a rel="nofollow" href="https://vrtogether.eu/2018/09/19/cwi-standardization-efforts-in-vqeg-and-itu/">CWI Standardization Efforts in VQEG (and ITU)</a> appeared first on <a rel="nofollow" href="https://vrtogether.eu">VRTogether</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>With the recent advances in capture and display technologies, <strong>VR and AR applications are in the spotlight again</strong>. These applications involve new kinds of visual signals, such as omnidirectional images and video, and volumetric signals, such as meshes and point clouds. Additionally, they imply a truly interactive and immersive user experience: the end user can navigate the scene, with three or six degrees of freedom (3DoF or 6DoF), depending on the scenario.</p>
<p>Assessing the quality of the signals and the user’s quality of experience for VR and AR applications opens up many <strong>new research challenges</strong> concerning human perception and interaction. Therefore, it is no surprise that standardisation and expert groups are looking into the problem of quality assessment of immersive media.</p>
<p><strong>CWI has recently started to actively participate in the activities of the Immersive Media Group</strong> (IMG) of the Video Quality Experts Group (<a href="https://www.its.bldrdoc.gov/vqeg/vqeg-home.aspx">VQEG</a>). VQEG provides a forum, via email lists and face-to-face meetings, for video quality assessment experts to exchange information and work together on common goals. The general motivation of VQEG is <strong>to advance the field of video quality assessment</strong> by investigating new and advanced subjective and objective methods for assessing quality. VQEG activities, such as validation tests, are documented in reports and submitted to relevant ITU Study Groups (e.g., ITU-T SG9, ITU-T SG12, ITU-R WP6C), and other SDOs as appropriate. Several VQEG studies have resulted in ITU Recommendations.</p>
<p>IMG is a VQEG group that currently looks at the <strong>quality assessment of immersive media</strong> used in virtual and augmented reality applications. CWI is involved in the group’s current activity, which focuses on defining a joint test plan for a subjective test campaign on the quality assessment of 360-degree content. The group has also established a liaison with <a href="https://www.itu.int/en/ITU-T/studygroups/2017-2020/12/Pages/q13.aspx">ITU-T Question 13</a>, on Quality of experience (QoE), quality of service (QoS) and performance requirements and assessment methods for multimedia.</p>
<p>The next face-to-face meeting of the VQEG IMG is scheduled for November 12 to 16 and will be <strong>hosted by Google</strong> in Mountain View, CA, USA. CWI is planning to participate in the meeting and present the current activities concerning quality assessment of point cloud signals and user’s QoE in social VR applications.</p>
<h5>Who we are</h5>
<p><a href="https://www.cwi.nl/" target="_blank" rel="noopener noreferrer">CWI</a> is the national research institute for mathematics and computer science of the Dutch National Science Foundation (NWO). CWI performs frontier research in mathematics and computer science and transfers new knowledge in these fields to society in general and trade and industry in particular.</p>
<p>Come and follow us in this VR journey with <a href="http://www.i2cat.net/en" target="_blank" rel="noopener noreferrer">i2Cat</a>, <a href="https://www.tno.nl/en/" target="_blank" rel="noopener noreferrer">TNO</a>, <a href="http://motionspell.com/" target="_blank" rel="noopener noreferrer">Motion Spell</a>, <a href="https://www.certh.gr/root.en.aspx" target="_blank" rel="noopener noreferrer">CERTH</a>, <a href="http://www.artanim.ch/" target="_blank" rel="noopener noreferrer">Artanim</a>, <a href="https://www.viaccess-orca.com/" target="_blank" rel="noopener noreferrer">Viaccess Orca</a>, <a href="http://www.entropystudio.net/" target="_blank" rel="noopener noreferrer">Entropy Studio</a>.</p>
<p><img loading="lazy" class="size-full wp-image-380 alignleft" src="http://vrtogether.eu/wp-content/uploads/2018/02/eu-logo.png" alt="" width="226" height="111" srcset="https://vrtogether.eu/wp-content/uploads/2018/02/eu-logo.png 226w, https://vrtogether.eu/wp-content/uploads/2018/02/eu-logo-100x49.png 100w" sizes="(max-width: 226px) 100vw, 226px" /></p>
<p><em>This project has been funded by the European Commission as part of the H2020 program, under the grant agreement 762111.</em></p>
<p>The post <a rel="nofollow" href="https://vrtogether.eu/2018/09/19/cwi-standardization-efforts-in-vqeg-and-itu/">CWI Standardization Efforts in VQEG (and ITU)</a> appeared first on <a rel="nofollow" href="https://vrtogether.eu">VRTogether</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>MPEG Activity on Point Cloud Compression</title>
		<link>https://vrtogether.eu/2018/02/06/mpeg-activity-on-point-cloud-compression/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=mpeg-activity-on-point-cloud-compression</link>
		
		<dc:creator><![CDATA[CWI]]></dc:creator>
		<pubDate>Tue, 06 Feb 2018 06:35:41 +0000</pubDate>
				<category><![CDATA[News]]></category>
		<guid isPermaLink="false">http://vrtogether.eu/?p=372</guid>

					<description><![CDATA[<p>A point cloud is defined as a set of points in 3D space with specified geometry coordinates and associated attributes such as colour. Point clouds are an important emerging format for VR applications because of their simplicity and versatility. There are no restrictions on the attributes associated with every point in the cloud. Point clouds [&#8230;]</p>
<p>The post <a rel="nofollow" href="https://vrtogether.eu/2018/02/06/mpeg-activity-on-point-cloud-compression/">MPEG Activity on Point Cloud Compression</a> appeared first on <a rel="nofollow" href="https://vrtogether.eu">VRTogether</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>A point cloud is defined as a <strong>set of points in 3D space with specified geometry coordinates and associated attributes such as colour</strong>. Point clouds are an important emerging format for VR applications because of their simplicity and versatility. There are no restrictions on the attributes associated with every point in the cloud.</p>
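<p><em>In code, this definition amounts to little more than an array of coordinates plus parallel per-point attribute arrays. The snippet below is a minimal sketch with random data; the sizes and attributes are arbitrary and not tied to any particular codec or dataset.</em></p>
<pre><code># Minimal sketch: a point cloud as geometry (x, y, z) plus per-point attributes.
import numpy as np

rng = np.random.default_rng(0)
num_points = 100_000

geometry = rng.uniform(0.0, 1.0, size=(num_points, 3)).astype(np.float32)  # coordinates
colour = rng.integers(0, 256, size=(num_points, 3), dtype=np.uint8)        # RGB attribute

# Any further attribute (normals, reflectance, ...) is just another parallel array.
normals = np.zeros((num_points, 3), dtype=np.float32)

print(geometry.shape, colour.shape, geometry.nbytes + colour.nbytes, "bytes uncompressed")
</code></pre>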
<p>Point clouds are usually captured using <strong>multiple cameras or depth sensors</strong> and can contain <strong>millions of points</strong> in order to create a photorealistic reconstruction of an object. <strong>Compression</strong> of point cloud geometry and attributes is essential in order to efficiently store and transmit point cloud data for applications such as <strong>teleimmersive video (figure 1)</strong> and <strong>free viewpoint sports replays (figure 2).</strong></p>
<p><figure id="attachment_374" aria-describedby="caption-attachment-374" style="width: 475px" class="wp-caption aligncenter"><img loading="lazy" class="wp-image-374 size-full" src="http://vrtogether.eu/wp-content/uploads/2018/02/reverie.jpg" alt="" width="475" height="288" srcset="https://vrtogether.eu/wp-content/uploads/2018/02/reverie.jpg 475w, https://vrtogether.eu/wp-content/uploads/2018/02/reverie-300x182.jpg 300w, https://vrtogether.eu/wp-content/uploads/2018/02/reverie-410x249.jpg 410w, https://vrtogether.eu/wp-content/uploads/2018/02/reverie-100x61.jpg 100w, https://vrtogether.eu/wp-content/uploads/2018/02/reverie-275x167.jpg 275w" sizes="(max-width: 475px) 100vw, 475px" /><figcaption id="caption-attachment-374" class="wp-caption-text">Figure 1: Screenshot of a point cloud rendered compositely with synthetic content for immersive telepresence [MBC16]</figcaption></figure>An <strong>ad hoc group</strong> was created by MPEG in order to start the <strong>standardisation activity on point cloud compression</strong>. In April 2017 MPEG issued a call for proposals on point cloud compression [mpe17] and divided the activity into three categories for static frames, dynamic sequences and dynamically acquired/fused point clouds. Nine leading technology companies responded to the CfP and the proposals were evaluated in October 2017. In addition to objective metrics, each proposal was also evaluated through subjective testing at GBTech and CWI. The <strong>winning proposals were selected as test models for the next step of the standardization activity.</strong></p>
<p>For the <strong>compression of dynamic sequences</strong> it was found that compression performance can be significantly improved by leveraging existing video codecs after performing a 3D-to-2D conversion using a suitable mapping scheme. This also allows the use of hardware acceleration for video codecs such as HEVC, which is supported by many current-generation GPUs. In this manner, synergies with existing hardware and software infrastructure can enable rapid deployment of new immersive experiences [ctb].</p>
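<p><em>The 3D-to-2D idea can be illustrated with a toy orthographic projection: each point lands on a pixel, producing a colour map and a depth map that an ordinary video encoder could then compress frame by frame. This is only a conceptual sketch of the principle under simplifying assumptions, not the mapping scheme selected by MPEG.</em></p>
<pre><code># Toy sketch of a 3D-to-2D conversion: orthographically project a point cloud onto
# a pixel grid, producing a colour image and a depth image. In a real codec the
# resulting per-frame images would be handed to a video encoder (e.g. HEVC).
import numpy as np

def project_to_maps(geometry, colour, resolution=256):
    """Project points with x, y, z in [0, 1] along +z onto a resolution x resolution grid."""
    depth_map = np.full((resolution, resolution), np.inf, dtype=np.float32)
    colour_map = np.zeros((resolution, resolution, 3), dtype=np.uint8)
    u = np.clip((geometry[:, 0] * (resolution - 1)).astype(int), 0, resolution - 1)
    v = np.clip((geometry[:, 1] * (resolution - 1)).astype(int), 0, resolution - 1)
    for i in range(len(geometry)):
        z = geometry[i, 2]
        if z < depth_map[v[i], u[i]]:          # keep the nearest point per pixel
            depth_map[v[i], u[i]] = z
            colour_map[v[i], u[i]] = colour[i]
    return depth_map, colour_map

rng = np.random.default_rng(1)
points = rng.uniform(0.0, 1.0, size=(50_000, 3)).astype(np.float32)
colours = rng.integers(0, 256, size=(50_000, 3), dtype=np.uint8)
depth, image = project_to_maps(points, colours)
print(depth.shape, image.shape)
</code></pre>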
<p>The next step was to <strong>identify and investigate methods to optimize the test models by performing core experiments</strong>. The core experiments for dynamic sequences explored different 3D-to-2D mapping schemes, hybrid codecs that combine 3D geometry compression (using octrees) with video codecs, and motion field coding.</p>
<p><figure id="attachment_375" aria-describedby="caption-attachment-375" style="width: 1024px" class="wp-caption aligncenter"><img loading="lazy" class="size-large wp-image-375" src="http://vrtogether.eu/wp-content/uploads/2018/02/freed-1024x576.jpg" alt="" width="1024" height="576" srcset="https://vrtogether.eu/wp-content/uploads/2018/02/freed-1024x576.jpg 1024w, https://vrtogether.eu/wp-content/uploads/2018/02/freed-300x169.jpg 300w, https://vrtogether.eu/wp-content/uploads/2018/02/freed-768x432.jpg 768w, https://vrtogether.eu/wp-content/uploads/2018/02/freed-700x394.jpg 700w, https://vrtogether.eu/wp-content/uploads/2018/02/freed-410x231.jpg 410w, https://vrtogether.eu/wp-content/uploads/2018/02/freed-100x56.jpg 100w, https://vrtogether.eu/wp-content/uploads/2018/02/freed-275x155.jpg 275w, https://vrtogether.eu/wp-content/uploads/2018/02/freed.jpg 1280w" sizes="(max-width: 1024px) 100vw, 1024px" /><figcaption id="caption-attachment-375" class="wp-caption-text">Figure 2: Free viewpoint live sports replay technology from Intel [ifr]</figcaption></figure>The <strong>AhG cross checked each test model</strong> and reviewed the results of the core experiments in the recent <strong>MPEG 121 meeting in Gwangju</strong>. Other developments from this meeting include the addition of new high quality point cloud datasets for further testing and the collection of input documents from MPEG members to further improve each of the test models.</p>
<p><strong>VR Together is actively participating in this group through <a href="https://www.cwi.nl/" target="_blank" rel="noopener noreferrer">CWI </a>and <a href="https://www.gpac-licensing.com/" target="_blank" rel="noopener noreferrer">Motion Spell</a>.</strong> We intend to contribute new compression technologies [Sub17], delivery mechanisms and rendering techniques to the ongoing MPEG standardisation activity.</p>
<h5>References</h5>
<p>[ctb] MPEG news: a report from the 120th meeting, Macau, China. <a href="https://multimediacommunication.blogspot.com.es/2017/12/mpeg-news-report-from-120th-meeting.html" target="_blank" rel="noopener noreferrer">https://multimediacommunication.blogspot.nl/2017/12/mpeg-news-report-from-120th-meeting.html</a></p>
<p>[ifr] Intel freeD technology. <a href="https://www.intel.com/content/www/us/en/sports/technology/intel-freed-360-replay-technology.html" target="_blank" rel="noopener noreferrer">https://www.intel.com/content/www/us/en/sports/technology/intel-freed-360-replay-technology.html</a></p>
<p>[MBC16] Rufael Mekuria, Kees Blom, and Pablo Cesar. Design, implementation and evaluation of a point cloud codec for tele-immersive video. IEEE Transactions on Circuits and Systems for Video Technology, January 2016.</p>
<p>[mpe17] Call for Proposals for Point Cloud Compression, ISO/IEC JTC1/SC29 WG11 N16732, Geneva, CH, January 2017.</p>
<p>[Sub17] Shishir Subramanyam. Interframe compression for 3D dynamic point clouds. Master’s thesis, Delft University of Technology, The Netherlands, November 2017.</p>
<p><em>Text and pictures: Shishir Subramanyam &#8211; <a href="https://www.cwi.nl/" target="_blank" rel="noopener noreferrer">CWI</a></em></p>
<p>The post <a rel="nofollow" href="https://vrtogether.eu/2018/02/06/mpeg-activity-on-point-cloud-compression/">MPEG Activity on Point Cloud Compression</a> appeared first on <a rel="nofollow" href="https://vrtogether.eu">VRTogether</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Understanding Point Cloud Quality Perception</title>
		<link>https://vrtogether.eu/2018/01/30/understanding-point-cloud-quality-perception/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=understanding-point-cloud-quality-perception</link>
		
		<dc:creator><![CDATA[CWI]]></dc:creator>
		<pubDate>Tue, 30 Jan 2018 06:22:57 +0000</pubDate>
				<category><![CDATA[News]]></category>
		<guid isPermaLink="false">http://vrtogether.eu/?p=364</guid>

					<description><![CDATA[<p>Human body point clouds were used in the experiment, taken from the 8i Voxelized Full Bodies (8iVFB v2) dataset VR-Together partner CWI conducted a series of mixed method experiments on understanding user perception of point cloud quality. 24 people participated in the experiment, which was divided into two parts. In the first part, participants were [&#8230;]</p>
<p>The post <a rel="nofollow" href="https://vrtogether.eu/2018/01/30/understanding-point-cloud-quality-perception/">Understanding Point Cloud Quality Perception</a> appeared first on <a rel="nofollow" href="https://vrtogether.eu">VRTogether</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p><img loading="lazy" class="aligncenter wp-image-365 size-full" src="http://vrtogether.eu/wp-content/uploads/2018/01/Pointcloud1.jpg" alt="" width="967" height="338" srcset="https://vrtogether.eu/wp-content/uploads/2018/01/Pointcloud1.jpg 967w, https://vrtogether.eu/wp-content/uploads/2018/01/Pointcloud1-300x105.jpg 300w, https://vrtogether.eu/wp-content/uploads/2018/01/Pointcloud1-768x268.jpg 768w, https://vrtogether.eu/wp-content/uploads/2018/01/Pointcloud1-700x245.jpg 700w, https://vrtogether.eu/wp-content/uploads/2018/01/Pointcloud1-410x143.jpg 410w, https://vrtogether.eu/wp-content/uploads/2018/01/Pointcloud1-100x35.jpg 100w, https://vrtogether.eu/wp-content/uploads/2018/01/Pointcloud1-275x96.jpg 275w" sizes="(max-width: 967px) 100vw, 967px" /></p>
<p style="text-align: center;"><em>Human body point clouds were used in the experiment, taken from the 8i Voxelixed Full Bodies (8iVFB v2) dataset</em></p>
<p><strong>VR-Together partner <a href="https://www.cwi.nl/" target="_blank" rel="noopener noreferrer">CWI</a></strong> conducted a series of mixed method experiments on <strong>understanding user perception of point cloud quality</strong>.</p>
<p><strong>24 people</strong> participated in the experiment, which was divided into two parts. In the first part, participants were asked to rate the quality of a compressed point cloud compared to its uncompressed version. This allows us to see how user quality ratings change across different compression parameters.</p>
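<p><em>Subjective ratings like these are commonly set against simple objective distortion measures computed between the compressed cloud and the original. The brute-force sketch below computes a symmetric point-to-point (nearest-neighbour) mean squared error as one such illustration; it is not the metric used in this experiment.</em></p>
<pre><code># Minimal sketch: symmetric point-to-point (nearest-neighbour) MSE between an
# original and a compressed point cloud. Brute force, for illustration only.
import numpy as np

def one_way_mse(src, dst):
    """Mean squared nearest-neighbour distance from each src point to dst."""
    # Full pairwise distance matrix: fine for small clouds, use a KD-tree for large ones.
    d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(axis=2)
    return d2.min(axis=1).mean()

def symmetric_p2p_mse(original, compressed):
    """Symmetric (max of both directions) point-to-point error."""
    return max(one_way_mse(original, compressed), one_way_mse(compressed, original))

rng = np.random.default_rng(2)
original = rng.uniform(0.0, 1.0, size=(1_000, 3))
compressed = original + rng.normal(scale=0.005, size=original.shape)  # simulated coding noise
print(symmetric_p2p_mse(original, compressed))
</code></pre>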
<p>In the second part of the experiment, participants were asked to perform a <strong>Sorted Napping</strong> task with point cloud stimuli. Using a tablet interface, they indicated groups of point clouds that are similar in quality, and explained why they chose to group the point clouds together or apart. This part of the experiment allowed participants to freely express what they consider important attributes of point cloud quality.</p>
<p>The experiments were done in a standard setup for subjective quality assessments, and serve as initial steps to create <strong>better evaluations of point clouds.</strong></p>
<p><em>Text and pictures: Ernestasia Siahaan &#8211; <a href="https://www.cwi.nl/" target="_blank" rel="noopener noreferrer">CWI</a></em></p>
<p>The post <a rel="nofollow" href="https://vrtogether.eu/2018/01/30/understanding-point-cloud-quality-perception/">Understanding Point Cloud Quality Perception</a> appeared first on <a rel="nofollow" href="https://vrtogether.eu">VRTogether</a>.</p>
]]></content:encoded>
					
		
		
			</item>
	</channel>
</rss>
