VR Viewport Pose Model for Quantifying and Exploiting Frame Correlations

Overview

This repository contains the introduction to the collected VRViewportPose dataset and the code for the IEEE INFOCOM 2022 paper: "VR Viewport Pose Model for Quantifying and Exploiting Frame Correlations" by Ying Chen, Hojung Kwon, Hazer Inaltekin, and Maria Gorlatova.

Outline

I. VRViewportPose Dataset

1. Data Collection

We conducted an IRB-approved data collection of the viewport pose in 3 VR games and across 3 different types of VR user interfaces, with over 5.5 hours of user data in total.

A. Stimuli

We collected the viewport pose for desktop, headset, and phone-based virtual reality (VR), using open-source VR games of different scene complexities from the Unity Asset Store: 1 indoor scenario (Office [1]) and 2 outdoor scenarios (Viking Village [2], Lite [3]). In desktop VR, rotational and translational movements are made with the mouse and the up arrow key. The poses in headset VR are collected with a standalone Oculus Quest 2 [4], where rotational and translational movements are made by moving the head and by using the controller thumbstick. The poses in phone-based VR are collected with a Google Pixel 2 XL and a Nokia 7.1 running Android 9; rotational and translational movements are made by moving the motion-sensor-equipped phone and by tapping on the screen with one finger.

Figure 1: Open-source VR games used for the data collection: (a) Office; (b) Viking Village; (c) Lite.

B. Procedure

The data collection, conducted under COVID-19 restrictions, combined unaided and Zoom-supported remote data collection (by distributing the desktop and phone-based VR apps) with a small number of socially distanced in-lab experiments for headset and phone-based VR. We recorded the viewport poses of 20 participants (9 male, 11 female, age 20-48) in desktop VR, 5 participants (2 male, 3 female, age 23-33) in headset VR, and 5 participants (3 male, 2 female, age 23-33) in phone-based VR. The participants were seated in front of a PC in desktop VR, wore the headset while standing in headset VR, and held a phone in landscape mode while standing in phone-based VR. For desktop and phone-based VR, each participant explored Viking Village, Lite, and Office for 5, 5, and 2 minutes, respectively. For headset VR, the participants explored each game for only 2 minutes to avoid simulator sickness. Considering the device computation capability and the screen refresh rate, the timestamp and viewport pose of each participant are recorded at a target frame rate of 60 Hz, 72 Hz, and 60 Hz for desktop, headset, and phone-based VR, respectively. For each frame, we record the timestamp, the x, y, z positions, and the roll, pitch, and yaw Euler orientation angles. For the Euler orientation angles β, γ, α, the intrinsic rotation order is adopted, i.e., the viewport is rotated α degrees around the z-axis, then β degrees around the x-axis, and then γ degrees around the y-axis. We randomize the initial viewport position over the whole bounding area of each VR game. We fix the initial polar angle of the viewport to 90 degrees and uniformly randomize the initial azimuth angle over [-180, 180) degrees.
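As a sanity check, the intrinsic z-x-y rotation order described above can be reproduced with SciPy's rotation utilities (a sketch; the example angles are arbitrary, and in SciPy uppercase axis letters denote intrinsic rotations):

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

# Intrinsic z-x-y order: alpha about z, then beta about x, then gamma about y.
alpha, beta, gamma = 30.0, 10.0, -20.0   # example angles in degrees

pose_rotation = R.from_euler("ZXY", [alpha, beta, gamma], degrees=True)
matrix = pose_rotation.as_matrix()       # the 3x3 rotation matrix of the viewport
```

The same `"ZXY"` sequence can be used with `as_euler` to recover roll, pitch, and yaw from a recorded rotation.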

2. Download the Dataset

The dataset can be downloaded here.

A. The structure of the dataset

The dataset follows the hierarchical file structure shown below:

VR_Pose
└───data_Desktop
│   │
│   └───Office_Desktop_1.txt
│   └───VikingVillage_Desktop_1.txt
│   └───Lite_Desktop_1.txt
│   └───Office_Desktop_2.txt
│   └───VikingVillage_Desktop_2.txt
│   └───Lite_Desktop_2.txt
│   ...
│
└───data_Oculus
│   │
│   └───Office_Oculus_1.txt
│   └───VikingVillage_Oculus_1.txt
│   └───Lite_Oculus_1.txt
│   └───Office_Oculus_2.txt
│   └───VikingVillage_Oculus_2.txt
│   └───Lite_Oculus_2.txt
│   ...
│
└───data_Phone
...

There are 3 sub-folders corresponding to the different VR interfaces. The data_Desktop subfolder contains 60 TXT files, corresponding to 20 participants each experiencing 3 VR games. The data_Oculus and data_Phone subfolders each contain 15 TXT files, corresponding to 5 participants experiencing 3 VR games. In total, there are over 5.5 hours of user data.
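A pose trace can be loaded with pandas; the sketch below assumes the column order described in the Procedure section (timestamp, x, y, z positions, then the Euler angles) and whitespace-separated values, both of which should be checked against the downloaded files:

```python
import io
import pandas as pd

# Assumed column layout, following the Procedure section.
COLUMNS = ["timestamp", "x", "y", "z", "roll", "pitch", "yaw"]

def load_pose_file(path_or_buffer):
    """Load one pose trace (e.g., Office_Desktop_1.txt) into a DataFrame."""
    return pd.read_csv(path_or_buffer, sep=r"\s+", header=None, names=COLUMNS)

# Usage with an in-memory two-frame sample in place of a real trace file:
sample = io.StringIO("0.016 1.0 1.6 2.0 0.0 90.0 0.0\n"
                     "0.033 1.1 1.6 2.1 0.5 89.5 0.2\n")
df = load_pose_file(sample)
```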

3. Extract the Orientation and Position Models

OrientationModel.py and PositionModel.py extract the orientation and position models for the VR viewport pose, respectively. Before running the scripts in this repository, download the repository and install the necessary tools and libraries, including scipy, numpy, pandas, fitter, and matplotlib.

A. Orientation model

Data processing

We convert the recorded Euler angles to the polar angle θ and azimuth angle ϕ. After applying the rotation matrix R, θ is calculated as

θ = arccos(sinα sinγ − cosα sinβ cosγ),

and ϕ is given by

ϕ = arctan((cosγ sinα sinβ + cosα sinγ) / (cosβ cosγ)).
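The Euler-to-spherical conversion can be sketched in Python as follows; the use of arccos/arctan2 to keep the angles in their standard ranges, and radian inputs, are assumptions of this sketch:

```python
import numpy as np

def euler_to_polar_azimuth(alpha, beta, gamma):
    """Convert intrinsic z-x-y Euler angles (radians) to (theta, phi).

    theta: polar angle in [0, pi]; phi: azimuth angle in (-pi, pi].
    """
    a, b, g = alpha, beta, gamma
    theta = np.arccos(np.sin(a) * np.sin(g) - np.cos(a) * np.sin(b) * np.cos(g))
    phi = np.arctan2(np.cos(g) * np.sin(a) * np.sin(b) + np.cos(a) * np.sin(g),
                     np.cos(b) * np.cos(g))
    return theta, phi
```

For the zero pose (α = β = γ = 0) this gives θ = 90 degrees and ϕ = 0, matching the fixed initial polar angle used in the data collection.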

After we obtain the polar and azimuth angles, we fit the polar angle, polar angle change, and azimuth angle change to a set of statistical models and mixed models (of two statistical models).
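A minimal version of this fitting step, using scipy.stats in place of the fitter package and synthetic Laplace samples as a stand-in for the real angle-change data:

```python
import numpy as np
from scipy import stats

# Synthetic stand-in for azimuth-angle-change samples (degrees); replace
# `samples` with values extracted from the pose dataset.
rng = np.random.default_rng(0)
samples = rng.laplace(loc=0.0, scale=2.0, size=5000)

# Fit each candidate distribution by maximum likelihood and compare
# log-likelihoods (the fitter package automates this over more candidates).
candidates = {"laplace": stats.laplace, "norm": stats.norm,
              "logistic": stats.logistic}
loglik = {}
for name, dist in candidates.items():
    params = dist.fit(samples)
    loglik[name] = np.sum(dist.logpdf(samples, *params))

best = max(loglik, key=loglik.get)  # candidate with the highest log-likelihood
```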

Orientation model script

The orientation model script is provided at https://github.com/VRViewportPose/VRViewportPose/blob/main/OrientationModel.py. To obtain the orientation model, follow the procedure below:

a. Download and extract the VR viewport pose dataset.

b. Change the filePath variable in OrientationModel.py to the file location of the pose dataset.

c. You can directly run OrientationModel.py (python .\OrientationModel.py). It will automatically run the pipeline.

d. The generated EPS images named "polar_fit_our_dataset.eps", "polar_change.eps", "azimuth_change.eps", and "ACF_our_dataset.eps" will be saved in a folder. "polar_fit_our_dataset.eps", "polar_change.eps", and "azimuth_change.eps" show the distribution of the experimental data for polar angle, polar angle change, and azimuth angle change fitted by different statistical distributions, respectively. "ACF_our_dataset.eps" shows the autocorrelation function (ACF) of polar and azimuth angle samples that are Δt s apart.
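The ACF shown in "ACF_our_dataset.eps" can be approximated with a simple sample estimator like the one below (a sketch; with poses recorded at 60 Hz, lag k corresponds to samples Δt = k/60 s apart):

```python
import numpy as np

def acf(x, max_lag):
    """Sample autocorrelation function of a 1-D series of angle samples."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    var = np.dot(x, x)  # unnormalized variance shared by all lags
    return np.array([np.dot(x[: len(x) - k], x[k:]) / var
                     for k in range(max_lag + 1)])
```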

B. Position model

Data processing

We apply the standard angle model proposed in [5] to extract flights from the trajectories. An example of the collected trajectory for one user in Lite and the extracted flights is shown below.
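The angle model of [5] can be sketched as follows: a new flight starts whenever the heading changes by more than a threshold angle. The threshold values and the 2-D (ground-plane) trajectory input are illustrative assumptions, not the exact parameters of [5]:

```python
import numpy as np

def extract_flights(positions, angle_threshold_deg=15.0, min_flight_len=0.5):
    """Split a 2-D trajectory into flights using a turning-angle threshold."""
    pts = np.asarray(positions, dtype=float)
    steps = np.diff(pts, axis=0)
    headings = np.arctan2(steps[:, 1], steps[:, 0])
    flights, start = [], 0
    for i in range(1, len(headings)):
        # Wrapped heading change between consecutive steps, in degrees.
        turn = np.degrees(abs(np.angle(np.exp(1j * (headings[i] - headings[i - 1])))))
        if turn > angle_threshold_deg:
            seg = pts[start:i + 1]
            if np.linalg.norm(seg[-1] - seg[0]) >= min_flight_len:
                flights.append(seg)
            start = i
    seg = pts[start:]
    if np.linalg.norm(seg[-1] - seg[0]) >= min_flight_len:
        flights.append(seg)
    return flights
```

For example, a straight walk followed by a right-angle turn yields two flights.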

Position model script

The position model script is provided at https://github.com/VRViewportPose/VRViewportPose/blob/main/PositionModel.py. To obtain the position model, follow the procedure below:

a. Download and extract the VR viewport pose dataset.

b. Change the filePath variable in PositionModel.py to the file location of the pose dataset.

c. You can directly run PositionModel.py (python .\PositionModel.py). It will automatically run the pipeline.

d. The generated EPS images named "flight_sample.eps", "flight.eps", "pausetime_distribution.eps", and "correlation.eps" will be saved in a folder. "flight_sample.eps" shows an example of the collected trajectories and the corresponding flights. "flight.eps" and "pausetime_distribution.eps" show distributions of the flight time and the pause duration for collected samples, respectively. "correlation.eps" shows the correlation of the azimuth angle and the walking direction.

II. Visibility Similarity

4. Analytical Results

The code for analyzing the visibility similarity can be downloaded here.

a. You will see three files after extracting the ZIP file. Analysis_Visibility_Similarity.m sets the parameters for the orientation model, the position model, and the visibility similarity model, and calculates the analytical results of visibility similarity. calculate_m_k.m calculates the k-th moment of the position displacement, and calculate_hypergeom.m calculates the hypergeometric function.

b. Run Analysis_Visibility_Similarity.m to obtain the analytical results of visibility similarity.
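MATLAB's hypergeom has a SciPy counterpart for the two-parameter case; assuming the Gauss hypergeometric function 2F1 is what is being evaluated, a Python equivalent looks like this:

```python
import numpy as np
from scipy.special import hyp2f1

# Gauss hypergeometric function 2F1(a, b; c; z), the counterpart of
# MATLAB's hypergeom([a, b], c, z).
value = hyp2f1(1.0, 1.0, 2.0, 0.5)

# Identity used as a sanity check: 2F1(1, 1; 2; z) = -ln(1 - z) / z.
```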

5. Implementation of ALG-ViS

The code implementing ALG-ViS can be downloaded here. It has been tested with Unity 2019.2.14f1 and an Oculus Quest 2 with build 30.0.

a. In Unity Hub, create a new 3D Unity project. Download the ZIP file and unzip it into the "Assets" folder of the Unity project.

b. Install Android 9.0 'Pie' (API Level 28) or higher using the SDK Manager in Android Studio.

c. Navigate to File>Build Settings>Player Settings. Set 'Minimum API Level' to be Android 9.0 'Pie' (API Level 28) or higher. In 'Other Settings', make sure only 'OpenGLES3' is selected. In 'XR Settings', check 'Virtual Reality Selected' and add 'Oculus' to the 'Virtual Reality SDKs'. Rename your 'CompanyName' and 'GameName', and the Bundle Identifier string com.CompanyName.GameName will be the unique package name of your application installed on the Oculus device.

d. Copy "pose.txt" and "visValue.txt" to the Application.persistentDataPath, which points to /storage/emulated/0/Android/data/<package name>/files, where <package name> is com.CompanyName.GameName.

e. Navigate to Window>Asset Store. Search for the virtual reality game (e.g., the 'Make Your Fantasy Game - Lite' game [3]) in the Asset Store, and select 'Buy Now' and 'Import'.

f. Make sure only the 'ALG_ViS' scene is selected in 'Scenes in Build'. Select your connected target device (Oculus Quest 2) and click 'Build and Run'.

g. The output APK package will be saved to the file path you specify, and the app will be installed on the Oculus Quest 2 connected to your computer.

h. Disconnect the Oculus Quest 2 from the computer. After setting up a new Guardian Boundary, the virtual reality game with ALG-ViS will be automatically loaded.

Citation

Please cite the following paper in your publications if the dataset or code helps your research.

 @inproceedings{Chen22VRViewportPose,
  title={{VR} Viewport Pose Model for Quantifying and Exploiting Frame Correlations},
  author={Chen, Ying and Kwon, Hojung and Inaltekin, Hazer and Gorlatova, Maria},
  booktitle={Proc. IEEE INFOCOM},
  year={2022}
}

Acknowledgments

We thank the study's participants for their time in the data collection. The contributors of the dataset and code are Ying Chen and Maria Gorlatova. For questions on this repository or the related paper, please contact Ying Chen at yc383 [AT] duke [DOT] edu.

References

[1] Unity Asset Store. (2020) Office. https://assetstore.unity.com/packages/3d/environments/snapsprototype-office-137490

[2] Unity Technologies. (2015) Viking Village. https://assetstore.unity.com/packages/essentials/tutorialprojects/viking-village-29140

[3] Xiaolianhua Studio. (2017) Lite. https://assetstore.unity.com/packages/3d/environments/fantasy/makeyour-fantasy-game-lite-8312

[4] Oculus. (2021) Oculus Quest 2. https://www.oculus.com/quest-2/

[5] I. Rhee, M. Shin, S. Hong, K. Lee, and S. Chong, “On the Levy-walk nature of human mobility,” in Proc. IEEE INFOCOM, 2008.
