OmniPhotos: Casual 360° VR Photography

Project Page | Video | Paper | Demo | Data

This repository contains the source code for creating and viewing OmniPhotos – a new approach for casual 360° VR photography using a consumer 360° video camera.

OmniPhotos: Casual 360° VR Photography
Tobias Bertel, Mingze Yuan, Reuben Lindroos, Christian Richardt
ACM Transactions on Graphics (SIGGRAPH Asia 2020)

Demo

The quickest way to try out OmniPhotos is via our precompiled demo (610 MB). Download and unzip to get started. Documentation for the precompiled binaries, which can also be downloaded separately (25 MB), can be found in the downloaded demo directory.
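
If you prefer to fetch and unpack the demo from a script, the minimal Python sketch below shows one way to do it. The download URL is a hypothetical placeholder; substitute the actual demo link above.

# Minimal sketch for downloading and unpacking the demo archive.
# DEMO_URL is a hypothetical placeholder; use the actual demo link above.
import urllib.request
import zipfile
from pathlib import Path

DEMO_URL = "https://example.com/OmniPhotos-Demo.zip"  # placeholder, not the real link
archive = Path("OmniPhotos-Demo.zip")

urllib.request.urlretrieve(DEMO_URL, archive)  # ~610 MB download
with zipfile.ZipFile(archive) as zf:
    zf.extractall("OmniPhotos-Demo")           # unzip to get started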

For the demo to run smoothly, we recommend a recently updated Windows 10 machine with a discrete GPU.

Additional OmniPhotos

We provide 31 OmniPhotos for download:

  • 9 preprocessed datasets that are ready for viewing (3.2 GB zipped, 12.8 GB uncompressed)
  • 31 unprocessed datasets with their input videos, camera poses etc.; this includes the 9 preprocessed datasets (17.4 GB zipped, 17.9 GB uncompressed)

Note: A few of the .insv files are missing for the 5.7k datasets. If you need to process these datasets from scratch (using the .insv files), the missing files can be found here.

How to view OmniPhotos

OmniPhotos are viewed using the "Viewer" executable, either in windowed mode (default) or in a compatible VR headset (see below). To run the viewer executable on the preprocessed datasets above, run the command:

Viewer.exe path-to-datasets/Preprocessed/

with paths adjusted for your machine. The viewer automatically loads the first dataset in the directory (in alphabetical order) and lets you load any of the other datasets it finds there.

If you would like to run the viewer with VR enabled, please ensure that your HMD firmware is up to date and that SteamVR is installed on your machine, then run the command:

Viewer.exe --vr path-to-datasets/Preprocessed/

The OmniPhotos viewer can also load a specific single dataset directly:

Viewer.exe [--vr] path-to-datasets/Preprocessed/Temple3/Config/config-viewer.yaml
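
If you want to script the viewer, for example to step through several datasets, the Python sketch below simply wraps the commands above using subprocess. The paths are placeholders to adjust for your machine, and it assumes Viewer.exe is on your PATH or in the current directory.

# Minimal sketch that wraps the Viewer.exe commands shown above.
# Paths are placeholders; adjust them for your machine.
import subprocess
from pathlib import Path

def run_viewer(target, vr=False, viewer_exe="Viewer.exe"):
    """Launch the OmniPhotos viewer on a dataset directory or a specific
    config-viewer.yaml, optionally with VR enabled (--vr)."""
    cmd = [viewer_exe]
    if vr:
        cmd.append("--vr")  # requires SteamVR and up-to-date HMD firmware
    cmd.append(str(target))
    subprocess.run(cmd, check=True)

# View all preprocessed datasets (the viewer lets you switch between them):
run_viewer(Path("path-to-datasets/Preprocessed"))

# Load a single dataset directly, with VR enabled:
run_viewer(Path("path-to-datasets/Preprocessed/Temple3/Config/config-viewer.yaml"), vr=True)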

How to preprocess datasets

If you would like to preprocess additional datasets, for example "Ship" in the "Unpreprocessed" directory, run the command:

Preprocessing.exe path-to-datasets/Unpreprocessed/Ship/Config/config-viewer.yaml

This will preprocess the dataset according to the options specified in the config file. Once the preprocessing is finished, the dataset can be opened in the Viewer.
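
To preprocess several datasets in one go, you can loop over their config files with a small script. The sketch below is one possible approach, assuming the directory layout shown above (one Config/config-viewer.yaml per dataset) and that Preprocessing.exe is on your PATH.

# Minimal sketch for batch-preprocessing all datasets in the Unpreprocessed directory.
# Assumes the <dataset>/Config/config-viewer.yaml layout used in the example above.
import subprocess
from pathlib import Path

unprocessed_root = Path("path-to-datasets/Unpreprocessed")  # adjust for your machine

for config in sorted(unprocessed_root.glob("*/Config/config-viewer.yaml")):
    print(f"Preprocessing {config.parent.parent.name} ...")
    subprocess.run(["Preprocessing.exe", str(config)], check=True)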

For processing new datasets from scratch, please follow the detailed documentation at Python/preprocessing/readme.md.

Compiling from source

The OmniPhotos Preprocessing and Viewer applications are written in C++11, with some Python used for preparing datasets.

Both main applications and the included libraries use CMake as their build system generator. We recommend CMake 3.16 or newer, but older 3.x versions might also work; an example configure command is shown after the dependency lists below.

Our code has been developed and tested with Microsoft Visual Studio 2015 and 2019 (both 64 bit).

Required dependencies

  1. GLFW 3.3 (version 3.3.1 works)
  2. Eigen 3.3 (version 3.3.2 works)
    • Please note: Ceres (an optional dependency) requires Eigen version "3.3.90" (~Eigen master branch).
  3. OpenCV 4.2
    • OpenCV 4.2 includes DIS flow in the main distribution, so precompiled OpenCV can be used.
    • OpenCV 4.1.1 needs to be compiled from source with the optflow contrib package (for DIS flow).
    • We also support the CUDA Brox flow from the cudaoptflow module, if it is compiled in. In this case, tick USE_CUDA_IN_OPENCV in CMake.
  4. OpenGL 4.1: provided by the operating system
  5. glog (a version newer than 0.4.0, e.g. current master, works)
  6. gflags (version 2.2.2 works)

Included dependencies (in /src/3rdParty/)

  1. DearImGui 1.79: included automatically as a git submodule.
  2. GL3W
  3. JsonCpp 1.8.0: amalgamated version
  4. nlohmann/json 3.6.1
  5. OpenVR 1.10.30: enable with WITH_OPENVR in CMake.
  6. TCLAP
  7. tinyfiledialogs 3.3.8

Optional dependencies

  1. Ceres (with SuiteSparse) is required for the scene-adaptive proxy geometry fitting. Enable with USE_CERES in CMake.
  2. googletest (master): automatically added when WITH_TEST is enabled in CMake.
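
For reference, a possible CMake configure command for a 64-bit Visual Studio 2019 build, combining the options mentioned above, might look like the following sketch; enable only the options that match the dependencies you have installed:

cmake -G "Visual Studio 16 2019" -A x64 -DWITH_OPENVR=ON -DUSE_CERES=ON -DWITH_TEST=OFF -DUSE_CUDA_IN_OPENCV=OFF path-to-source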

Citation

Please cite our paper if you use this code or any of our datasets:

@article{OmniPhotos,
  author    = {Tobias Bertel and Mingze Yuan and Reuben Lindroos and Christian Richardt},
  title     = {{OmniPhotos}: Casual 360° {VR} Photography},
  journal   = {ACM Transactions on Graphics},
  year      = {2020},
  volume    = {39},
  number    = {6},
  pages     = {266:1--12},
  month     = dec,
  issn      = {0730-0301},
  doi       = {10.1145/3414685.3417770},
  url       = {https://richardt.name/omniphotos/},
}

Acknowledgements

We thank the reviewers for their thorough feedback that has helped to improve our paper. We also thank Peter Hedman, Ana Serrano and Brian Cabral for helpful discussions, and Benjamin Attal for his layered mesh rendering code.

This work was supported by EU Horizon 2020 MSCA grant FIRE (665992), the EPSRC Centre for Doctoral Training in Digital Entertainment (EP/L016540/1), RCUK grant CAMERA (EP/M023281/1), an EPSRC-UKRI Innovation Fellowship (EP/S001050/1), a Rabin Ezra Scholarship and an NVIDIA Corporation GPU Grant.

Comments
  • Documentation pipeline update

    Pipeline to automatically create documentation on Read the Docs using Doxygen.

    • [ ] link to github page on index.html (mainpage.hpp)
    • [ ] add documentation convention to docs\README.md
    • [ ] fork branch from cr333/main and apply changes to that
    • [ ] create new PR from forked branch
    opened by reubenlindroos 1
  • Adds a progress bar when circleselector is running

    Also improves speed of the circleselector module by ~50%

    Todo:

    • [x] add tqdm to requirements.txt
    • [x] np.diffs for find_path_length
    • [x] atomic lock for incrementing the progress bar?
    opened by reubenlindroos 0
  • Circle Selector

    • [x] clean up requirements.txt
    • [x] save plot of heatmaps to the cache/dataset directory.
    • [x] move json file to capture directory
    • [x] update the template with option to switch off circlefitting
    • [x] update template to remove some of the options (e.g. op_filename_expression)
    • [x] Update README.md with automatic circle selection (section 2.2)
    • [x] Update documentation for installation?
    • [x] Linting (spacing), comment convention, Pep convention
    • [x] sort imports
    • [x] replace op_filename_expression with original_filename_expression

    cv_utils

    • [x] remove extra copy of computeColor
    • [x] change pjoin to os.path.join
    • [x] add more documentation for parameters in cv_utils (change lookatang to look_at_angle)
    • [x] 'nxt' to 'next'
    • [x] comment on line 100 (slice_equirect)

    datatypes

    • [x] more comments on some of the methods in PointDict
    opened by reubenlindroos 0
  • Documentation pipeline update

    Pipeline to automatically create documentation on Read the Docs using Doxygen.

    • [x] link to github page on index.html (mainpage.hpp)
    • [x] add documentation convention to docs\README.md
    • [x] fork branch from cr333/main and apply changes to that
    • [x] create new PR from forked branch
    • [x] remove documentation for header comment block in docs/README.md
    • [x] cleanup index.rst (try removing, see if sphinx can build anyway)
    • [ ] mainpage.hpp cleanup (capitalise, centralise)
    • [x] clarify line 48 in README.md
    opened by reubenlindroos 0
  • Demo updated

    Converts the demo documentation files from reStructuredText/Sphinx to Markdown hosted on GitHub. The site's Markdown rendering now displays the documentation files rather than Sphinx + Read the Docs.

    opened by reubenlindroos 0
  • Adds build test to master branch on push and PR

    build

    • [ ] change actions to not send email for every build
    • [x] fix requested changes
    • [x] make into squash merge to not mess with main branch history
    • [x] group build steps (building dependencies, which have been left separate for debugging purposes)
    • [x] check glog build variables in CMake (e.g. BUILD_TEST should not be enabled)
    • [x] check eigen warnings in build log
    • [x] check if precompiled headers might speed up build
    • [x] check if multithread build could be used
    • [x] remove verbose flag from extraction of opencv

    test

    • [x] add test data download
    • [x] reduce size of test dataset
    • [x] check what happens on failure
    • [ ] check if we can "publish" test results (xml?)
    opened by reubenlindroos 0
  • Get problems while preprocessing

    I downloaded all of the binary files from here: https://github.com/cr333/OmniPhotos/releases/download/v1.1/OmniPhotos-v1.1-win10-x64.zip

    I also put ffmpeg.exe on the system PATH. However, I'm getting the errors below:

    $ ./preproc/preproc.exe -c preproc-config-template.yaml
    [23276] Failed to execute script 'main' due to unhandled exception!
    Traceback (most recent call last):
      File "main.py", line 24, in <module>
      File "preproc_app.py", line 39, in __init__
      File "data_preprocessor.py", line 32, in __init__
      File "abs_preprocessor.py", line 70, in __init__
      File "abs_preprocessor.py", line 225, in load_origin_data_info
      File "ffmpeg_probe.py", line 20, in probe
      File "subprocess.py", line 800, in __init__
      File "subprocess.py", line 1207, in _execute_child
    FileNotFoundError: [WinError 2]


    opened by BlairLeng 2