ROSEFusion 🌹

This project is based on our SIGGRAPH 2021 paper, ROSEFusion: Random Optimization for Online DenSE Reconstruction under Fast Camera Motion.

Introduction

ROSEFusion is proposed to tackle the difficulties of camera tracking under fast motion, using random optimization with depth information only. Our method attains high-quality pose tracking under fast camera motion at a real-time frame rate, without loop closure or global pose optimization.

Installation

The code is based on C++ and CUDA with the support of:

  • Pangolin
  • OpenCV with CUDA (v4.5 is required; for installation, you can follow the link)
  • Eigen
  • CUDA (v11 or above is required)

Before building, please make sure the architecture flags (sm_xx and compute_xx) on line 22 of CMakeLists.txt are compatible with your own graphics card.
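For reference, here is a hedged sketch of what such a setting might look like; the exact variable name and form on line 22 may differ in your copy, and compute_75/sm_75 is only an example matching Turing cards such as the RTX 2080 SUPER:

# Hypothetical example only -- adjust the compute capability to your GPU
set(CUDA_NVCC_FLAGS ${CUDA_NVCC_FLAGS};-gencode arch=compute_75,code=sm_75)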

Our code has been tested with Nvidia GeForce RTX 2080 SUPER on Ubuntu 16.04.

[Optional] Test with Docker

We have already uploaded a Docker image with all the libraries, code, and data. Please download the image from Google Drive.

Prepare

Make sure you have successfully installed Docker and NVIDIA Docker. Once the environment is ready, you can use the following commands to load and run the Docker image:

sudo docker load -i rosefusion_docker.tar 
sudo docker run -it  --gpus all jiazhao/rosefusion:v7 /bin/bash

Please also check that the architecture on line 22 of /home/code/ROSEFusion-main/CMakeLists.txt is compatible with your own graphics card. If not, change sm_xx and compute_xx accordingly, then rebuild ROSEFusion.
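The rebuild inside the container would follow a standard CMake workflow; a minimal sketch, assuming the existing build directory from the image layout above:

cd /home/code/ROSEFusion-main/build
cmake ..
make -j$(nproc)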

QuickStart

All the data and configuration files are ready to use. You can find run_example.sh and run_stairwell.sh in /home/code/ROSEFusion-main/build. After running the scripts, the trajectory and reconstruction results will be generated in /home/code/rosefusion_xxx_data.
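For example, running the example sequence from inside the container might look like this (a minimal sketch using the script names listed above):

cd /home/code/ROSEFusion-main/build
./run_example.sh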

Configuration File

We use the following configuration files to make parameter setting easier. There are four types of configuration files:

  • seq_generation_config.yaml: data information.
  • camera_config.yaml: camera and image information.
  • data_config.yaml: output path, sequence file path, and volume parameters.
  • controller_config.yaml: visualization, saving, and tracking parameters.

The seq_generation_config.yaml file is only used in data preparation; the other three types of configuration files are necessary to run the fusion part. Configuration files for many common datasets are given in the [type]_config/ directories, and you can change the settings to fit your own dataset.
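As a rough illustration of how such a file is organized, here is a hypothetical sketch of a data_config.yaml; the key names are invented for illustration and do not necessarily match the real schema, so please refer to the provided data_config/ examples for the actual format:

# Hypothetical sketch -- key names are illustrative, not the actual schema
output_path: /path/to/output/          # where trajectory and reconstruction results are written
seq_file_path: /path/to/sequence.seq   # the .seq file produced by seq_gen
volume_size: [512, 512, 512]           # volume parameters
voxel_size: 0.01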

Data Preparation

The details of data preparation can be found in src/seq_gen.cpp. Using the seq_generation_config.yaml introduced above, you can run the program:

./seq_gen  sequence_information.yaml

Once finished, there will be a .seq file containing all the information of the sequence.

Particle Swarm Template

We share the same pre-sampled PSTs as used in our paper. Each PST is saved as an N×6 image, where N is the number of particles. You can find the .tiff images in the PST directory; please replace the PST path in controller_config.yaml with your own path.
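As a quick sanity check (not part of the original code), the following sketch loads a PST image with OpenCV and prints its dimensions; it assumes your OpenCV build has TIFF support, and N should appear as the number of rows:

#include <iostream>
#include <opencv2/imgcodecs.hpp>

int main(int argc, char** argv) {
    if (argc < 2) {
        std::cerr << "usage: inspect_pst <path-to-PST.tiff>" << std::endl;
        return 1;
    }
    // Load the template without conversion so the original depth/channels are kept.
    cv::Mat pst = cv::imread(argv[1], cv::IMREAD_UNCHANGED);
    if (pst.empty()) {
        std::cerr << "failed to read " << argv[1] << std::endl;
        return 1;
    }
    // Expect an N x 6 image: N particles, 6 pose-perturbation components per particle.
    std::cout << "particles (rows): " << pst.rows
              << ", components (cols): " << pst.cols
              << ", OpenCV type: " << pst.type() << std::endl;
    return 0;
}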

Running

To run the fusion code, you need to provide camera_config.yaml, data_config.yaml, and controller_config.yaml. We already share configuration files for many common datasets in ./camera_config, ./data_config, and ./controller_config. All configuration parameters can be modified as you wish. With all the preparation done, you can run the command below:

./ROSEFusion your_camera_config.yaml your_data_config.yaml your_controller_config.yaml

For a quick start, you can download and use a small synthetic .seq file and the related configuration files. Here is a preview.

FastCaMo Dataset

We present the Fast Camera Motion (FastCaMo) dataset, which contains both synthetic and real captured sequences. You are welcome to download the sequences and give them a try.

FastCaMo-Synth

Using 10 diverse room-scale scenes from the Replica Dataset, we render color images and depth maps along synthetic trajectories. The raw sequences are provided in FastCaMo-synth-data(raw).zip, and we also provide FastCaMo-synth-data(noise).zip with synthetic noise, using the same noise model as simkinect. For evaluation, you can download the ground-truth trajectories.

FastCaMo-Real

We release 12 real captured RGB-D sequences with fast camera motions. Each sequence is recorded in a challenging scene, such as a gym or stairwell, using an Azure Kinect DK. We offer a full and dense reconstruction scanned with a high-end laser scanner, serving as ground truth. However, because the original files are extremely large, we will share the dense reconstructions on another platform or release only a sub-sampled version.

Citation

If you find our work useful in your research, please consider citing:

@article {zhang_sig21,
    title = {ROSEFusion: Random Optimization for Online Dense Reconstruction under Fast Camera Motion},
    author = {Jiazhao Zhang and Chenyang Zhu and Lintao Zheng and Kai Xu},
    journal = {ACM Transactions on Graphics (SIGGRAPH 2021)},
    volume = {40},
    number = {4},
    year = {2021}
}

Acknowledgments

Our code is inspired by KinectFusionLib.

This is an open-source version of ROSEFusion; some functions have been rewritten to avoid certain license issues. It is not expected to reproduce the paper's results exactly, but the results are almost the same.

License

The source code is released under the GPLv3 license.

Contact

If you have any questions, feel free to email Jiazhao Zhang at [email protected].
