NeuralBlox: Real-Time Neural Representation Fusion for Robust Volumetric Mapping

Paper | Supplementary


This repository contains the implementation of the paper:

NeuralBlox: Real-Time Neural Representation Fusion for Robust Volumetric Mapping
Stefan Lionar*, Lukas Schmid*, Cesar Cadena, Roland Siegwart, and Andrei Cramariuc
International Conference on 3D Vision (3DV) 2021
(*equal contribution)

If you find our code or paper useful, please consider citing us:

@inproceedings{lionar2021neuralblox,
 title = {NeuralBlox: Real-Time Neural Representation Fusion for Robust Volumetric Mapping},
 author = {Stefan Lionar and Lukas Schmid and Cesar Cadena and Roland Siegwart and Andrei Cramariuc},
 booktitle = {International Conference on 3D Vision (3DV)},
 year = {2021}
}

Installation

conda env create -f environment.yaml
conda activate neuralblox
pip install torch-scatter==2.0.4 -f https://pytorch-geometric.com/whl/torch-1.4.0+cu101.html

Note: Make sure torch-scatter and PyTorch are built against the same CUDA toolkit version. If your PyTorch installation uses a different CUDA toolkit version, run:

conda install pytorch==1.4.0 cudatoolkit=10.1 -c pytorch

Next, compile the extension modules. You can do this via

python setup.py build_ext --inplace

Optional: For noticeably faster inference in CPU-only settings, upgrade PyTorch and PyTorch Scatter to newer versions:

pip install torch==1.7.1+cu101 torchvision==0.8.2+cu101 -f https://download.pytorch.org/whl/torch_stable.html
pip install --upgrade --no-deps --force-reinstall torch-scatter==2.0.5 -f https://pytorch-geometric.com/whl/torch-1.7.1+cu101.html

Demo

To generate meshes using our pretrained models and evaluation dataset, select one of the configurations below and run it:

python generate_sequential.py configs/fusion/pretrained/redwood_0.5voxel_demo.yaml
python generate_sequential.py configs/fusion/pretrained/redwood_1voxel_demo.yaml
python generate_sequential.py configs/fusion/pretrained/redwood_1voxel_demo_cpu.yaml --no_cuda
  • The meshes will be generated in the out_mesh/mesh folder.
  • To add noise, change the values under test.scene.noise in the config files (see the sketch below).
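As a rough orientation, the noise setting sits under the test.scene block of the config file; the neighboring keys and the value below are illustrative only, not copied from the shipped configs:

test:
  scene:
    noise: 0.01   # illustrative value; see the shipped demo configs for the actual fields and scale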

Training backbone encoder and decoder

The backbone encoder and decoder mainly follow Convolutional Occupancy Networks (https://github.com/autonomousvision/convolutional_occupancy_networks) with some modifications adapted for our use case. Our pretrained model is provided in this repository.

Dataset

ShapeNet

The preprocessed ShapeNet dataset is from Occupancy Networks (https://github.com/autonomousvision/occupancy_networks). You can download it (73.4 GB) by running:

bash scripts/download_shapenet_pc.sh

After that, you should have the dataset in the data/ShapeNet folder.

Training

To train the backbone network from scratch, run

python train_backbone.py configs/pointcloud/shapenet_grid24_pe.yaml

Latent code fusion

The pretrained fusion network is also provided in this repository.

Training dataset

To train from scratch, you can download our preprocessed Redwood Indoor RGBD Scan dataset by running:

bash scripts/download_redwood_preprocessed.sh

We align the gravity direction to match ShapeNet ([0,1,0]) and convert the RGBD scans to the ShapeNet format.

More information about the dataset is provided here: http://redwood-data.org/indoor_lidar_rgbd/.

Training

To train the fusion network from scratch, run

python train_fusion.py configs/fusion/train_fusion_redwood.yaml

Adjust the path to the encoder-decoder model in training.backbone_file of the .yaml file if necessary (a sketch is shown below).
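For orientation, that entry might look roughly like this; the path below is purely illustrative:

training:
  backbone_file: out/backbone/model_best.pt   # illustrative path to your trained encoder-decoder checkpoint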

Generation

python generate_sequential.py CONFIG.yaml

If you are interested in generating meshes from another dataset, e.g., ScanNet:

  • Structure the dataset following the format in demo/redwood_apartment_13k.
  • Adjust path, data_preprocessed_interval, and intrinsics in the config file (see the sketch after this list).
  • If necessary, align the dataset to have the same gravity direction as ShapeNet by adjusting align in the config file.
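As a rough sketch, the corresponding entries could look like the following; the nesting, file names, and values are illustrative guesses rather than the exact schema of the shipped configs:

path: data/scannet/scene_000                  # illustrative dataset folder, structured like demo/redwood_apartment_13k
data_preprocessed_interval: 100               # illustrative value
intrinsics: data/scannet/intrinsics.txt       # illustrative path to the camera intrinsics
align: true                                   # illustrative; enable to align gravity with ShapeNet ([0,1,0])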

For example,

# ScanNet scene ID 0
python generate_sequential.py configs/fusion/pretrained/scannet_000.yaml

# ScanNet scene ID 24
python generate_sequential.py configs/fusion/pretrained/scannet_024.yaml

To use your own models, replace test.model_file (encoder-decoder) and test.merging_model_file (fusion network) in the config file with the paths to your models.
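A minimal sketch of those two entries, with purely illustrative paths:

test:
  model_file: out/backbone/model_best.pt         # illustrative path to your encoder-decoder weights
  merging_model_file: out/fusion/model_best.pt   # illustrative path to your fusion network weights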

Evaluation

You can evaluate the predicted meshes with respect to a ground truth mesh by following the steps below:

  1. Install CloudCompare:
sudo apt install cloudcompare
  2. Copy a ground truth mesh (no RGB information expected) to evaluation/mesh_gt.
  3. Copy prediction meshes to evaluation/mesh_pred.
  4. If the prediction mesh does not contain RGB information, such as the output from our method, run:
python evaluate.py

Otherwise, if it contains RGB information, such as the output from Voxblox, run:

python evaluate.py --color_mesh

We provide the trimmed mesh used for the ground truth of our quantitative evaluation. It can be downloaded here: https://polybox.ethz.ch/index.php/s/gedC9YpQPMPiucU/download

Lastly, to evaluate prediction meshes with respect to the trimmed mesh as ground truth, run:

python evaluate.py --demo

Or, for colored meshes (e.g., from Voxblox):

python evaluate.py --demo --color_mesh

An evaluation.csv file will be generated in the evaluation directory.

Acknowledgement

Some parts of the code are inherited from the official repository of Convolutional Occupancy Networks (https://github.com/autonomousvision/convolutional_occupancy_networks).
