Compute descriptors for 3D point cloud registration using a multi-scale sparse voxel architecture

Overview

MS-SVConv: 3D Point Cloud Registration with Multi-Scale Architecture and Self-supervised Fine-tuning

MS-SVConv computes features for 3D point cloud registration. The article is available on arXiv. It relies on:

  • A multi-scale sparse voxel architecture
  • Self-supervised fine-tuning

The combination of both allows better generalization capabilities and transfer across different datasets.

The code is available on the torch-points3d repository. This repository shows how to launch the code for training and testing.

Demo

If you want to try MS-SVConv without installing anything on your computer, a Google Colab notebook is available here (it takes a few minutes to install everything). In the notebook, we compute features using MS-SVConv and use RANSAC (the Open3D implementation) to compute the transformation. You can try it on 3DMatch or ETH. With this notebook, you can directly use the pretrained model in your project!
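For reference, the RANSAC step of the notebook follows Open3D's feature-based registration API. Below is a minimal sketch (assuming Open3D >= 0.12), where pcd_src/pcd_tgt are o3d.geometry.PointCloud objects and desc_src/desc_tgt are (N, D) numpy arrays of MS-SVConv descriptors you have already computed; the names and thresholds are illustrative, not the notebook's exact code:

import numpy as np
import open3d as o3d

def to_o3d_feature(desc):
    # Wrap an (N, D) descriptor array into an Open3D Feature (stored as D x N).
    feat = o3d.pipelines.registration.Feature()
    feat.data = desc.T.astype(np.float64)
    return feat

def ransac_transform(pcd_src, pcd_tgt, desc_src, desc_tgt, dist_thresh=0.05):
    # Estimate the source-to-target transformation with feature-based RANSAC.
    result = o3d.pipelines.registration.registration_ransac_based_on_feature_matching(
        pcd_src, pcd_tgt,
        to_o3d_feature(desc_src), to_o3d_feature(desc_tgt),
        True,         # mutual_filter: keep only mutually nearest matches
        dist_thresh,  # maximum correspondence distance (in meters)
        o3d.pipelines.registration.TransformationEstimationPointToPoint(False),
        3,            # number of points sampled per RANSAC hypothesis
        [],           # no extra correspondence checkers
        o3d.pipelines.registration.RANSACConvergenceCriteria(100000, 0.999))
    return result.transformation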

Installation

The code has been tested on an NVIDIA GTX 1080 Ti with CUDA version 10.1. The OS was Ubuntu 18.04.

Installation for training and evaluation

This installation step is necessary if you want to train and evaluate MS-SVConv.

First, you need to clone the torch-points3d repository:

git clone https://github.com/nicolas-chaulet/torch-points3d.git

Torch-points3d uses Poetry to manage its packages. After installing Poetry, run:

poetry install --no-root

Activate the environment:

poetry shell

If you want to train MS-SVConv on 3DMatch, you will need pycuda (it is optional for testing):

pip install pycuda

You will also need to install MinkowskiEngine and torchsparse. Finally, you will need TEASER++ for testing.
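For reference, typical install commands look like the following (indicative only; check each project's instructions for the flags matching your CUDA and PyTorch versions, and note that TEASER++ is built from source following its own documentation):

pip install MinkowskiEngine

pip install git+https://github.com/mit-han-lab/torchsparse.git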

If you have problems with the installation (especially with pytorch_geometric), please visit the Troubleshooting section of the torch-points3d page.

Training

Registration

If you want to train MS-SVConv with 3 heads starting at a scale of 2 cm, run this command:

poetry run python train.py task=registration model_type=ms_svconv_base model_name=MS_SVCONV_B2cm_X2_3head dataset=fragment3dmatch training=sparse_fragment_reg tracker_options.make_submission=True training.epochs=200 eval_frequency=10

The code will automatically pick the right yaml file in conf/data/registration for the dataset and conf/models/registration for the model. If you just want to train MS-SVConv with 1 head, run this command:

poetry run python train.py task=registration models=registration/ms_svconv_base model_name=MS_SVCONV_B2cm_X2_1head data=registration/fragment3dmatch training=sparse_fragment_reg tracker_options.make_submission=True epochs=200 eval_frequency=10
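For example, in the command above, Hydra resolves the overrides to files such as these (assuming the standard torch-points3d layout; the exact file names may differ):

conf/data/registration/fragment3dmatch.yaml

conf/models/registration/ms_svconv_base.yaml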

You can modify some hyperparameters directly on the command line. For example, if you want to change the learning rate to 1e-2, you can run:

poetry run python train.py task=registration models=registration/ms_svconv_base model_name=MS_SVCONV_B2cm_X2_1head data=registration/fragment3dmatch training=sparse_fragment_reg tracker_options.make_submission=True epochs=200 eval_frequency=10 optim.base_lr=1e-2

To resume training:

poetry run python train.py task=registration models=registration/ms_svconv_base model_name=MS_SVCONV_B2cm_X2_3head data=registration/fragment3dmatch training=sparse_fragment_reg tracker_options.make_submission=True epochs=200 eval_frequency=10 checkpoint_dir=/path/of/directory/containing/pretrained/model

WARNING: On 3DMatch, you will need a lot of disk space, because the code will download the RGB-D images of 3DMatch and build the fragments from scratch. This also takes time (a few hours).

On 3DMatch, training is supervised, because the ground-truth pose is required. But we can also fine-tune in a self-supervised fashion (without needing the pose).

To train on ModelNet, run this command:

poetry run python train.py task=registration models=registration/ms_svconv_base model_name=MS_SVCONV_B2cm_X2_3head data=registration/modelnet_sparse_ss training=sparse_fragment_reg tracker_options.make_submission=True epochs=200 eval_frequency=10

To fine-tune on ETH, run this command (first, download the model pretrained on 3DMatch here):

poetry run python train.py task=registration models=registration/ms_svconv_base model_name=MS_SVCONV_B4cm_X2_3head data=registration/eth_base training=sparse_fragment_reg_finetune tracker_options.make_submission=True epochs=200 eval_frequency=10 models.path_pretrained=/path/to/your/pretrained/model.pt

To fine-tune on TUM, run this command:

poetry run python train.py task=registration models=registration/ms_svconv_base model_name=MS_SVCONV_B4cm_X2_3head data=registration/testtum_ss training=sparse_fragment_reg_finetune tracker_options.make_submission=True epochs=200 eval_frequency=10 models.path_pretrained=/path/to/your/pretrained/model.pt

For all these commands, the training logs are saved in the outputs directory, along with a .pt file containing the weights of the trained model.
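If you want to inspect such a checkpoint outside the training scripts, a minimal sketch is below (run it inside the poetry environment so that any torch-points3d classes stored in the file can be unpickled; the file name is an example):

import torch

ckpt = torch.load("MS_SVCONV_B2cm_X2_3head.pt", map_location="cpu")
print(list(ckpt.keys()))  # typically the model weights plus training/run state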

Semantic segmentation

You can also train MS-SVConv on ScanNet for semantic segmentation. To do this, simply run:

poetry run python train.py task=segmentation models=segmentation/ms_svconv_base model_name=MS_SVCONV_B4cm_X2_3head lr_scheduler.params.gamma=0.9922 data=segmentation/scannet-sparse training=minkowski_scannet tracker_options.make_submission=False tracker_options.full_res=False data.process_workers=1 wandb.log=True eval_frequency=10 batch_size=4

And you can easily transfer from registration to segmentation with this command:

poetry run python train.py task=segmentation models=segmentation/ms_svconv_base model_name=MS_SVCONV_B4cm_X2_3head lr_scheduler.params.gamma=0.9922 data=segmentation/scannet-sparse training=minkowski_scannet tracker_options.make_submission=False tracker_options.full_res=False data.process_workers=1 wandb.log=True eval_frequency=10 batch_size=4 models.path_pretrained=/path/to/your/pretrained/model.pt

Evaluation

If you want to evaluate the models on 3DMatch, download the model here and run:

poetry run python scripts/test_registration_scripts/evaluate.py task=registration models=registration/ms_svconv_base model_name=MS_SVCONV_B2cm_X2_3head data=registration/fragment3dmatch training=sparse_fragment_reg cuda=True data.sym=True checkpoint_dir=/directory/of/the/models/

On ETH (model here), run:

poetry run python scripts/test_registration_scripts/evaluate.py task=registration models=registration/ms_svconv_base model_name=MS_SVCONV_B4cm_X2_3head data=registration/eth_base training=sparse_fragment_reg cuda=True data.sym=True checkpoint_dir=/directory/of/the/models/

On TUM (model here), run:

poetry run python scripts/test_registration_scripts/evaluate.py task=registration models=registration/ms_svconv_base model_name=MS_SVCONV_B2cm_X2_3head data=registration/testtum_ss training=sparse_fragment_reg cuda=True data.sym=True checkpoint_dir=/directory/of/the/models/

You can also visualize matches by running:

python scripts/test_registration_scripts/see_matches.py task=registration models=registration/ms_svconv_base model_name=MS_SVCONV_B4cm_X2_3head data=registration/eth_base training=sparse_fragment_reg cuda=True data.sym=True checkpoint_dir=/directory/of/the/models/ data.first_subsampling=0.04 +ind=548 +t=22

You should obtain this image

Model Zoo

You can find all the pretrained models (more will be added in the future).

Citation

If you like our work, please cite it:

@article{horache2021mssvconv,
      title={3D Point Cloud Registration with Multi-Scale Architecture and Self-supervised Fine-tuning},
      author={Sofiane Horache and Jean-Emmanuel Deschaud and François Goulette},
      journal={arXiv preprint arXiv:2103.14533},
      year={2021}
}

And if you use the ETH, 3DMatch, TUM, or ModelNet datasets, please cite their respective authors.

TODO

  • Add other pretrained models to the model zoo
  • Add other datasets, such as KITTI