PCAM: Product of Cross-Attention Matrices for Rigid Registration of Point Clouds

Anh-Quan Cao1,2, Gilles Puy1, Alexandre Boulch1, Renaud Marlet1,3
1valeo.ai, France and 2Inria, France and 3ENPC, France

If you find this code or work useful, please cite our paper:

@inproceedings{cao21pcam,
  title={{PCAM}: {P}roduct of {C}ross-{A}ttention {M}atrices for {R}igid {R}egistration of {P}oint {C}louds},
  author={Cao, Anh-Quan and Puy, Gilles and Boulch, Alexandre and Marlet, Renaud},
  booktitle={International Conference on Computer Vision (ICCV)},
  year={2021},
}

Preparation

Installation

  1. This code was implemented with Python 3.7, PyTorch 1.6.0 and CUDA 10.2. Please install PyTorch:
pip install torch==1.6.0 torchvision==0.7.0
  2. Part of the code (voxelisation) uses MinkowskiEngine 0.4.3. Please install it on your system:
sudo apt-get update
sudo apt install libgl1-mesa-glx
sudo apt install libopenblas-dev g++-7
export CXX=g++-7
pip install -U MinkowskiEngine==0.4.3 --install-option="--blas=openblas" -v
  3. Clone this repository and install the additional dependencies:
$ git clone https://github.com/valeoai/PCAM.git
$ cd PCAM/
$ pip install -r requirements.txt
  4. Install lightconvpoint [5], which is an early version of FKAConv:
$ pip install -e ./lcp
  5. Finally, install pcam:
$ pip install -e ./

Since pcam is installed in editable mode, you can edit its code on the fly and import pcam's functions and classes in other projects as well.
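
As a quick sanity check of the installation, the versions of the main dependencies can be printed from a Python shell (a minimal sketch; it assumes the packages installed above are importable as torch, MinkowskiEngine, lightconvpoint and pcam):

# Minimal environment check (sketch): print the installed versions.
import torch
import MinkowskiEngine as ME
import lightconvpoint  # assumed import name for the package installed from ./lcp
import pcam

print("PyTorch:", torch.__version__)        # expected: 1.6.0
print("CUDA available:", torch.cuda.is_available())
print("MinkowskiEngine:", ME.__version__)   # expected: 0.4.3
print("pcam imported from:", pcam.__file__)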

Datasets

3DMatch and KITTI

Follow the instructions in the DGR GitHub repository to download both datasets.

Place 3DMatch in the folder /path/to/pcam/data/3dmatch/, which should have the structure described here.

Place KITTI in the folder /path/to/pcam/data/kitti/, which should have the structure described here.

You can create soft links with the command ln -s if the datasets are stored somewhere else on your system.
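
For example, assuming the datasets already live under /path/to/datasets/ (a placeholder path), the links could be created as follows:

$ ln -s /path/to/datasets/3dmatch /path/to/pcam/data/3dmatch
$ ln -s /path/to/datasets/kitti /path/to/pcam/data/kitti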

For these datasets, we use the same dataloaders as in DGR [1-3], up to a few modifications for code compatibility.

Modelnet40

Download the dataset here and unzip it in the folder /path/to/pcam/data/modelnet/, which should have the structure described here.

Again, you can create soft links with the command ln -s if the datasets are stored somewhere else on your system.

For this dataset, we use the same dataloader as in PRNet [4], up to a few modifications for code compatibility.

Pretrained models

Download PCAM pretrained models here and unzip the file in the folder /path/to/pcam/trained_models/, which should have the structure described here.

Testing PCAM

As we randomly subsample the point clouds in PCAM, there are some slight variations from one run to another. In our paper, we ran 3 independent evaluations on the complete test set and averaged the scores.
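
To reproduce this protocol, the evaluation commands given below can simply be wrapped in a small shell loop and the three printed scores averaged by hand, for example (a sketch using the 3DMatch PCAM-soft configuration introduced in the next subsection):

$ cd /path/to/pcam/scripts/
$ for run in 1 2 3; do python eval.py with ../configs/3dmatch/soft.yaml; done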

3DMatch

We provide two different pre-trained models for 3DMatch: one for PCAM-sparse and one for PCAM-soft, both trained using 4096 input points.

To test the PCAM-soft model, type:

$ cd /path/to/pcam/scripts/
$ python eval.py with ../configs/3dmatch/soft.yaml

To test the PCAM-sparse model, type:

$ cd /path/to/pcam/scripts/
$ python eval.py with ../configs/3dmatch/sparse.yaml

Optional

As in DGR [1], the results can be improved using different levels of post-processing.

  1. Keeping only the pairs of points with the highest confidence scores (the threshold was optimised on the validation set of 3DMatch):
$ cd /path/to/pcam/scripts/
$ python eval.py with ../configs/3dmatch/soft_filter.yaml
$ python eval.py with ../configs/3dmatch/sparse_filter.yaml
  2. Using, in addition, the refinement by optimisation proposed by DGR [1]:
$ cd /path/to/pcam/scripts/
$ python eval.py with ../configs/3dmatch/soft_refinement.yaml
$ python eval.py with ../configs/3dmatch/sparse_refinement.yaml
  3. Using as well the safeguard proposed by DGR [1]:
$ cd /path/to/pcam/scripts/
$ python eval.py with ../configs/3dmatch/soft_safeguard.yaml
$ python eval.py with ../configs/3dmatch/sparse_safeguard.yaml

Note: For a fair comparison, we fixed the safeguard condition so that it is applied to the same proportion of scans as in DGR [1].

KITTI

We provide two different pre-trained models for KITTI: one for PCAM-sparse and one for PCAM-soft, both trained using 2048 input points.

To test the PCAM-soft model, type:

$ cd /path/to/pcam/scripts/
$ python eval.py with ../configs/kitti/soft.yaml

To test the PCAM-sparse model, type:

$ cd /path/to/pcam/scripts/
$ python eval.py with ../configs/kitti/sparse.yaml

Optional

As in DGR [1], the results can be improved by refining them with ICP.

$ cd /path/to/pcam/scripts/
$ python eval.py with ../configs/kitti/soft_icp.yaml
$ python eval.py with ../configs/kitti/sparse_icp.yaml 

ModelNet40

There are three different variants of this dataset. Please refer to [4] for the construction of these variants.

Unseen objects

To test the PCAM models, type:

$ cd /path/to/pcam/scripts/
$ python eval.py with ../configs/modelnet/soft.yaml
$ python eval.py with ../configs/modelnet/sparse.yaml

Unseen categories

To test the PCAM models, type:

$ cd /path/to/pcam/scripts/
$ python eval.py with ../configs/modelnet/soft_unseen.yaml
$ python eval.py with ../configs/modelnet/sparse_unseen.yaml

Unseen objects with noise

To test the PCAM models, type:

$ cd /path/to/pcam/scripts/
$ python eval.py with ../configs/modelnet/soft_noise.yaml
$ python eval.py with ../configs/modelnet/sparse_noise.yaml

Training

The models are saved in the folder /path/to/pcam/trained_models/new_training/{DATASET}/{CONFIG}, where {DATASET} is the name of the dataset and {CONFIG} describes the PCAM architecture and the losses used for training.
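
For example, after a training run you can list what was produced without knowing the exact {CONFIG} string in advance (a sketch; the glob simply expands to whatever dataset folders exist):

$ ls /path/to/pcam/trained_models/new_training/*/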

3DMatch

To train a PCAM-soft model, type:

$ cd /path/to/pcam/scripts/
$ python train.py with ../configs/3dmatch/soft.yaml

You can then test this new model by typing:

$ python eval.py with ../configs/3dmatch/soft.yaml PREFIX='new_training'

To train a PCAM-sparse model, type:

$ cd /path/to/pcam/scripts/
$ python train.py with ../configs/3dmatch/sparse.yaml

Training took about 12 days on an Nvidia Tesla V100S-32GB.

You can then test this new model by typing:

$ python eval.py with ../configs/3dmatch/sparse.yaml PREFIX='new_training'

KITTI

To train PCAM models, type:

$ cd /path/to/pcam/scripts/
$ python train.py with ../configs/kitti/soft.yaml
$ python train.py with ../configs/kitti/sparse.yaml

Training took about 1 day on an Nvidia GeForce RTX 2080 Ti.

You can then test these new models by typing:

$ python eval.py with ../configs/kitti/soft.yaml PREFIX='new_training'
$ python eval.py with ../configs/kitti/sparse.yaml PREFIX='new_training'

ModelNet

Training PCAM on ModelNet took about 10 hours on an Nvidia GeForce RTX 2080.

Unseen objects

To train PCAM models, type:

$ cd /path/to/pcam/scripts/
$ python train.py with ../configs/modelnet/soft.yaml NB_EPOCHS=10
$ python train.py with ../configs/modelnet/sparse.yaml NB_EPOCHS=10

You can then test these new models by typing:

$ python eval.py with ../configs/modelnet/soft.yaml PREFIX='new_training'
$ python eval.py with ../configs/modelnet/sparse.yaml PREFIX='new_training'

Unseen categories

To train PCAM models, type:

$ cd /path/to/pcam/scripts/
$ python train.py with ../configs/modelnet/soft_unseen.yaml NB_EPOCHS=10
$ python train.py with ../configs/modelnet/sparse_unseen.yaml NB_EPOCHS=10

You can then test these new models by typing:

$ python eval.py with ../configs/modelnet/soft_unseen.yaml PREFIX='new_training'
$ python eval.py with ../configs/modelnet/sparse_unseen.yaml PREFIX='new_training'

Unseen objects with noise

To train PCAM models, type:

$ cd /path/to/pcam/scripts/
$ python train.py with ../configs/modelnet/soft_noise.yaml NB_EPOCHS=10
$ python train.py with ../configs/modelnet/sparse_noise.yaml NB_EPOCHS=10

You can then test these new models by typing:

$ python eval.py with ../configs/modelnet/soft_noise.yaml PREFIX='new_training'
$ python eval.py with ../configs/modelnet/sparse_noise.yaml PREFIX='new_training'

References

[1] Christopher Choy, Wei Dong, Vladlen Koltun. Deep Global Registration. CVPR, 2020.

[2] Christopher Choy, Jaesik Park, Vladlen Koltun. Fully Convolutional Geometric Features. ICCV, 2019.

[3] Christopher Choy, JunYoung Gwak, Silvio Savarese. 4D Spatio-Temporal ConvNets: Minkowski Convolutional Neural Networks. CVPR, 2019.

[4] Yue Wang and Justin M. Solomon. PRNet: Self-Supervised Learning for Partial-to-Partial Registration. NeurIPS, 2019.

[5] Alexandre Boulch, Gilles Puy, Renaud Marlet. FKAConv: Feature-Kernel Alignment for Point Cloud Convolution. ACCV, 2020.

License

PCAM is released under the Apache 2.0 license.
