[ICCV 2021] HRegNet: A Hierarchical Network for Large-scale Outdoor LiDAR Point Cloud Registration


Introduction

This repository contains the source code and pre-trained models of our paper (published at ICCV 2021): HRegNet: A Hierarchical Network for Large-scale Outdoor LiDAR Point Cloud Registration.

The overall network architecture is shown below:

Environments

The required libraries are listed in requirements.txt; please check it for the full environment requirements.

Please run the following commands to install point_utils:

cd models/PointUtils
python setup.py install
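
If the build succeeds, the compiled extension should be importable from Python. A minimal sanity check (the module name point_utils is an assumption based on the install step above and may differ for your build):

import point_utils  # assumed module name from the install step; adjust if your build installs under a different name
print("point_utils imported successfully")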

Training device: NVIDIA RTX 3090

Datasets

The point cloud pair lists and the ground-truth relative transformations are stored in data/kitti_list and data/nuscenes_list. The data of the two datasets should be organized as follows:

KITTI odometry dataset

DATA_ROOT
├── 00
│   ├── velodyne
│   ├── calib.txt
├── 01
├── ...
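
For reference, the raw KITTI odometry scans under velodyne are binary files of float32 (x, y, z, reflectance) records. A minimal sketch of reading one scan with numpy (the helper name and arguments are illustrative and not part of this repo):

import os
import numpy as np

def load_kitti_scan(data_root, sequence, frame_id):
    # Each scan stores consecutive float32 values, 4 per point: x, y, z, reflectance.
    path = os.path.join(data_root, sequence, "velodyne", "%06d.bin" % frame_id)
    return np.fromfile(path, dtype=np.float32).reshape(-1, 4)

# Example: the first scan of sequence 00
# points = load_kitti_scan("/path/to/DATA_ROOT", "00", 0)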

NuScenes dataset

DATA_ROOT
├── v1.0-trainval
│   ├── maps
│   ├── samples
│   │   ├──LIDAR_TOP
│   ├── sweeps
│   ├── v1.0-trainval
├── v1.0-test
│   ├── maps
│   ├── samples
│   │   ├──LIDAR_TOP
│   ├── sweeps
│   ├── v1.0-test
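
Similarly, the nuScenes LIDAR_TOP sweeps are .pcd.bin files containing float32 (x, y, z, intensity, ring index) records. A minimal sketch of reading one sweep with numpy (illustrative only; the official nuscenes-devkit can also be used):

import numpy as np

def load_nuscenes_sweep(pcd_bin_path):
    # Each point stores 5 consecutive float32 values: x, y, z, intensity, ring index.
    return np.fromfile(pcd_bin_path, dtype=np.float32).reshape(-1, 5)

# points = load_nuscenes_sweep("DATA_ROOT/v1.0-trainval/samples/LIDAR_TOP/<file>.pcd.bin")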

Train

The training of the whole network is divided into two steps: we first train the feature extraction module and then train the whole network based on the pretrained features.

Train feature extraction

  • Train the keypoint detector by running sh scripts/train_kitti_det.sh or sh scripts/train_nusc_det.sh. Please remember to specify GPU, DATA_ROOT, CKPT_DIR, RUNNAME, and WANDB_DIR in the scripts.
  • Train the descriptor by running sh scripts/train_kitti_desc.sh or sh scripts/train_nusc_desc.sh. Please remember to specify GPU, DATA_ROOT, CKPT_DIR, RUNNAME, WANDB_DIR, and PRETRAIN_DETECTOR in the scripts.

Train the whole network

Train the whole network by running sh scripts/train_kitti_reg.sh or sh scripts/train_nusc_reg.sh. Please remember to specify GPU, DATA_ROOT, CKPT_DIR, RUNNAME, WANDB_DIR, and PRETRAIN_FEATS in the scripts.

Update: pretrained weights for the detector and descriptor are provided in ckpt/pretrained. If you want to train the descriptor, you can set PRETRAIN_DETECTOR to DATASET_keypoints.pth. If you want to train the whole network, you can set PRETRAIN_FEATS to DATASET_feats.pth.

Test

We provide pretrained models in ckpt/pretrained. Please run sh scripts/test_kitti.sh or sh scripts/test_nusc.sh, and remember to specify GPU, DATA_ROOT, and SAVE_DIR in the scripts. The test results will be saved in SAVE_DIR.

Citation

If you find this project useful for your work, please consider citing:

@InProceedings{Lu_2021_HRegNet,
        author = {Lu, Fan and Chen, Guang and Liu, Yinlong and Zhang, Lijun and Qu, Sanqing and Liu, Shu and Gu, Rongqi},
        title = {HRegNet: A Hierarchical Network for Large-scale Outdoor LiDAR Point Cloud Registration},
        booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision},
        year = {2021}
}

Acknowledgments

We would like to thank all the ICCV reviewers and the following open-source projects for their help with the implementation:

  • DGR (point cloud preprocessing and evaluation)
  • PointNet++ (unofficial implementation, for furthest point sampling)