PyTorch implementation for "Density-aware Chamfer Distance as a Comprehensive Metric for Point Cloud Completion" (NeurIPS 2021)

Overview

Density-aware Chamfer Distance

This repository contains the official PyTorch implementation of our paper:

Density-aware Chamfer Distance as a Comprehensive Metric for Point Cloud Completion, NeurIPS 2021

Tong Wu, Liang Pan, Junzhe Zhang, Tai Wang, Ziwei Liu, Dahua Lin

We present a new point cloud similarity measure named Density-aware Chamfer Distance (DCD). It is derived from CD and benefits from several desirable properties:

  • It can detect disparity of density distributions and is thus a more intensive measure of similarity compared to CD.
  • It is stricter with detailed structures and significantly more computationally efficient than EMD.
  • Its bounded value range encourages a more stable and reasonable evaluation over the whole test set.

DCD can be used as both an evaluation metric and a training loss. We mainly validate its performance on point cloud completion in our paper.
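
For reference, DCD roughly takes the following form (our paraphrase; please see the paper for the exact definition and notation):

$$
d_{\mathrm{DCD}}(S_1, S_2) = \frac{1}{2}\left(\frac{1}{|S_1|}\sum_{x \in S_1}\Big(1 - \frac{1}{n_{\hat{y}}} e^{-\alpha \lVert x - \hat{y} \rVert_2}\Big) + \frac{1}{|S_2|}\sum_{y \in S_2}\Big(1 - \frac{1}{n_{\hat{x}}} e^{-\alpha \lVert y - \hat{x} \rVert_2}\Big)\right)
$$

where ŷ is the nearest neighbor of x in S2 (and x̂ that of y in S1), n_ŷ counts how many points in S1 take ŷ as their nearest neighbor, and α is a temperature scalar. The exponential decay keeps each term bounded, while the 1/n weighting penalizes the many-to-one matches caused by density imbalance.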

This repository includes:

  • Implementation of Density-aware Chamfer Distance (DCD).
  • Implementation of our method for point cloud completion, together with the pre-trained models.

Installation

Requirements

  • PyTorch 1.2.0
  • Open3D 0.9.0
  • Other dependencies are listed in requirements.txt.

Install

Install PyTorch 1.2.0 first, and then get the other requirements by running the following command:

bash setup.sh

Dataset

We use the MVP Dataset. Please download the train set and test set, and then modify the data path in data/mvp_new.py to point to your own data location. Please refer to their codebase for further instructions.
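
The change is typically a one-line edit of the dataset root. The variable name below is hypothetical; use whatever data/mvp_new.py actually defines:

# in data/mvp_new.py -- illustrative only, the actual variable name may differ
DATA_ROOT = '/path/to/MVP'  # directory containing the downloaded MVP train/test files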

Usage

Density-aware Chamfer Distance

The function for DCD calculation is calc_dcd(), defined in utils/model_utils.py.

Users of newer PyTorch versions can try calc_dcd() in utils_v2/model_utils.py, which has been tested with PyTorch 1.6.0.
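
A minimal usage sketch is given below; the exact signature and return values of calc_dcd() are assumptions here, so please check utils/model_utils.py for the actual interface:

import torch
from utils.model_utils import calc_dcd  # or utils_v2.model_utils for newer PyTorch

# two batches of point clouds with shape (B, N, 3): prediction and ground truth
pred = torch.rand(8, 2048, 3).cuda()
gt = torch.rand(8, 2048, 3).cuda()

# assumed to return the DCD value (possibly together with the underlying CD terms)
res = calc_dcd(pred, gt)
dcd = res[0] if isinstance(res, (list, tuple)) else res

# DCD has a bounded value range, so the batch mean can be reported directly
# as an evaluation metric or minimized as a training loss
print(dcd.mean().item())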

Model training and evaluation

  • To train a model: run python train.py ./cfgs/*.yaml, for example:
python train.py ./cfgs/vrc_plus.yaml
  • To test a model: run python train.py ./cfgs/*.yaml --test_only, for example:
python train.py ./cfgs/vrc_plus_eval.yaml --test_only
  • Config for each algorithm can be found in cfgs/.
  • run_train.sh and run_test.sh are provided for SLURM users.

We provide the following config files:

  • pcn.yaml: PCN trained with CD loss.
  • vrc.yaml: VRCNet trained with CD loss.
  • pcn_dcd.yaml: PCN trained with DCD loss.
  • vrc_dcd.yaml: VRCNet trained with DCD loss.
  • vrc_plus.yaml: training with our method.
  • vrc_plus_eval.yaml: evaluation of our method with guided down-sampling.

Attention: We empirically find that training with DataParallel (DP) or DistributedDataParallel (DDP) slightly hurts performance, so multi-GPU training is currently not well supported.

Pre-trained models

We provide the pre-trained model that reproduces the results in our paper. Download and extract it to the ./log/pretrained/ directory, and then evaluate it with cfgs/vrc_plus_eval.yaml. The setting prob_sample: True turns on guided down-sampling. We also provide the model for VRCNet trained with DCD loss here.

Citation

If you find our code or paper useful, please cite our paper:

@inproceedings{wu2021densityaware,
  title={Density-aware Chamfer Distance as a Comprehensive Metric for Point Cloud Completion},
  author={Wu, Tong and Pan, Liang and Zhang, Junzhe and Wang, Tai and Liu, Ziwei and Lin, Dahua},
  booktitle={Advances in Neural Information Processing Systems (NeurIPS)},
  year={2021}
}

Acknowledgement

The code is based on the VRCNet implementation. We include the following PyTorch third-party libraries: ChamferDistancePytorch, emd, expansion_penalty, MDS, and Pointnet2.PyTorch. Thanks to these great projects.

Contact

Please contact @wutong16 for questions, comments, or bug reports.
