Official Code for ICML 2021 paper "Revisiting Point Cloud Shape Classification with a Simple and Effective Baseline"

Overview

Revisiting Point Cloud Shape Classification with a Simple and Effective Baseline
Ankit Goyal, Hei Law, Bowei Liu, Alejandro Newell, Jia Deng
International Conference on Machine Learning (ICML), 2021

If you find our work useful in your research, please consider citing:

@article{goyal2021revisiting,
  title={Revisiting Point Cloud Shape Classification with a Simple and Effective Baseline},
  author={Goyal, Ankit and Law, Hei and Liu, Bowei and Newell, Alejandro and Deng, Jia},
  journal={International Conference on Machine Learning},
  year={2021}
}

Getting Started

First clone the repository. We will refer to the directory containing the code as SimpleView.

git clone [email protected]:princeton-vl/SimpleView.git

Requirements

The code is tested on Linux with Python 3.7.5, CUDA 10.0, cuDNN 7.6 and GCC 5.4. We recommend using these versions, especially for installing the PointNet++ custom CUDA modules.

Install Libraries

We recommend you first install Anaconda and create a virtual environment.

conda create --name simpleview python=3.7.5

Activate the virtual environment and install the libraries. Make sure you are in SimpleView.

conda activate simpleview
pip install -r requirements.txt
conda install sed  # for downloading data and pretrained models

For PointNet++, we need to install custom CUDA modules. Make sure you have access to a GPU during this step. You might need to set the appropriate TORCH_CUDA_ARCH_LIST environment variable depending on your GPU model. The following command should work for most cases: export TORCH_CUDA_ARCH_LIST="6.0;6.1;6.2;7.0;7.5". However, if the install fails, check whether TORCH_CUDA_ARCH_LIST is correctly set. More details can be found here.

cd pointnet2_pyt && pip install -e . && cd ..
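If the install fails with an architecture mismatch, one way to find the right value for TORCH_CUDA_ARCH_LIST is to query the GPU's compute capability through PyTorch. The commands below are a minimal sketch, assuming the simpleview environment is already activated; the capability value 7.5 is only illustrative and should be replaced with whatever the query prints.

# Optional sanity check: print the GPU's compute capability, e.g. (7, 5)
python -c "import torch; print(torch.cuda.get_device_capability())"
# Illustrative: for an output of (7, 5), restrict the architecture list and reinstall
export TORCH_CUDA_ARCH_LIST="7.5"
cd pointnet2_pyt && pip install -e . && cd ..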

Download Datasets and Pre-trained Models

Make sure you are in SimpleView. The download.sh script downloads all the data and pretrained models and places them at the correct locations. First, give the script execute permission with the following command.

chmod +x download.sh

To download ModelNet40, execute the following command. This downloads the ModelNet40 point cloud dataset released with PointNet++ as well as the validation splits used in our work.

./download.sh modelnet40

To download the pretrained models, execute the following command.

./download.sh pretrained

Code Organization

  • SimpleView/models: Code for various models in PyTorch.
  • SimpleView/configs: Configuration files for various models.
  • SimpleView/main.py: Training and testing any model.
  • SimpleView/configs.py: Hyperparameters for different models and dataloader.
  • SimpleView/dataloader.py: Code for different variants of the dataloader.
  • SimpleView/*_utils.py: Code for various utility functions.

Running Experiments

Training and Config files

To train or test any model, we use the main.py script. The format for running this script is as follows.

python main.py --exp-config <path to the config>

The config files are named <protocol>_<model_name><_extra>_run_<seed>.yaml (<protocol> ∈ [dgcnn, pointnet2, rscnn]; <model_name> ∈ [dgcnn, pointnet2, rscnn, pointnet, simpleview]; <_extra> ∈ ['', valid, 0.5, 0.25]). For example, the config file to run an experiment for PointNet++ in the DGCNN protocol with seed 1 is dgcnn_pointnet2_run_1.yaml. To run a new experiment with a different seed, change the SEED parameter in the config file. For all our experiments (including those on the validation set) we do 4 runs with different seeds.
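As a concrete illustration of this naming convention, a single run can be launched directly from the repository root; the path below assumes the config files live in SimpleView/configs, as described in the Code Organization section.

# Train PointNet++ under the DGCNN protocol with seed 1
python main.py --exp-config configs/dgcnn_pointnet2_run_1.yaml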

As discussed in the paper, for the PointNet++ and SimpleView protocols we first need to tune the number of epochs on the validation set. This is done by first running the experiment <pointnet2/dgcnn>_<model_name>_valid_run_<seed>.yaml and then running the experiment <pointnet2/dgcnn>_<model_name>_run_<seed>.yaml. Based on the number of epochs that achieves the best performance on the validation set, one can use the model trained on the complete training set to get the final test performance.

To train models on the partial training set (Table 7), use the configs named dgcnn_<model_name>_valid_<0.25/0.5>_run_<seed>.yaml and <dgcnn>_<model_name>_<0.25/0.5>_run_<seed>.yaml.
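For example, the commands below sketch a 50% training-set run for PointNet++ with seed 1; the file names are inferred from the pattern above and should be checked against the configs directory.

# Tune epochs on the validation split of the 50% subset, then train on the 50% subset
python main.py --exp-config configs/dgcnn_pointnet2_valid_0.5_run_1.yaml
python main.py --exp-config configs/dgcnn_pointnet2_0.5_run_1.yaml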

Even with the same SEED, the results can vary slightly because of the randomization introduced for faster cuDNN operations. More details can be found here.

SimpleView Protocol

To run an experiment in the SimpleView protocol, there are two stages (example commands are shown after this list).

  • First tune the number of epochs on the validation set. This is done using configs dgcnn_<model_name>_valid_run_<seed>.yaml. Find the best number of epochs on the validation set, evaluated at every 25th epoch.
  • Train the model on the complete training set using configs dgcnn_<model_name>_run_<seed>.yaml. Use the performance on the test set at the tuned number of epochs as the final performance.
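As an example, assuming PointNet++ as the model and seed 1, the two stages map to the following commands; the config names simply follow the pattern described above.

# Stage 1: tune the number of epochs on the validation split
python main.py --exp-config configs/dgcnn_pointnet2_valid_run_1.yaml
# Stage 2: train on the complete training set; report test accuracy at the tuned epoch
python main.py --exp-config configs/dgcnn_pointnet2_run_1.yaml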

Evaluate a pretrained model

We provide pretrained models. They can be downloaded using the ./download.sh pretrained command and are stored in the SimpleView/pretrained folder. To test a pretrained model, use a command of the following format.

python main.py --entry <test/rscnn_vote/pn2_vote> --model-path pretrained/<cfg_name>/<model_name>.pth --exp-config configs/<cfg_name>.yaml

We list the evaluation commands in the eval_models.sh script. For example, to evaluate models on the SimpleView protocol, use the commands here. Note that for the SimpleView and PointNet++ protocols, the model files are named model_<epoch_id>.pth, where epoch_id is the number of epochs tuned on the validation set.
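For illustration, the command below sketches how an evaluation call might look for a SimpleView model trained under the SimpleView protocol; the folder name and the <epoch_id> placeholder are hypothetical and should be replaced with the actual names found under SimpleView/pretrained (or taken from eval_models.sh).

# Hypothetical example: evaluate a pretrained SimpleView model (replace <epoch_id> with the checkpoint present in the pretrained folder)
python main.py --entry test --model-path pretrained/dgcnn_simpleview_run_1/model_<epoch_id>.pth --exp-config configs/dgcnn_simpleview_run_1.yaml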

Performance of the released pretrained models on ModelNet40

Protocol →    DGCNN - Smooth      DGCNN - CE          RSCNN - No Vote     PointNet - No Vote   SimpleView
Method ↓      (Tab. 2, Col. 7)    (Tab. 2, Col. 6)    (Tab. 2, Col. 5)    (Tab. 2, Col. 2)     (Tab. 4, Col. 2)
SimpleView    93.9                93.2                92.7                90.8                 93.3
PointNet++    93.0                92.8                92.6                89.7                 92.6
DGCNN         92.6                91.8                92.2                89.5                 92.0
RSCNN         92.3                92.0                92.2                89.4                 91.6
PointNet      90.7                90.0                89.7                88.8                 90.1

Acknowledgements

We would like to thank the authors of the following repositories for sharing their code.

  • PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation: 1, 2
  • PointNet++: Deep Hierarchical Feature Learning on Point Sets in a Metric Space: 1, 2
  • Relation-Shape Convolutional Neural Network for Point Cloud Analysis: 1
  • Dynamic Graph CNN for Learning on Point Clouds: 1