[NeurIPS'21] Shape As Points: A Differentiable Poisson Solver

Overview

Shape As Points (SAP)

Paper | Project Page | Short Video (6 min) | Long Video (12 min)

This repository contains the implementation of the paper:

Shape As Points: A Differentiable Poisson Solver
Songyou Peng, Chiyu "Max" Jiang, Yiyi Liao, Michael Niemeyer, Marc Pollefeys and Andreas Geiger
NeurIPS 2021 (Oral)

If you find our code or paper useful, please consider citing

@inproceedings{Peng2021SAP,
 author    = {Peng, Songyou and Jiang, Chiyu "Max" and Liao, Yiyi and Niemeyer, Michael and Pollefeys, Marc and Geiger, Andreas},
 title     = {Shape As Points: A Differentiable Poisson Solver},
 booktitle = {Advances in Neural Information Processing Systems (NeurIPS)},
 year      = {2021}}
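
For intuition about the method itself: SAP represents a shape as an oriented point cloud and turns it into a watertight surface by solving a Poisson equation for an indicator grid (the Laplacian of the indicator equals the divergence of the splatted normal field). Because every step (point splatting, spectral solve, level-set shift) is differentiable, gradients can flow back to the point positions and normals. The following is a minimal, heavily simplified PyTorch sketch of this idea; the function names splat_normals and dpsr and all implementation details (trilinear splatting, Gaussian spectral smoothing) are ours for illustration and do not correspond to the repository's API.

import math
import torch


def splat_normals(points, normals, res):
    """Trilinearly splat per-point normals onto a (res, res, res, 3) grid.

    Differentiable w.r.t. both points and normals; a simplified stand-in for
    the paper's point rasterization.
    """
    grid = torch.zeros(res ** 3, 3, dtype=points.dtype, device=points.device)
    x = points * res - 0.5                       # coordinates in voxel-centre units
    x0 = torch.floor(x)
    w1 = x - x0                                  # weight toward the upper corner
    w0 = 1.0 - w1
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                corner = x0 + torch.tensor([dx, dy, dz], dtype=x0.dtype, device=x0.device)
                idx = corner.detach().long().clamp(0, res - 1)
                w = ((w1[:, 0] if dx else w0[:, 0])
                     * (w1[:, 1] if dy else w0[:, 1])
                     * (w1[:, 2] if dz else w0[:, 2]))
                flat = idx[:, 0] * res * res + idx[:, 1] * res + idx[:, 2]
                grid.index_add_(0, flat, w.unsqueeze(-1) * normals)
    return grid.reshape(res, res, res, 3)


def dpsr(points, normals, res=128, sigma=2.0):
    """Simplified differentiable Poisson surface reconstruction.

    Spectrally solves  laplacian(chi) = div(V),  where V is the splatted
    normal field, and returns an indicator grid chi of shape (res, res, res)
    whose zero level set approximates the surface. points must lie in [0, 1)^3.
    """
    V = splat_normals(points, normals, res)
    V_hat = torch.fft.fftn(V, dim=(0, 1, 2))
    k = 2.0 * math.pi * torch.fft.fftfreq(res, d=1.0 / res,
                                          dtype=points.dtype, device=points.device)
    kx, ky, kz = k.view(-1, 1, 1), k.view(1, -1, 1), k.view(1, 1, -1)
    div_hat = 1j * (kx * V_hat[..., 0] + ky * V_hat[..., 1] + kz * V_hat[..., 2])
    k2 = kx ** 2 + ky ** 2 + kz ** 2
    g = torch.exp(-0.5 * (sigma / res) ** 2 * k2)        # Gaussian spectral smoothing
    mask = torch.ones_like(k2)
    mask[0, 0, 0] = 0.0                                   # the mean of chi is unconstrained
    chi_hat = -g * mask * div_hat / (k2 + (1.0 - mask))   # avoid division by zero at DC
    chi = torch.fft.ifftn(chi_hat, dim=(0, 1, 2)).real
    # Shift chi so the input points lie approximately on the zero level set.
    idx = (points.detach() * res).long().clamp(0, res - 1)
    offset = chi.reshape(-1)[idx[:, 0] * res * res + idx[:, 1] * res + idx[:, 2]].mean()
    return chi - offset

The zero level set of the returned grid can then be meshed, e.g. with marching cubes, to obtain the reconstructed surface.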

Installation

First, make sure that you have all dependencies in place. The simplest way to do so is to use anaconda.

You can create an anaconda environment called sap using

conda env create -f environment.yaml
conda activate sap

Now you can install PyTorch3D 0.6.0 following the official instructions, for example:

pip install pytorch3d -f https://dl.fbaipublicfiles.com/pytorch3d/packaging/wheels/py38_cu102_pyt190/download.html

Then install PyTorch Scatter:

conda install pytorch-scatter -c pyg
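
Optionally, you can run a quick sanity check to confirm that the main dependencies import and that CUDA is visible; the printed versions should roughly match the wheels installed above:

import torch
import pytorch3d
import torch_scatter

print(torch.__version__, torch.version.cuda)   # e.g. 1.9.x / 10.2 for the wheel above
print(pytorch3d.__version__)                   # e.g. 0.6.0
print(torch_scatter.__version__)
print(torch.cuda.is_available())               # should be True on a GPU machine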

Demo - Quick Start

First, run the script to get the demo data:

bash scripts/download_demo_data.sh

Optimization-based 3D Surface Reconstruction

You can now quickly test our code on the data shown in the teaser. To this end, simply run:

python optim_hierarchy.py configs/optim_based/teaser.yaml

This script should create a folder out/demo_optim where the output meshes and the optimized oriented point clouds under different grid resolutions are stored.

To visualize the optimization process on the fly, you can set o3d_show: True in configs/optim_based/teaser.yaml.
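
You can also inspect the exported results offline with Open3D; the mesh filename below is only an example, so check out/demo_optim for the actual output names:

import open3d as o3d

# The path is illustrative; look inside out/demo_optim for the actual mesh files.
mesh = o3d.io.read_triangle_mesh("out/demo_optim/mesh.ply")
mesh.compute_vertex_normals()                  # for proper shading
o3d.visualization.draw_geometries([mesh])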

Learning-based 3D Surface Reconstruction

You can also test SAP in the learning-based setting, where a trained network reconstructs surfaces from unoriented point clouds corrupted by either large noise or outliers.

For the point clouds with large noise as shown above, you can run:

python generate.py configs/learning_based/demo_large_noise.yaml

The results can be found in out/demo_shapenet_large_noise/generation/vis.

As for the point clouds with outliers, you can run:

python generate.py configs/learning_based/demo_outlier.yaml

You can find the reconstruction in out/demo_shapenet_outlier/generation/vis.

Dataset

We use different datasets for the optimization-based and learning-based settings.

Dataset for Optimization-based Reconstruction

Here we consider the following datasets:

Please cite the corresponding papers if you use the data.

You can download the processed dataset (~200 MB) by running:

bash scripts/download_optim_data.sh

Dataset for Learning-based Reconstruction

We train and evaluate on ShapeNet. You can download the processed dataset (~220 GB) by running:

bash scripts/download_shapenet.sh

Afterwards, you should have the dataset in the data/shapenet_psr folder.
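
If you want to inspect the processed data before training, the samples are stored as .npz archives; note that the path and key names below are assumptions for illustration, so list npz.files to see the actual layout:

import numpy as np

# Path and key names are assumptions for illustration; inspect the folder and
# npz.files to see how data/shapenet_psr is actually laid out.
npz = np.load("data/shapenet_psr/<category>/<model_id>/pointcloud.npz")
print(npz.files)                               # arrays stored in this sample
points, normals = npz["points"], npz["normals"]
print(points.shape, normals.shape)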

Alternatively, you can preprocess the dataset yourself. To this end, you can:

Usage for Optimization-based 3D Reconstruction

For our optimization-based setting, you can consider running with a coarse-to-fine strategy:

python optim_hierarchy.py configs/optim_based/CONFIG.yaml

We start from a grid resolution of 32^3, and increase to 64^3, 128^3 and finally 256^3.

Alternatively, you can also run on a single resolution with:

python optim.py configs/optim_based/CONFIG.yaml

You might need to modify the CONFIG.yaml accordingly.
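
To build some intuition for what the optimization does, here is a hypothetical, heavily simplified loop that optimizes point positions and normals through the dpsr sketch from the Overview above, fitting a random point cloud to an indicator grid computed from an analytic sphere with a plain MSE loss. The actual code instead minimizes a Chamfer-style loss against the target point cloud and handles point resampling and many other details.

import math
import torch

# Hypothetical illustration only; `dpsr` is the simplified sketch from the
# Overview section, not the repository's solver.

# Target: an oriented point cloud sampled from a sphere of radius 0.3 at (0.5, 0.5, 0.5).
theta = torch.rand(4096) * 2 * math.pi
phi = torch.acos(2 * torch.rand(4096) - 1)
n_gt = torch.stack([torch.sin(phi) * torch.cos(theta),
                    torch.sin(phi) * torch.sin(theta),
                    torch.cos(phi)], dim=-1)
p_gt = 0.5 + 0.3 * n_gt

# Source: a random oriented point cloud to be optimized.
points = torch.rand(2048, 3) * 0.6 + 0.2
normals = torch.randn(2048, 3)
points.requires_grad_(True)
normals.requires_grad_(True)
optimizer = torch.optim.Adam([points, normals], lr=5e-3)

for res in (32, 64, 128):                      # coarse-to-fine schedule
    chi_target = dpsr(p_gt, n_gt, res=res).detach()
    for step in range(200):
        optimizer.zero_grad()
        chi = dpsr(points.clamp(0.01, 0.99), normals, res=res)
        loss = torch.nn.functional.mse_loss(chi, chi_target)
        loss.backward()                        # gradients flow through the Poisson solve
        optimizer.step()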

Usage for Learning-based 3D Reconstruction

Mesh Generation

To generate meshes using a trained model, use

python generate.py configs/learning_based/CONFIG.yaml

where you replace CONFIG.yaml with the correct config file.

Use a pre-trained model

The easiest way is to use a pre-trained model. You can do this by using one of the config files with the postfix _pretrained.

For example, for 3D reconstruction from point clouds with outliers using our model with 7x offsets, you can simply run:

python generate.py configs/learning_based/outlier/ours_7x_pretrained.yaml

The script will automatically download the pretrained model and run the generation. You can find the outputs in the out/.../generation_pretrained folders.

Note that these _pretrained config files are only for generation, not for training new models: if such a config is used for training, the model will be trained from scratch, but during inference our code will still use the pretrained model.

We provide the following pretrained models:

noise_small/ours.pt
noise_large/ours.pt
outlier/ours_1x.pt
outlier/ours_3x.pt
outlier/ours_5x.pt
outlier/ours_7x.pt
outlier/ours_3plane.pt

Evaluation

To evaluate a trained model, we provide the script eval_meshes.py. You can run it using:

python eval_meshes.py configs/learning_based/CONFIG.yaml

The script takes the meshes generated in the previous step and evaluates them using a standardized protocol. The output will be written to .pkl and .csv files in the corresponding generation folder that can be processed using pandas.
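
For example, you can summarize the per-shape metrics with pandas; the .csv filename below is illustrative and the actual name depends on the config:

import pandas as pd

# The filename is illustrative; eval_meshes.py writes its .csv into the
# corresponding generation folder.
df = pd.read_csv("out/demo_shapenet_outlier/generation/eval_meshes.csv")
print(df.head())
print(df.mean(numeric_only=True))              # average metrics over all evaluated shapes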

Training

Finally, to train a new network from scratch, simply run:

python train.py configs/learning_based/CONFIG.yaml

For available training options, please take a look at configs/default.yaml.
