Points2Surf: Learning Implicit Surfaces from Point Clouds (ECCV 2020 Spotlight)

Overview

This is our implementation of Points2Surf, a network that estimates a signed distance function (SDF) from point clouds. The SDF is turned into a mesh with Marching Cubes. For more details, please watch the short video and long video.
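As a minimal sketch of that final SDF-to-mesh step (illustrative only, not the repo's reconstruction code), the zero level set of a densely sampled SDF volume can be extracted with scikit-image's marching_cubes:

# minimal sketch: mesh the zero level set of a dense SDF volume
# (illustrative; the repo's actual reconstruction pipeline may differ)
import numpy as np
from skimage.measure import marching_cubes

def sdf_grid_to_mesh(sdf, grid_min=-1.0, grid_max=1.0):
    res = sdf.shape[0]
    spacing = (grid_max - grid_min) / (res - 1)
    # vertices come back in voxel coordinates; shift them into the (-1..+1) cube
    verts, faces, normals, values = marching_cubes(sdf, level=0.0, spacing=(spacing,) * 3)
    verts += grid_min
    return verts, faces

# toy example: SDF of a sphere with radius 0.5 on a 64^3 grid
coords = np.linspace(-1.0, 1.0, 64)
x, y, z = np.meshgrid(coords, coords, coords, indexing='ij')
verts, faces = sdf_grid_to_mesh(np.sqrt(x**2 + y**2 + z**2) - 0.5)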

Points2Surf reconstructs objects from arbitrary point clouds more accurately than DeepSDF, AtlasNet and Screened Poisson Surface Reconstruction.

The architecture is similar to PCPNet. In contrast to other ML-based surface reconstruction methods, e.g. DeepSDF and AtlasNet, Points2Surf is patch-based and therefore independent of object classes. The strongly improved generalization leads to much better results, surpassing even Screened Poisson Surface Reconstruction in most cases.

This code was mostly written by Philipp Erler and Paul Guerrero. This work was published at ECCV 2020.

Prerequisites

  • Python >= 3.7
  • PyTorch >= 1.6
  • CUDA and cuDNN if using GPU
  • BlenSor 1.0.18 RC 10 for dataset generation

Quick Start

To get a minimal working example for training and reconstruction, follow these steps. We recommend using Anaconda to manage the Python environment. Otherwise, you can install the required packages with pip as defined in requirements.txt.

# clone this repo
# a minimal dataset is included (2 shapes for training, 1 for evaluation)
git clone https://github.com/ErlerPhilipp/points2surf.git

# go into the cloned dir
cd points2surf

# create a conda environment with the required packages
conda env create --file p2s.yml

# activate the new conda environment
conda activate p2s

# train and evaluate the vanilla model with default settings
python full_run.py

Reconstruct Surfaces from our Point Clouds

Reconstruct meshes from point clouds to replicate our results:

# download the test datasets
python datasets/download_datasets_abc.py
python datasets/download_datasets_famous.py
python datasets/download_datasets_thingi10k.py
python datasets/download_datasets_real_world.py

# download the pre-trained models
python models/download_models_vanilla.py
python models/download_models_ablation.py
python models/download_models_max.py

# start the evaluation for each model
# Points2Surf main model, trained for 150 epochs
bash experiments/eval_p2s_vanilla.sh

# ablation models, trained for 50 epochs
bash experiments/eval_p2s_small_radius.sh
bash experiments/eval_p2s_medium_radius.sh
bash experiments/eval_p2s_large_radius.sh
bash experiments/eval_p2s_small_kNN.sh
bash experiments/eval_p2s_large_kNN.sh
bash experiments/eval_p2s_shared_transformer.sh
bash experiments/eval_p2s_no_qstn.sh
bash experiments/eval_p2s_uniform.sh
bash experiments/eval_p2s_vanilla_ablation.sh

# additional ablation models, trained for 50 epochs
bash experiments/eval_p2s_regression.sh
bash experiments/eval_p2s_shared_encoder.sh

# best model based on the ablation results, trained for 250 epochs
bash experiments/eval_p2s_max.sh

Each eval script reconstructs all test sets using the specified model. Running one evaluation takes around one day on a normal PC (e.g. with a GTX 1070) at grid resolution 256.

To get the best results, use the Max model. It is 15% smaller and produces 4% better results (mean Chamfer distance over all test sets) than the Vanilla model. It omits the QSTN and uses uniform sub-sampling.

Training with our Dataset

To train the P2S models from the paper with our training set:

# download the ABC training and validation set
python datasets/download_datasets_abc_training.py

# start the training for each model
# Points2Surf main model, train for 150 epochs
bash experiments/train_p2s_vanilla.sh

# ablation models, train for 50 epochs
bash experiments/train_p2s_small_radius.sh
bash experiments/train_p2s_medium_radius.sh
bash experiments/train_p2s_large_radius.sh
bash experiments/train_p2s_small_kNN.sh
bash experiments/train_p2s_large_kNN.sh
bash experiments/train_p2s_shared_transformer.sh
bash experiments/train_p2s_no_qstn.sh
bash experiments/train_p2s_uniform.sh
bash experiments/train_p2s_vanilla_ablation.sh

# additional ablation models, train for 50 epochs
bash experiments/train_p2s_regression.sh
bash experiments/train_p2s_shared_encoder.sh

# best model based on the ablation results, train for 250 epochs
bash experiments/train_p2s_max.sh

With 4 RTX 2080 Ti GPUs, training to 150 epochs took around 5 days. Full convergence is reached at 200-250 epochs, but the Chamfer distance changes little after that; topological noise might be further reduced, though.

The losses (absolute distance, sign logits, and their combination) are logged with TensorBoard by default. Additionally, we log the accuracy, recall and F1 score for the sign prediction. You can start a TensorBoard server with:

bash start_tensorboard.sh
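For reference, per-epoch logging of this kind can be done with torch.utils.tensorboard; the sketch below is illustrative, and the tag names (e.g. loss/abs_dist) are assumptions rather than the repo's actual tags:

# hedged sketch of the per-epoch logging described above; tag names are
# illustrative assumptions, not necessarily the ones used in this repo
from torch.utils.tensorboard import SummaryWriter

writer = SummaryWriter(log_dir='logs/p2s_vanilla')

def log_epoch(epoch, abs_dist_loss, sign_loss, accuracy, recall, f1):
    writer.add_scalar('loss/abs_dist', abs_dist_loss, epoch)
    writer.add_scalar('loss/sign_logits', sign_loss, epoch)
    writer.add_scalar('loss/total', abs_dist_loss + sign_loss, epoch)
    writer.add_scalar('metrics/sign_accuracy', accuracy, epoch)
    writer.add_scalar('metrics/sign_recall', recall, epoch)
    writer.add_scalar('metrics/sign_f1', f1, epoch)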

Make your own Datasets

The point clouds are stored as NumPy arrays of type np.float32 with the extension .npy, where each row contains the 3 coordinates of a point. The point clouds need to be normalized to the (-1..+1) range.

A dataset is given by a text file containing the file name (without extension) of one point cloud per line. The file name is given relative to the location of the text file.
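A minimal sketch of bringing a raw point cloud into this format follows; the exact normalization convention used by the repo may differ, so treat the centering and scaling here as an assumption:

# minimal sketch: Nx3 float32 array, normalized to the (-1..+1) cube, saved as .npy
import numpy as np

def normalize_and_save(points, out_file):
    points = np.asarray(points, dtype=np.float32)       # shape (N, 3)
    bb_min, bb_max = points.min(axis=0), points.max(axis=0)
    points -= (bb_min + bb_max) / 2.0                   # center the bounding box
    points /= np.abs(points).max()                      # fit into [-1, +1]
    np.save(out_file, points)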

Dataset from Meshes for Training and Reconstruction

To make your own dataset from meshes, place your ground-truth meshes in ./datasets/(DATASET_NAME)/00_base_meshes/. Meshes must be of a type that Trimesh can load, e.g. .ply, .stl, .obj or .off. Because we need to compute signed distances for the training set, these input meshes must represent solids, i.e. be manifold and watertight. Triangulated CAD objects, like those in the ABC dataset, work in most cases. Next, create the text file ./datasets/(DATASET_NAME)/settings.ini with the following settings:

[general]
only_for_evaluation = 0
grid_resolution = 256
epsilon = 5
num_scans_per_mesh_min = 5
num_scans_per_mesh_max = 30
scanner_noise_sigma_min = 0.0
scanner_noise_sigma_max = 0.05

When you set only_for_evaluation = 1, the dataset preparation script skips most processing steps and produces only the text file for the test set.
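These settings follow standard INI syntax, so they can be read with Python's configparser. The sketch below is an assumption about how such a file can be loaded, not necessarily how make_dataset.py parses it; 'my_dataset' is a placeholder name:

# assumed INI parsing; the actual reader in make_dataset.py may differ
import configparser

config = configparser.ConfigParser()
config.read('datasets/my_dataset/settings.ini')  # hypothetical dataset name

general = config['general']
only_for_evaluation = general.getboolean('only_for_evaluation')
grid_resolution = general.getint('grid_resolution')
num_scans_min = general.getint('num_scans_per_mesh_min')
noise_sigma_max = general.getfloat('scanner_noise_sigma_max')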

For the point-cloud sampling, we used BlenSor 1.0.18 RC 10. Windows users need to fix a bug in the BlenSor code. Make sure that the blensor_bin variable in make_dataset.py points to your BlenSor binary, e.g. blensor_bin = "bin/Blensor-x64.AppImage".

You may need to change other paths or the number of worker processes and run:

python make_dataset.py

The ABC var-noise dataset with about 5k shapes takes around 8 hours using 15 worker processes on a Ryzen 7. Most computation time is required for the sampling and the GT signed distances.

Dataset from Point Clouds for Reconstruction

If you only want to reconstruct your own point clouds, place them in ./datasets/(DATASET_NAME)/00_base_pc/. The accepted file types are the same as for meshes.

You need to change some settings like the dataset name and the number of worker processes in make_pc_dataset.py and then run it with:

python make_pc_dataset.py

Manually Created Dataset for Reconstruction

In case you already have your point clouds as NumPy files, you can create a dataset manually. Put the *.npy files in the (DATASET_NAME)/04_pts/ directory. Then, list the names (without extensions, one per line) in a text file (DATASET_NAME)/testset.txt, e.g. with the helper below.
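A small helper (illustrative, not part of the repo; 'my_dataset' is a placeholder name) that generates such a testset.txt from the contents of 04_pts/:

# illustrative helper, not part of the repo
from pathlib import Path

dataset_dir = Path('datasets/my_dataset')
names = sorted(p.stem for p in (dataset_dir / '04_pts').glob('*.npy'))
(dataset_dir / 'testset.txt').write_text('\n'.join(names) + '\n')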

Related Work

Kazhdan, Michael, and Hugues Hoppe. "Screened poisson surface reconstruction." ACM Transactions on Graphics (ToG) 32.3 (2013): 1-13.

This work is the most important baseline for surface reconstruction. It fits a surface into a point cloud.

Groueix, Thibault, et al. "A papier-mâché approach to learning 3d surface generation." Proceedings of the IEEE conference on computer vision and pattern recognition. 2018.

This is one of the first data-driven methods for surface reconstruction. It learns to approximate objects with 'patches', deformed and subdivided rectangles.

Park, Jeong Joon, et al. "DeepSDF: Learning continuous signed distance functions for shape representation." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2019.

This is one of the first data-driven methods for surface reconstruction. It learns to approximate a signed distance function from points.

Chabra, Rohan, et al. "Deep Local Shapes: Learning Local SDF Priors for Detailed 3D Reconstruction." arXiv preprint arXiv:2003.10983 (2020).

This concurrent work uses a similar approach as ours. It produces smooth surfaces but requires point normals.

Citation

If you use our work, please cite our paper:

@InProceedings{ErlerEtAl:Points2Surf:ECCV:2020,
  title     = {{Points2Surf}: Learning Implicit Surfaces from Point Clouds},
  author    = {Erler, Philipp and Guerrero, Paul and Ohrhallinger, Stefan and Mitra, Niloy J. and Wimmer, Michael},
  editor    = {Vedaldi, Andrea and Bischof, Horst and Brox, Thomas and Frahm, Jan-Michael},
  booktitle = {Computer Vision -- ECCV 2020},
  year      = {2020},
  publisher = {Springer International Publishing},
  address   = {Cham},
  pages     = {108--124},
  abstract  = {A key step in any scanning-based asset creation workflow is to convert unordered point clouds to a surface. Classical methods (e.g., Poisson reconstruction) start to degrade in the presence of noisy and partial scans. Hence, deep learning based methods have recently been proposed to produce complete surfaces, even from partial scans. However, such data-driven methods struggle to generalize to new shapes with large geometric and topological variations. We present Points2Surf, a novel patch-based learning framework that produces accurate surfaces directly from raw scans without normals. Learning a prior over a combination of detailed local patches and coarse global information improves generalization performance and reconstruction accuracy. Our extensive comparison on both synthetic and real data demonstrates a clear advantage of our method over state-of-the-art alternatives on previously unseen classes (on average, Points2Surf brings down reconstruction error by 30{\%} over SPR and by 270{\%}+ over deep learning based SotA methods) at the cost of longer computation times and a slight increase in small-scale topological noise in some cases. Our source code, pre-trained model, and dataset are available at: https://github.com/ErlerPhilipp/points2surf.},
  isbn      = {978-3-030-58558-7},
  doi       = {10.1007/978-3-030-58558-7_7},
}