Neural Surface Maps

Overview

Official implementation of Neural Surface Maps - Luca Morreale, Noam Aigerman, Vladimir Kim, Niloy J. Mitra

[Paper] [Project Page]

How-To

Replicating the results is possible by following these steps:

  1. Parametrize the surface
  2. Prepare surface sample
  3. Overfit the surface
  4. Neural parametrization of the surface
  5. Optimize surface-to-surface map
  6. Optimize a map between a collection

1. Surface Parametrization

This is a preprocessing step. You can use SLIM [1] from this repo to fulfill it.
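If you want to script the initialization yourself, the sketch below computes a harmonic disk parametrization with libigl's Python bindings, which SLIM can then refine. Treat it as an assumption-laden sketch: function names vary across libigl releases (newer ones rename igl.harmonic_weights to igl.harmonic), so verify against your installed version.

    # Sketch: harmonic disk parametrization as a SLIM initializer (libigl Python).
    # Verify names against your libigl version; harmonic_weights is igl.harmonic
    # in newer releases.
    import igl

    v, f = igl.read_triangle_mesh("surface.obj")  # input mesh, assumed disk topology

    bnd = igl.boundary_loop(f)                    # indices of the boundary loop
    bnd_uv = igl.map_vertices_to_circle(v, bnd)   # pin the boundary to the unit circle
    uv = igl.harmonic_weights(v, f, bnd, bnd_uv, 1)

    # SLIM then iterates from `uv` to minimize an isometric distortion energy;
    # the repo linked above provides a ready-made implementation of that step.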

2. Sample preparation

Given a parametrized surface (previous step), we need to convert it into a sample. First, oversample the surface with Meshlab, e.g., with the midpoint subdivision filter; a scripted alternative is sketched below.
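If you prefer scripting over the Meshlab GUI, the same subdivision can be applied with pymeshlab. The filter name below is an assumption that changed across pymeshlab releases (older versions call it subdivision_surfaces_midpoint), so check it against your install.

    # Sketch: oversample the parametrized mesh via midpoint subdivision (pymeshlab).
    # Filter name follows recent pymeshlab releases; older ones use
    # 'subdivision_surfaces_midpoint'.
    import pymeshlab

    ms = pymeshlab.MeshSet()
    ms.load_new_mesh("surface_slim.obj")
    ms.apply_filter("meshing_surface_subdivision_midpoint", iterations=2)
    ms.save_current_mesh("surface_slim_oversampled.obj")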

Once the oversampled surface is ready, you can convert it into a sample:

python -m preprocessing.convert_sample surface_slim.obj surface_slim_oversampled.obj output_sample.pth

The file output_sample.pth is the sample ready to be overfitted.
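To sanity-check the conversion, you can inspect the saved sample with plain PyTorch; the stored keys depend on the repo's preprocessing code, so the sketch below just lists whatever it finds.

    # Sketch: inspect the generated sample; key names depend on the repo's code.
    import torch

    sample = torch.load("output_sample.pth", map_location="cpu")
    if isinstance(sample, dict):
        for key, value in sample.items():
            info = tuple(value.shape) if torch.is_tensor(value) else type(value).__name__
            print(key, info)
    else:
        print(type(sample))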

3. Overfit surface

A surface representation is generated with:

python -m training_surface_map dataset.sample_path=output_sample.pth

This will save a surface map inside the outputs/neural_maps folder. The folder name follows the pattern overfit_[timestamp]. Inside that folder, the map is saved under the sample folder as a .pth file.

The overfitted surface can be displayed with:

python -m show_surface_map

Please set the path to the .pth file just created inside the script.
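Conceptually, this step overfits a network mapping UV coordinates to 3D surface points. The sketch below illustrates the idea only; the repo's actual architecture, losses, and training loop live in training_surface_map.

    # Sketch: overfit an MLP from UV coordinates to 3D surface points.
    # Illustrative only; not the repo's actual architecture or training loop.
    import torch
    import torch.nn as nn

    class SurfaceMap(nn.Module):
        def __init__(self, width=256, depth=4):
            super().__init__()
            layers, d_in = [], 2
            for _ in range(depth):
                layers += [nn.Linear(d_in, width), nn.Softplus()]
                d_in = width
            self.net = nn.Sequential(*layers, nn.Linear(d_in, 3))

        def forward(self, uv):              # uv: (N, 2) points in the parametric domain
            return self.net(uv)             # (N, 3) predicted surface points

    uv = torch.rand(10_000, 2) * 2 - 1      # stand-ins for the sample's UVs...
    xyz = torch.randn(10_000, 3)            # ...and its 3D surface points

    model = SurfaceMap()
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    for _ in range(1_000):
        opt.zero_grad()
        loss = (model(uv) - xyz).pow(2).sum(-1).mean()   # pointwise L2 reconstruction
        loss.backward()
        opt.step()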

4. Neural parametrization

To generate a neural parametrization, run:

python -m training_parametrization_map dataset.sample_path=your_surface_map.pth

As with overfitting, this saves the map inside the outputs/neural_maps folder. The folder name follows the pattern parametrization_[timestamp].

To display the obtained parametrization, run:

python -m show_parametrization_map

Please set the path to the .pth file just created inside the script.
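The parametrization map is itself a network optimized for low distortion, and distortion energies are built from the map's Jacobian, which autograd provides directly. A sketch follows, using the symmetric Dirichlet energy as an example; the repo's exact loss may differ.

    # Sketch: a Jacobian-based distortion energy for a neural 2D -> 2D map.
    # Symmetric Dirichlet is one common choice; the repo's exact loss may differ.
    import torch

    def jacobian(net, uv):
        # Per-point 2x2 Jacobian of net: R^2 -> R^2 at points uv of shape (N, 2).
        uv = uv.clone().requires_grad_(True)
        out = net(uv)
        rows = [torch.autograd.grad(out[:, i].sum(), uv, create_graph=True)[0]
                for i in range(2)]
        return torch.stack(rows, dim=1)               # (N, 2, 2)

    def symmetric_dirichlet(J, eps=1e-8):
        # ||J||_F^2 + ||J^{-1}||_F^2; for 2x2, ||J^{-1}||_F^2 = ||J||_F^2 / det(J)^2.
        fro2 = J.pow(2).sum(dim=(1, 2))
        det = torch.linalg.det(J).clamp_min(eps)      # flipped elements blow up
        return (fro2 + fro2 / det.pow(2)).mean()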

5. Optimize surface-to-surface map

To generate an inter-surface map, run:

python -m training_intersurface_map dataset.sample_path_g=your_surface_map_a.pth dataset.sample_path_f=your_surface_map_b.pth

Note: this step requires two surface maps: a source, sample_path_g, and a target, sample_path_f.

As with overfitting, the map is saved inside outputs/neural_maps. The inter-surface map folder follows the pattern intersurface_[timestamp]. The .pth file is inside the models folder.

To display the inter-surface map run:

python -m show_intersurface_map

Remember to set the paths of the maps inside the script.
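Under the hood, the inter-surface map is a 2D → 2D network between the two parametric domains, composed with the overfitted surface maps; objectives are expressed on these compositions. Below is a minimal sketch of a landmark alignment term, assuming frozen surface maps; the repo's full objective may include additional terms such as distortion.

    # Sketch: a landmark alignment term for the inter-surface map.
    # map_net: R^2 -> R^2 between the parametric domains of g and f;
    # surf_f is the frozen overfitted target surface map, R^2 -> R^3.
    import torch

    def landmark_loss(map_net, surf_f, landmark_uv, landmark_xyz):
        # Source landmark UVs, pushed through the map and the target surface,
        # should land on the corresponding target 3D positions.
        return (surf_f(map_net(landmark_uv)) - landmark_xyz).pow(2).sum(-1).mean()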

6. Optimize collection map

A map between a collection of surfaces can be optimized with:

python -m training_intersurface_map dataset.sample_path_g=your_surface_map_g.pth dataset.sample_path_f=your_surface_map_f.pth dataset.sample_path_q=your_surface_map_q.pth

Note: this step requires three surface maps: a source, sample_path_g, and two targets, sample_path_f and sample_path_q.

This will save two maps inside the outputs/neural_maps folder. The folder name follows the pattern collection_[timestamp]; under the models folder you can find two *.pth files.
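As in step 5, each map in the collection is a 2D → 2D network; the two maps share the source domain and are optimized jointly. A stand-in sketch of one joint update follows (the repo's actual models and objectives may differ):

    # Sketch: one joint update for the collection's two maps (g->f and g->q).
    # All networks are stand-ins; the repo's models and losses may differ.
    import itertools
    import torch
    import torch.nn as nn

    def mlp(d_out):                              # tiny stand-in network
        return nn.Sequential(nn.Linear(2, 128), nn.Softplus(), nn.Linear(128, d_out))

    surf_f, surf_q = mlp(3), mlp(3)              # overfitted surface maps (kept frozen:
    map_gf, map_gq = mlp(2), mlp(2)              # not handed to the optimizer below)

    opt = torch.optim.Adam(
        itertools.chain(map_gf.parameters(), map_gq.parameters()), lr=1e-4)

    lm_uv = torch.rand(8, 2)                     # stand-in landmark UVs in g's domain
    lm_f, lm_q = torch.randn(8, 3), torch.randn(8, 3)  # stand-in 3D landmarks

    opt.zero_grad()
    loss = ((surf_f(map_gf(lm_uv)) - lm_f).pow(2).sum(-1).mean()
            + (surf_q(map_gq(lm_uv)) - lm_q).pow(2).sum(-1).mean())
    loss.backward()
    opt.step()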

To display the collection map run:

python -m show_collection_map

Remember to set the paths of the maps inside the script.


Dependencies

Dependencies are listed in environment.yml. Using conda, all the packages can be installed with conda env create -f environment.yml.

On top of the packages above, please also install the pytorch svd on gpu package.


Data

Any mesh can be used for this process. A data example can be downloaded here.


Citation

@misc{morreale2021neural,
      title={Neural Surface Maps},
      author={Luca Morreale and Noam Aigerman and Vladimir Kim and Niloy J. Mitra},
      year={2021},
      eprint={2103.16942},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}

References

[1] Scalable Locally Injective Mappings - Michael Rabinovich et al. - ACM Transactions on Graphics (TOG) 2017
