GPU Accelerated Non-rigid ICP for surface registration


Introduction

Previous non-rigid ICP algorithms are usually implemented on the CPU and need to solve a sparse least-squares problem, which is time-consuming. In this repo, we implement a PyTorch version of the NICP algorithm based on the paper by Amberg et al. Specifically, we leverage AMSGrad to optimize the linear regression and then find the nearest points iteratively. Additionally, we smooth the computed mesh with a Laplacian smoothness term, which also makes the wireframe neater.
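
For orientation, here is a minimal sketch of that scheme in PyTorch, assuming per-vertex affine transforms optimized with AMSGrad (torch.optim.Adam with amsgrad=True), nearest-point correspondences from pytorch3d, and a Laplacian smoothness term. The function name nicp_sketch, the simplified stiffness term, and the loss weights are illustrative assumptions, not the exact implementation in this repo.

# A rough sketch of the optimization described above (assumptions noted inline).
import torch
from pytorch3d.ops import knn_points
from pytorch3d.loss import mesh_laplacian_smoothing
from pytorch3d.structures import Meshes

def nicp_sketch(template: Meshes, target_points: torch.Tensor,
                iters: int = 100, stiffness: float = 10.0, smooth: float = 1.0):
    # target_points: (1, P, 3) point cloud sampled from the target surface.
    verts = template.verts_padded()                       # (1, V, 3)
    # Per-vertex 3x4 affine transforms, initialized to identity.
    X = torch.eye(3, 4, device=verts.device).repeat(verts.shape[1], 1, 1)
    X.requires_grad_(True)
    optim = torch.optim.Adam([X], lr=1e-3, amsgrad=True)  # AMSGrad variant
    verts_h = torch.cat([verts[0], torch.ones_like(verts[0, :, :1])], dim=1)  # (V, 4)
    for _ in range(iters):
        warped = torch.einsum('vij,vj->vi', X, verts_h).unsqueeze(0)  # (1, V, 3)
        # Data term: distance to the current nearest points on the target.
        data_loss = knn_points(warped, target_points, K=1).dists.mean()
        # Stiffness stand-in: true NICP penalizes differences between
        # neighbouring transforms; deviation from identity is used here for brevity.
        stiff_loss = (X - torch.eye(3, 4, device=X.device)).pow(2).mean()
        # Laplacian smoothness keeps the warped wireframe neat.
        warped_mesh = template.update_padded(warped)
        smooth_loss = mesh_laplacian_smoothing(warped_mesh, method='uniform')
        loss = data_loss + stiffness * stiff_loss + smooth * smooth_loss
        optim.zero_grad()
        loss.backward()
        optim.step()
    return template.update_padded(warped.detach())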


Quick Start

Install

We use Python 3.8 and CUDA 10.2. The code is tested on Ubuntu 20.04.

  • pytorch3d cannot be installed directly with pip install pytorch3d; for installation instructions, see pytorch3d.
  • For other packages, run
pip install -r requirements.txt
  • For the template face model, we currently use a processed version of the BFM face model from 3DMMfitting-pytorch. Download BFM09_model_info.mat from 3DMMfitting-pytorch and put it into the ./BFM folder.
  • For demo, run
python demo_nicp.py

The demo covers NICP mesh2mesh and NICP mesh2pointcloud registration. We provide two parameter sets for registration:

milestones = set([50, 80, 100, 110, 120, 130, 140])
stiffness_weights = np.array([50, 20, 5, 2, 0.8, 0.5, 0.35, 0.2])
landmark_weights = np.array([5, 2, 0.5, 0, 0, 0, 0, 0])

This parameter set is used for registration on fine-grained meshes.

milestones = set([50, 100])
stiffness_weights = np.array([50, 20, 5])
landmark_weights = np.array([50, 20, 5])

This parameter set is used for registration on noisy point clouds.
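
The exact way these arrays are consumed is defined in the repo's optimization loop; the snippet below only illustrates the presumed semantics, namely that each milestone advances to the next (softer) stiffness and landmark weight, giving a piecewise-constant schedule with len(milestones) + 1 stages.

# Presumed schedule semantics (a sketch, not the repo's code).
import numpy as np

milestones = set([50, 100])
stiffness_weights = np.array([50, 20, 5])
landmark_weights = np.array([50, 20, 5])

stage = 0
for it in range(150):
    if it in milestones:
        stage += 1                      # move to the next, softer stage
    w_stiff = stiffness_weights[stage]  # stiffness weight for this iteration
    w_lm = landmark_weights[stage]      # landmark weight for this iteration
    # ... one registration step would run here with (w_stiff, w_lm) ...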

Template Model

You can also use your own template face model with manually specified landmarks.
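
As a rough illustration, the sketch below loads a custom template mesh with pytorch3d and pairs it with manually picked landmark vertex indices; the file path, the variable names, and the idea of passing landmark indices alongside the mesh are assumptions, so consult demo_nicp.py for the actual interface.

# Hypothetical setup for a custom template (names and path are placeholders).
import torch
from pytorch3d.io import load_objs_as_meshes

device = torch.device('cuda:0')
template_mesh = load_objs_as_meshes(['my_template_face.obj'], device=device)

# Manually specified landmark vertex indices on the template; a 68-point
# layout is only an example, use whatever landmark set your targets provide.
template_lm_index = torch.tensor([[1278, 3045, 522, 4981]], device=device)  # (1, L)

# These would replace the BFM template and landmarks loaded in demo_nicp.py.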

Todo

We have written some batchwise functions, but batchwise NICP is not supported yet; we will add batch NICP support in future releases.


Comments
  • Lack of file “BFM09_model_info.mat”

    Traceback (most recent call last): File "demo_nicp.py", line 28, in bfm_meshes, bfm_lm_index = load_bfm_model(torch.device('cuda:0')) File "/data/pytorch-nicp/bfm_model.py", line 15, in load_bfm_model bfm_meta_data = loadmat('BFM/BFM09_model_info.mat') File "/root/anaconda3/envs/pytorch3d/lib/python3.8/site-packages/scipy/io/matlab/mio.py", line 224, in loadmat with _open_file_context(file_name, appendmat) as f: File "/root/anaconda3/envs/pytorch3d/lib/python3.8/contextlib.py", line 113, in enter return next(self.gen) File "/root/anaconda3/envs/pytorch3d/lib/python3.8/site-packages/scipy/io/matlab/mio.py", line 17, in _open_file_context f, opened = _open_file(file_like, appendmat, mode) File "/root/anaconda3/envs/pytorch3d/lib/python3.8/site-packages/scipy/io/matlab/mio.py", line 45, in _open_file return open(file_like, mode), True FileNotFoundError: [Errno 2] No such file or directory: 'BFM/BFM09_model_info.mat'

    In 3DMMfitting-pytorch, there are only these files: BFM_exp_idx.mat, BFM_front_idx.mat, facemodel_info.mat, README.md, select_vertex_id.mat, similarity_Lm3D_all.mat, std_exp.txt

    opened by 675492062 2
  • What is the expected time needed for running demo_nicp.py?

    Hello,

    On my computer it seems quite slow to run demo_nicp.py. It took more than 1 minute to get final.obj. Is that correct?

    I ran AMM_NRR for non-rigid ICP registration with two 7000-vertex meshes. It takes about 1 second on the CPU on my computer. With a GPU, might it be possible to do the same work in less than 100 ms?

    Thank you!

    opened by 1939938853 0
  • Hi, with landmarks: `landmarks = torch.from_numpy(np.array(landmarks)).to(device).long()`, maybe you can  reshape landmarks from torch.Size([1, 1, 68, 2]) to  torch.Size([1, 68, 2])

    Originally posted by @wuhaozhe in https://github.com/wuhaozhe/pytorch-nicp/issues/3#issuecomment-971453681

    Hi! I got outputs of torch.Size([1, 68, 512, 3]), torch.Size([1, 68, 2]), and torch.Size([1, 512, 512, 3]). I think the shapes of the following tensors are right, but I meet the same problem: lm_vertex = torch.gather(lm_vertex, 2, column_index) fails with RuntimeError: CUDA error: device-side assert triggered

    landmarks = torch.from_numpy(np.array(landmarks)).to(device).long()
    
    row_index = landmarks[:, :, 1].view(landmarks.shape[0], -1)
    column_index = landmarks[:, :, 0].view(landmarks.shape[0], -1)
    row_index = row_index.unsqueeze(2).unsqueeze(3).expand(landmarks.shape[0], landmarks.shape[1], shape_img.shape[2], shape_img.shape[3])
    column_index = column_index.unsqueeze(1).unsqueeze(3).expand(landmarks.shape[0], landmarks.shape[1], landmarks.shape[1], shape_img.shape[3])
    print(row_index.shape, landmarks.shape, shape_img.shape)
    
    opened by alicedingyueming 1
  • RuntimeError

    Traceback (most recent call last): File "demo_nicp.py", line 27, in target_lm_index, lm_mask = get_mesh_landmark(norm_meshes, dummy_render) File "/data/pytorch-nicp/landmark.py", line 37, in get_mesh_landmark row_index = row_index.unsqueeze(2).unsqueeze(3).expand(landmarks.shape[0], landmarks.shape[1], shape_img.shape[2], shape_img.shape[3]) RuntimeError: The expanded size of the tensor (1) must match the existing size (2) at non-singleton dimension 1. Target sizes: [1, 1, 512, 3]. Tensor sizes: [1, 2, 1, 1]

    I have already configured the environment, but there seem to be some problems in the code. What can I do to solve this problem?

    opened by 675492062 8
Releases(v0.1)
Owner
Haozhe Wu
Research interests in Computer Vision and Machine Learning.