GPU Accelerated Non-rigid ICP for surface registration


Introduction

Previous non-rigid ICP algorithms are usually implemented on the CPU and need to solve a sparse least-squares problem, which is time consuming. In this repo, we implement a PyTorch version of the NICP algorithm based on the paper by Amberg et al. Specifically, we leverage AMSGrad to optimize the per-vertex linear regression, and then find the nearest points iteratively. Additionally, we smooth the calculated mesh with a Laplacian smoothness term, which also makes the wireframe neater.
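
As a rough illustration of the optimization loop (not the exact code in this repo), one iteration could look like the sketch below: per-vertex affine parameters are optimized with AMSGrad against a nearest-point data term, an edge-based stiffness term, and a Laplacian smoothness term. The function and variable names are illustrative assumptions, and the landmark term is omitted for brevity.

import torch
from pytorch3d.structures import Meshes
from pytorch3d.ops import knn_points
from pytorch3d.loss import mesh_laplacian_smoothing

def nicp_iteration(template: Meshes, target_points, affine, optimizer, stiffness):
    # affine: (V, 3, 4) per-vertex affine transforms, optimized with AMSGrad,
    # e.g. optimizer = torch.optim.Adam([affine], lr=1e-3, amsgrad=True)
    verts = template.verts_packed()                                    # (V, 3)
    homo = torch.cat([verts, torch.ones_like(verts[:, :1])], dim=1)    # (V, 4)
    deformed = torch.bmm(affine, homo.unsqueeze(-1)).squeeze(-1)       # (V, 3)

    # data term: distance to the nearest target point, re-searched every iteration
    knn = knn_points(deformed.unsqueeze(0), target_points.unsqueeze(0), K=1)
    data_loss = knn.dists.mean()

    # stiffness term: neighbouring vertices should deform similarly
    edges = template.edges_packed()                                    # (E, 2)
    stiffness_loss = (affine[edges[:, 0]] - affine[edges[:, 1]]).pow(2).mean()

    # Laplacian smoothness keeps the deformed wireframe neat
    deformed_mesh = Meshes(verts=[deformed], faces=[template.faces_packed()])
    laplacian_loss = mesh_laplacian_smoothing(deformed_mesh)

    loss = data_loss + stiffness * stiffness_loss + laplacian_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()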


Quick Start

Install

We use Python 3.8 and CUDA 10.2 for the implementation. The code is tested on Ubuntu 20.04.

  • pytorch3d cannot be installed directly with pip install pytorch3d; for installation instructions, see pytorch3d.
  • For other packages, run
pip install -r requirements.txt
  • For the template face model, we currently use a processed version of the BFM face model from 3DMMfitting-pytorch. Download BFM09_model_info.mat from 3DMMfitting-pytorch and put it into the ./BFM folder.
  • For the demo, run
python demo_nicp.py

We show demos for NICP mesh2mesh and NICP mesh2pointcloud registration. There are two parameter sets for registration:

milestones = set([50, 80, 100, 110, 120, 130, 140])
stiffness_weights = np.array([50, 20, 5, 2, 0.8, 0.5, 0.35, 0.2])
landmark_weights = np.array([5, 2, 0.5, 0, 0, 0, 0, 0])

This parameter set is used for registration on fine-grained meshes.

milestones = set([50, 100])
stiffness_weights = np.array([50, 20, 5])
landmark_weights = np.array([50, 20, 5])

This parameter set is used for registration on noisy point clouds.
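
The milestones appear to be iteration indices at which the optimization advances to the next stiffness/landmark weight pair. A hedged sketch of how such a schedule could be consumed is shown below; the total iteration count and the per-iteration call are illustrative assumptions, not the exact code in this repo.

import numpy as np

milestones = set([50, 80, 100, 110, 120, 130, 140])
stiffness_weights = np.array([50, 20, 5, 2, 0.8, 0.5, 0.35, 0.2])
landmark_weights = np.array([5, 2, 0.5, 0, 0, 0, 0, 0])

stage = 0
for iteration in range(150):             # 150 total iterations: an assumption
    if iteration in milestones:
        stage += 1                        # advance to the next (looser) weight pair
    stiffness = stiffness_weights[stage]
    lm_weight = landmark_weights[stage]
    # ... run one NICP iteration with these weights (see the sketch in the Introduction)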

Template Model

You can also use your own template face model with manually specified landmarks; a rough sketch of how that could be set up follows.
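
A minimal sketch of preparing a custom template, assuming a pytorch3d mesh and a hand-picked list of landmark vertex indices. The file name and the placeholder landmark indices are illustrative, and the registration entry point should be adapted from demo_nicp.py rather than taken from this snippet.

import torch
from pytorch3d.io import load_objs_as_meshes

device = torch.device('cuda:0')

# load your own template face mesh
template_mesh = load_objs_as_meshes(['my_template.obj'], device=device)

# manually specified landmark vertex indices of the template (placeholder values)
template_lm_index = torch.tensor([[3827, 1052, 4697]], device=device)

# adapt demo_nicp.py to pass template_mesh / template_lm_index instead of the BFM model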

Todo

Some functions are already written batch-wise, but batch NICP is not supported yet. We will support batch NICP in future releases.

Comments
  • Lack of file “BFM09_model_info.mat”

    Traceback (most recent call last):
      File "demo_nicp.py", line 28, in <module>
        bfm_meshes, bfm_lm_index = load_bfm_model(torch.device('cuda:0'))
      File "/data/pytorch-nicp/bfm_model.py", line 15, in load_bfm_model
        bfm_meta_data = loadmat('BFM/BFM09_model_info.mat')
      File "/root/anaconda3/envs/pytorch3d/lib/python3.8/site-packages/scipy/io/matlab/mio.py", line 224, in loadmat
        with _open_file_context(file_name, appendmat) as f:
      File "/root/anaconda3/envs/pytorch3d/lib/python3.8/contextlib.py", line 113, in __enter__
        return next(self.gen)
      File "/root/anaconda3/envs/pytorch3d/lib/python3.8/site-packages/scipy/io/matlab/mio.py", line 17, in _open_file_context
        f, opened = _open_file(file_like, appendmat, mode)
      File "/root/anaconda3/envs/pytorch3d/lib/python3.8/site-packages/scipy/io/matlab/mio.py", line 45, in _open_file
        return open(file_like, mode), True
    FileNotFoundError: [Errno 2] No such file or directory: 'BFM/BFM09_model_info.mat'

    In 3DMMfitting-pytorch, there are only these files: BFM_exp_idx.mat BFM_front_idx.mat facemodel_info.mat README.md select_vertex_id.mat similarity_Lm3D_all.mat std_exp.txt

    opened by 675492062 2
  • What is the expected time needed for running demo_nicp.py?

    Hello,

    On my computer it seems quite slow to run demo_nicp.py. It took more than 1 minute to get final.obj. Is this correct?

    I ran AMM_NRR for non-rigid ICP registration with two 7000-vertex meshes. It needs about 1 second on the CPU on my computer. With a GPU, it might be possible to do the same work in less than 100 ms?

    Thank you!

    opened by 1939938853 0
  • Hi, with landmarks: `landmarks = torch.from_numpy(np.array(landmarks)).to(device).long()`, maybe you can  reshape landmarks from torch.Size([1, 1, 68, 2]) to  torch.Size([1, 68, 2])


    Originally posted by @wuhaozhe in https://github.com/wuhaozhe/pytorch-nicp/issues/3#issuecomment-971453681

    Hi! I got the outputs torch.Size([1, 68, 512, 3]), torch.Size([1, 68, 2]), torch.Size([1, 512, 512, 3]). I think the shapes of the following tensors are right, but I meet the same problem:

    lm_vertex = torch.gather(lm_vertex, 2, column_index)
    RuntimeError: CUDA error: device-side assert triggered

    landmarks = torch.from_numpy(np.array(landmarks)).to(device).long()
    
    row_index = landmarks[:, :, 1].view(landmarks.shape[0], -1)
    column_index = landmarks[:, :, 0].view(landmarks.shape[0], -1)
    row_index = row_index.unsqueeze(2).unsqueeze(3).expand(landmarks.shape[0], landmarks.shape[1], shape_img.shape[2], shape_img.shape[3])
    column_index = column_index.unsqueeze(1).unsqueeze(3).expand(landmarks.shape[0], landmarks.shape[1], landmarks.shape[1], shape_img.shape[3])
    print(row_index.shape, landmarks.shape, shape_img.shape)
    
    opened by alicedingyueming 1
  • RuntimeError

    Traceback (most recent call last):
      File "demo_nicp.py", line 27, in <module>
        target_lm_index, lm_mask = get_mesh_landmark(norm_meshes, dummy_render)
      File "/data/pytorch-nicp/landmark.py", line 37, in get_mesh_landmark
        row_index = row_index.unsqueeze(2).unsqueeze(3).expand(landmarks.shape[0], landmarks.shape[1], shape_img.shape[2], shape_img.shape[3])
    RuntimeError: The expanded size of the tensor (1) must match the existing size (2) at non-singleton dimension 1. Target sizes: [1, 1, 512, 3]. Tensor sizes: [1, 2, 1, 1]

    I have already configured the environment, but there seem to be some problems in the code. What can I do to solve this problem?

    opened by 675492062 8
Releases(v0.1)
Owner
Haozhe Wu
Research interests in Computer Vision and Machine Learning.