MoCap-Solver: A Neural Solver for Optical Motion Capture Data

Overview

1. Description

This repository contains the source code of MoCap-Solver and of the baseline method [Holden 2018].

MoCap-Solver is a data-driven, robust marker denoising method that takes raw mocap markers as input and outputs the corresponding clean markers and skeleton motions. It is based on our work published at SIGGRAPH 2021:

MoCap-Solver: A Neural Solver for Optical Motion Capture Data.

To configure this project, run the following commands in Anaconda:

conda create -n MoCapSolver pip python=3.6
conda activate MoCapSolver
conda install cudatoolkit=10.1.243
conda install cudnn=7.6.5
conda install numpy=1.17.0
conda install matplotlib=3.1.3
conda install json5=0.9.1
conda install pyquaternion=0.9.9
conda install h5py=2.10.0
conda install tqdm=4.56.0
conda install tensorflow-gpu==1.13.1
conda install keras==2.2.5
conda install chumpy==0.70
pip install opencv-python==4.5.3.56
conda install pytorch==1.6.0 torchvision==0.7.0 cudatoolkit=10.1 -c pytorch
conda install tensorboard==1.15.1
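
After installing the packages, a quick sanity check (a minimal sketch; not part of the repository) can confirm that PyTorch sees the GPU:

import torch

print("PyTorch:", torch.__version__)               # expected: 1.6.0
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))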

2. Generate synthetic dataset

Download the project SMPLPYTORCH, with the SMPL models downloaded and configured, and put its subfolder "smplpytorch" into the folder "external".

Put the CMU mocap dataset from the AMASS dataset into the folder

external/CMU

and download 'smpl_data.npz' from the project SURREAL and put it into "external".
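
After these steps, the "external" folder should look roughly like this (a sketch showing only the items mentioned above):

external/
    smplpytorch/
    CMU/
    smpl_data.npz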

Finally, run the following script to generate the training and testing datasets.

python generate_dataset.py

We use a SEED to randomly split the data into training and testing sets and to randomly generate noise. You can change the SEED value to generate different datasets, as sketched below.
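
For illustration, a minimal sketch of how a seeded split and noise generation might look (the variable names, split ratio, and noise scale here are hypothetical; the actual interface of generate_dataset.py may differ):

import random
import numpy as np

SEED = 100  # set a different value to generate a different dataset
random.seed(SEED)
np.random.seed(SEED)

num_sequences = 2000                      # hypothetical number of mocap clips
markers = np.zeros((240, 56, 3))          # hypothetical clean markers, N * 56 * 3

# reproducible train/test split over sequence indices
indices = np.random.permutation(num_sequences)
split = int(0.8 * num_sequences)
train_idx, test_idx = indices[:split], indices[split:]

# reproducible synthetic position noise on the markers
noisy_markers = markers + np.random.normal(0.0, 0.01, size=markers.shape)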

If you want to generate training data from your own mocap sequences, three kinds of data are needed for each sequence: the raw data, the clean data, and the bind pose.

  • The raw data: the animations of the raw markers captured by optical mocap devices.
  • The clean data: the corresponding ground-truth skinned mesh animations, containing clean markers and the skeleton animation. The skeletons of all mocap sequences must be homogeneous, that is, the number of joints and the hierarchy must be consistent. The clean markers are skinned to the skeleton, and the skinning weights must be consistent across all sequences.
  • The bind pose: the positions of the skeleton joints and the corresponding clean markers, as specified below.
Concretely, each sequence is described by the following arrays (N is the number of frames):

M: the global marker positions of the clean mocap sequence. N * 56 * 3
M1: the global marker positions of the raw mocap sequence. N * 56 * 3
J_R: the global rotation matrix of each joint over the sequence. N * 24 * 3 * 3
J_t: the global joint positions over the sequence. N * 24 * 3
J: the joint positions of the T-pose. 24 * 3
Marker_config: the marker configuration of the bind pose, i.e. the local position of each marker with respect to the local frame of each joint. 56 * 24 * 3
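
For reference, the marker configuration can be computed from the bind pose by expressing each global marker position in the local frame of each joint, i.e. Marker_config[m, j] = R_j^T (p_m - t_j). A minimal sketch (illustrative; the actual code in this repository may differ):

import numpy as np

def compute_marker_config(markers, joint_rot, joint_pos):
    """Express each global bind-pose marker position in the local frame of each joint.

    markers:   (56, 3)     global marker positions at the bind pose
    joint_rot: (24, 3, 3)  global rotation matrix of each joint at the bind pose
    joint_pos: (24, 3)     global position of each joint at the bind pose
    returns:   (56, 24, 3) Marker_config
    """
    config = np.zeros((markers.shape[0], joint_pos.shape[0], 3))
    for m in range(markers.shape[0]):
        for j in range(joint_pos.shape[0]):
            # p_local = R_j^T (p_m - t_j)
            config[m, j] = joint_rot[j].T @ (markers[m] - joint_pos[j])
    return config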

The order of the markers and skeleton joints used in our algorithm is as follows:

Marker_order = {
            "ARIEL": 0, "C7": 1, "CLAV": 2, "L4": 3, "LANK": 4, "LBHD": 5, "LBSH": 6, "LBWT": 7, "LELB": 8, "LFHD": 9,
            "LFSH": 10, "LFWT": 11, "LHEL": 12, "LHIP": 13,
            "LIEL": 14, "LIHAND": 15, "LIWR": 16, "LKNE": 17, "LKNI": 18, "LMT1": 19, "LMT5": 20, "LMWT": 21,
            "LOHAND": 22, "LOWR": 23, "LSHN": 24, "LTOE": 25, "LTSH": 26,
            "LUPA": 27, "LWRE": 28, "RANK": 29, "RBHD": 30, "RBSH": 31, "RBWT": 32, "RELB": 33, "RFHD": 34, "RFSH": 35,
            "RFWT": 36, "RHEL": 37, "RHIP": 38, "RIEL": 39, "RIHAND": 40,
            "RIWR": 41, "RKNE": 42, "RKNI": 43, "RMT1": 44, "RMT5": 45, "RMWT": 46, "ROHAND": 47, "ROWR": 48,
            "RSHN": 49, "RTOE": 50, "RTSH": 51, "RUPA": 52, "RWRE": 53, "STRN": 54, "T10": 55} // The order of markers

Skeleton_order = {"Pelvis": 0, "L_Hip": 1, "L_Knee": 2, "L_Ankle": 3, "L_Foot": 4, "R_Hip": 5, "R_Knee": 6, "R_Ankle": 7,
            "R_Foot": 8, "Spine1": 9, "Spine2": 10, "Spine3": 11, "L_Collar": 12, "L_Shoulder": 13, "L_Elbow": 14,
            "L_Wrist": 15, "L_Hand": 16, "Neck": 17, "Head": 18, "R_Collar": 19, "R_Shoulder": 20, "R_Elbow": 21,
            "R_Wrist": 22, "R_Hand": 23}// The order of skeletons.

3. Train and evaluate

3.1 MoCap-Solver

We can train and evaluate MoCap-Solver by running the following script:

python train_and_evaluate_MoCap_Solver.py

3.2 Train and evaluate [Holden 2018]

We also provide our implementation of [Holden 2018], which serves as the baseline for mocap data solving.

Once the mocap dataset is prepared, we can train and evaluate the [Holden 2018] model by running the following script:

python train_and_evaluate_Holden2018.py

3.3 Pre-trained models

We set the SEED to 100, 200, 300, and 400, generating four different datasets. We trained MoCap-Solver and [Holden 2018] on each of these datasets and evaluated the errors on the corresponding test sets; the evaluation results are shown in the table.

The pretrained models can be downloaded from Google Drive. To evaluate a pretrained model, copy all the files from one of the seed folders (the folder must be consistent with the SEED parameter) into models/, and run the evaluation script:

python evaluate_MoCap_Solver.py

In the original implementation of MoCap-Solver and [Holden 2018] reported in our paper, markers and skeletons were normalized using the average bone length of the dataset. However, this is problematic when deploying the algorithm in a production environment, since the ground-truth skeletons of the test data are unknown in practice. In this released version, such normalization is therefore removed, and the evaluation error is slightly higher than in our original implementation since the task has become more complex.
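
For clarity, the removed normalization scaled each sequence by the ratio between the dataset's average bone length and the sequence's bone length, which requires the ground-truth skeleton of the test sequence. A minimal sketch of the idea (illustrative only; not part of the released code):

import numpy as np

def mean_bone_length(joints, parents):
    """joints: (24, 3) T-pose joint positions; parents: parent index per joint, -1 for the root."""
    lengths = [np.linalg.norm(joints[j] - joints[p]) for j, p in enumerate(parents) if p >= 0]
    return np.mean(lengths)

# scale = dataset_mean_bone_length / mean_bone_length(test_joints, parents)
# markers_normalized = markers * scale  # needs the ground-truth test skeleton -> unusable in production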

4. Typos

In the loss function (3-4) of our paper, the first term should read alpha_1 * D(Y, X), where X denotes the ground-truth clean markers and Y the predicted clean markers.
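
In other words, with D taken as a distance between marker sets (illustrated here as a mean squared error; the exact form is defined in the paper), the corrected first term reads:

import torch

def first_loss_term(Y, X, alpha_1=1.0):
    """alpha_1 * D(Y, X): Y are the predicted clean markers, X the ground-truth clean markers.
    D is illustrated as a mean squared error; see equation (3-4) of the paper for the exact form."""
    return alpha_1 * torch.mean((Y - X) ** 2)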

5. Citation

If you use this code for your research, please cite our paper:

@article{kang2021mocapsolver,
  author = {Chen, Kang and Wang, Yupan and Zhang, Song-Hai and Xu, Sen-Zhe and Zhang, Weidong and Hu, Shi-Min},
  title = {MoCap-Solver: A Neural Solver for Optical Motion Capture Data},
  journal = {ACM Transactions on Graphics (TOG)},
  volume = {40},
  number = {4},
  pages = {84},
  year = {2021},
  publisher = {ACM}
}