This repository is the official PyTorch implementation of ContextPose: Context Modeling in 3D Human Pose Estimation: A Unified Perspective (CVPR 2021).

Overview

Context Modeling in 3D Human Pose Estimation: A Unified Perspective (CVPR 2021)

Introduction

This repository is the official PyTorch implementation of ContextPose, Context Modeling in 3D Human Pose Estimation: A Unified Perspective (CVPR 2021). Below is an example pipeline of using ContextPose for 3D pose estimation (see the overall pipeline figure).

Quick start

Environment

This project is developed with Python >= 3.5 on Ubuntu 16.04. NVIDIA GPUs are required.

Installation

  1. Clone this repo; we'll call the directory that you cloned ${ContextPose_ROOT}.
  2. Install dependencies.
    1. Install PyTorch >= v1.4.0 following the official instructions (a quick environment check is sketched after the directory tree below).
    2. Install other packages. This project doesn't have any special or difficult-to-install dependencies. Everything can be installed with:
    pip install -r requirements.txt
  3. Download the data following the next section. In summary, your directory tree should look like this:
${ContextPose_ROOT}
├── data
├── experiments
├── mvn
├── logs 
├── README.md
├── process_h36m.sh
├── requirements.txt
├── train.py
└── train.sh
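
After installation, a quick environment check (not part of the repo, just a hedged example) can confirm the requirements above are met, i.e. PyTorch >= v1.4.0 with a visible NVIDIA GPU:

# Hypothetical environment check (not part of the repo).
import torch

print("PyTorch version:", torch.__version__)          # expect >= 1.4.0
print("CUDA available:", torch.cuda.is_available())   # expect True on a machine with NVIDIA GPUs
print("GPU count:", torch.cuda.device_count())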

Data

Note: We provide the training and evaluation code on Human3.6M dataset. We do NOT provide the source data. We do NOT own the data or have permission to redistribute the data. Please download according to the official instructions.

Human3.6M

  1. Install the CDF C Library by following https://stackoverflow.com/questions/37232008/how-read-common-data-format-cdf-in-python/58167429#58167429, which is necessary for processing Human3.6M data.
  2. Download and preprocess the dataset by following the instructions in mvn/datasets/human36m_preprocessing/README.md.
  3. To train the ContextPose model, you need rough estimates of the pelvis' 3D position for both the train and val splits. In the paper we use the precalculated 3D skeletons estimated by the Algebraic model proposed in learnable-triangulation (an open-source repo whose Volumetric model we adopt as our baseline). All pretrained weights and precalculated 3D skeletons can be downloaded at once from here and placed in ./data/pretrained. Here, we fine-tuned the pretrained weights on the Human3.6M dataset for another 20 epochs; please download the weights from here and place them in ./data/pretrained/human36m.
  4. We provide the mean and standard deviation of limb lengths on the Human3.6M training set; please download the file from here and place it in ./data/human36m/extra (a small sketch for inspecting this file follows the directory tree below).
  5. Finally, your data directory should look like this (for a more detailed directory tree, please refer to README.md):
${ContextPose_ROOT}
|-- data
    |-- human36m
    |   |-- extra
    |   |   |-- una-dinosauria-data
    |   |   |-- ...
    |   |   `-- mean_and_std_limb_length.h5
    |   `-- ...
    `-- pretrained
        |-- human36m
            |-- human36m_alg_10-04-2019
            |-- human36m_vol_softmax_10-08-2019
            `-- backbone_weights.pth
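
To verify the extra data is in place, the following sketch (not part of the repo, and assuming h5py is available) inspects mean_and_std_limb_length.h5; the dataset names inside are not documented here, so it lists whatever the file contains instead of assuming specific keys:

# Hypothetical inspection of the limb-length statistics file; dataset names are unknown here,
# so we print whatever the file contains rather than assuming specific keys.
import h5py

path = "data/human36m/extra/mean_and_std_limb_length.h5"
with h5py.File(path, "r") as f:
    for key in f.keys():
        print(key, f[key])   # prints each top-level dataset name, shape and dtype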

Train

Every experiment is defined by a .yaml config file. Configs for the experiments from the paper can be found in the ./experiments directory. You can use the train.sh script or run the commands below directly:

Single-GPU

To train a Volumetric model with softmax aggregation using 1 GPU, run:

python train.py \
  --config experiments/human36m/train/human36m_vol_softmax_single.yaml \
  --logdir ./logs

Training starts from the config file specified by --config, and logs (including TensorBoard files) are written to the directory given by --logdir.
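
Since an experiment is fully described by its YAML config, you can inspect or tweak one programmatically before training. This is a minimal sketch assuming PyYAML is installed; it is not the repo's own config loader:

# Hypothetical config inspection (uses PyYAML directly, not the repo's config loader).
import yaml

with open("experiments/human36m/train/human36m_vol_softmax_single.yaml") as f:
    cfg = yaml.safe_load(f)

print(list(cfg.keys()))   # top-level sections of the experiment definition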

Multi-GPU

Multi-GPU training is implemented with PyTorch's DistributedDataParallel. It can be used both for single-machine and multi-machine (cluster) training. To spawn the processes, use the PyTorch launch utility (torch.distributed.launch).
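
For reference, the sketch below shows the generic DistributedDataParallel pattern that the launch utility drives. It is the standard PyTorch recipe, not the repo's actual training code, and the --sync_bn handling is an assumption:

# Minimal sketch of the standard DDP setup (not the repo's actual code).
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def setup_and_wrap(model):
    # torch.distributed.launch provides the per-process rank via --local_rank or,
    # depending on the version, the LOCAL_RANK environment variable.
    local_rank = int(os.environ.get("LOCAL_RANK", 0))
    dist.init_process_group(backend="nccl")                       # one process per GPU
    torch.cuda.set_device(local_rank)
    model = torch.nn.SyncBatchNorm.convert_sync_batchnorm(model)  # what --sync_bn presumably enables
    model = model.cuda(local_rank)
    return DDP(model, device_ids=[local_rank])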

To train our model using 4 GPUs on a single machine, run:

python -m torch.distributed.launch --nproc_per_node=4 --master_port=2345 --sync_bn \
  train.py  \
  --config experiments/human36m/train/human36m_vol_softmax_single.yaml \
  --logdir ./logs

Evaluation

After training, you can evaluate the model. Inside the same config file, add the path to the learned weights (they are dumped to the logs directory during training):

model:
    init_weights: true
    checkpoint: {PATH_TO_WEIGHTS}
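
The checkpoint file is whatever was dumped under --logdir during training; below is a quick, hypothetical way to inspect it before wiring its path into the config (the path is a placeholder):

# Hypothetical checkpoint inspection; the path is a placeholder for a file dumped under --logdir.
import torch

ckpt = torch.load("logs/<experiment_name>/checkpoints/weights.pth", map_location="cpu")
print(type(ckpt))
if isinstance(ckpt, dict):
    print(list(ckpt.keys())[:10])   # peek at the stored keys (parameter names or a nested state dict)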

Single-GPU

Run:

python train.py \
  --eval --eval_dataset val \
  --config experiments/human36m/eval/human36m_vol_softmax_single.yaml \
  --logdir ./logs

Multi-GPU

Using 4 GPUs on a single machine, run:

python -m torch.distributed.launch --nproc_per_node=4 --master_port=2345 \
  train.py  --eval --eval_dataset val \
  --config experiments/human36m/eval/human36m_vol_softmax_single.yaml \
  --logdir ./logs

The --eval_dataset argument can be val or train. Results are written to the logs directory and can also be viewed in TensorBoard.

Results & Model Zoo

  • We evaluate ContextPose on two large publicly available benchmarks: Human3.6M and MPI-INF-3DHP.
  • To get the results reported in our paper, you can download the weights and place them in ./logs.
Dataset to be evaluated | Weights | Results
Human3.6M               | link    | 43.4 mm (MPJPE)
MPI-INF-3DHP            | link    | 81.5 (PCK), 43.6 (AUC)
  • For H36M, the main metric is MPJPE (Mean Per Joint Position Error), the L2 distance between predicted and ground-truth joint positions averaged over all joints. To get the result, run the evaluation as described above.
  • For 3DHP, Percentage of Correctly estimated Keypoints (PCK) and Area Under the Curve (AUC) are reported. Note that we directly apply our model trained on H36M to 3DHP without re-training, to evaluate generalization performance. To prevent over-fitting to H36M-style appearance, the only change to the training strategy is that we freeze the backbone for the first 20 epochs before training the whole network end-to-end. If you want to evaluate on MPI-INF-3DHP, you can save the results and use the official evaluation code in Matlab (a hypothetical export sketch follows this list).
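
If you take the Matlab route for 3DHP, one hypothetical way (not provided by this repo) to hand predictions over is to export them to a .mat file with SciPy; the variable name and array layout below are assumptions you would adapt to the official evaluation script:

# Hypothetical export of predicted 3D joints for the official MPI-INF-3DHP Matlab evaluation.
# The variable name and array layout are assumptions, not a documented format.
import numpy as np
import scipy.io as sio

num_frames, num_joints = 100, 17                                       # placeholder sizes
pred_joints = np.zeros((num_frames, num_joints, 3), dtype=np.float32)  # (frames, joints, xyz) in mm
sio.savemat("mpi_inf_3dhp_preds.mat", {"pred_joints": pred_joints})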

Human3.6M

MPI-INF-3DHP

Citation

If you use our code or models in your research, please cite:

@article{ma2021context,
  title={Context Modeling in 3D Human Pose Estimation: A Unified Perspective},
  author={Ma, Xiaoxuan and Su, Jiajun and Wang, Chunyu and Ci, Hai and Wang, Yizhou},
  journal={arXiv preprint arXiv:2103.15507},
  year={2021}
} 

Acknowledgement

This repo is built on https://github.com/karfly/learnable-triangulation-pytorch. Part of the data is provided by https://github.com/una-dinosauria/3d-pose-baseline.
