A PyTorch toolkit for 2D Human Pose Estimation.

Overview

PyTorch-Pose


PyTorch-Pose is a PyTorch implementation of the general pipeline for 2D single-person human pose estimation. The aim is to provide a unified interface for training, inference, and evaluation, together with data loaders offering various data augmentation options for the most popular human pose datasets (e.g., MPII Human Pose, LSP, and FLIC).

Some of the data preparation and augmentation code is adapted from the Stacked Hourglass Network. Thanks to the original author.

Update: this repository is now compatible with PyTorch 0.4.1/1.0!

Features

  • Multi-thread data loading (see the sketch after this list)
  • Multi-GPU training
  • Logger
  • Training/testing results visualization
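
The multi-thread data loading boils down to a standard torch.utils.data.DataLoader with several worker processes feeding (image, target heatmap) pairs to the network. The following is a minimal, self-contained sketch of that pattern; ToyPoseDataset is hypothetical and not part of this repository, it only stands in for the real MPII/LSP/FLIC loaders.

    import torch
    from torch.utils.data import Dataset, DataLoader

    class ToyPoseDataset(Dataset):
        """Produces random 3x256x256 'images' and 16x64x64 joint heatmaps."""
        def __init__(self, length=64, num_joints=16):
            self.length = length
            self.num_joints = num_joints

        def __len__(self):
            return self.length

        def __getitem__(self, idx):
            img = torch.rand(3, 256, 256)                 # stand-in for a cropped person image
            target = torch.rand(self.num_joints, 64, 64)  # stand-in for ground-truth heatmaps
            return img, target

    if __name__ == '__main__':
        # num_workers > 0 enables the multi-worker loading listed above
        loader = DataLoader(ToyPoseDataset(), batch_size=6, shuffle=True, num_workers=4)
        for imgs, targets in loader:
            print(imgs.shape, targets.shape)  # (6, 3, 256, 256) and (6, 16, 64, 64)
            break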

Installation

  1. PyTorch (>= 0.4.1): Please follow the installation instructions of PyTorch. Note that the code is developed with Python 2 and has not been tested with Python 3 yet.

  2. Clone the repository with submodule

    git clone --recursive https://github.com/bearpaw/pytorch-pose.git
    
  3. Create a symbolic link to the images directory of the MPII dataset (a quick sanity check for this step is sketched after the list):

    ln -s PATH_TO_MPII_IMAGES_DIR data/mpii/images
    

    For training/testing on COCO, please refer to COCO Readme.

  4. Download the annotation file for the dataset.
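
As a convenience (not part of the repository), the following sketch checks that the symlink created in step 3 resolves to an existing directory before you start training:

    import os

    images_dir = 'data/mpii/images'
    if os.path.isdir(images_dir):
        print('Found %s with %d entries.' % (images_dir, len(os.listdir(images_dir))))
    else:
        print('Missing or broken link: %s' % images_dir)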

Usage

Please refer to TRAINING.md for detailed training recipes!

Testing

You may download our pretrained models (e.g., a 2-stack hourglass model) for a quick start.

Run the following command in a terminal to evaluate the model on the MPII validation split (the train/val split follows Tompson et al., CVPR 2015).

CUDA_VISIBLE_DEVICES=0 python example/main.py --dataset mpii -a hg --stacks 2 --blocks 1 --checkpoint checkpoint/mpii/hg_s2_b1 --resume checkpoint/mpii/hg_s2_b1/model_best.pth.tar -e -d
  • -a specifies the network architecture
  • --resume loads the weights from the specified checkpoint
  • -e stands for evaluation only
  • -d visualizes the network output. It can also be used during training

The predictions will be saved as a .mat file (preds_valid.mat), a 2958x16x2 matrix, in the folder specified by --checkpoint.
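
If you want to inspect the saved predictions from Python, something like the following works; the key holding the array inside the .mat file is assumed to be preds here, so adjust it if your file stores it under a different name.

    import scipy.io

    mat = scipy.io.loadmat('checkpoint/mpii/hg_s2_b1/preds_valid.mat')
    preds = mat['preds']   # assumed key; check mat.keys() if your file differs
    print(preds.shape)     # expected (2958, 16, 2): images x joints x (x, y)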

Evaluate the PCKh@0.5 score

Evaluate with MATLAB

You may use the MATLAB script evaluation/eval_PCKh.m to evaluate your predictions. The evaluation code is ported from Tompson et al., CVPR 2015.

The results (PCKh@0.5 scores) of models trained with this code are reported in the following table.

Model            Head   Shoulder  Elbow  Wrist  Hip    Knee   Ankle  Mean
hg_s2_b1 (last)  95.80  94.57     88.12  83.31  86.24  80.88  77.44  86.76
hg_s2_b1 (best)  95.87  94.68     88.27  83.64  86.29  81.20  77.70  86.95
hg_s8_b1 (last)  96.79  95.19     90.08  85.32  87.48  84.26  80.73  88.64
hg_s8_b1 (best)  96.79  95.28     90.27  85.56  87.57  84.30  81.06  88.78

The training/validation curves are visualized below.

[training/validation curve]

Evaluate with Python

You may also evaluate the predictions by running python evaluation/eval_PCKh.py. It produces exactly the same result as the MATLAB script. Thanks to @sssruhan1 for the contribution.
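
For reference, PCKh@0.5 counts a predicted joint as correct when its distance to the ground-truth joint is below 0.5 times the head segment size. The sketch below computes the metric on random data; it is a simplified illustration, not a replacement for evaluation/eval_PCKh.py (the official protocol additionally handles missing annotations and reports per-joint scores).

    import numpy as np

    def pckh(preds, gts, head_sizes, threshold=0.5):
        """preds, gts: (N, J, 2) joint coordinates; head_sizes: (N,) per-image head size."""
        dists = np.linalg.norm(preds - gts, axis=2)   # (N, J) pixel distances
        normalized = dists / head_sizes[:, None]      # normalize by head segment size
        return 100.0 * (normalized <= threshold).mean()

    if __name__ == '__main__':
        rng = np.random.RandomState(0)
        gts = rng.rand(2958, 16, 2) * 200          # fake ground-truth joints
        preds = gts + rng.randn(2958, 16, 2) * 10  # fake, slightly perturbed predictions
        head_sizes = np.full(2958, 60.0)           # fake per-image head segment sizes
        print('PCKh@0.5 = %.2f' % pckh(preds, gts, head_sizes))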

Training

Run the following command in a terminal to train an 8-stack hourglass network on the MPII human pose dataset.

CUDA_VISIBLE_DEVICES=0 python example/main.py --dataset mpii -a hg --stacks 8 --blocks 1 --checkpoint checkpoint/mpii/hg8 -j 4

Here,

  • CUDA_VISIBLE_DEVICES=0 identifies the GPU devices to use. For example, use CUDA_VISIBLE_DEVICES=0,1 to train with two GPUs with IDs 0 and 1 (a sketch of how these devices are used follows this list).
  • -j specifies how many workers to use for data loading.
  • --checkpoint specifies where to save the models, the log, and the predictions.
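
Multi-GPU training follows the usual PyTorch pattern: the devices exposed via CUDA_VISIBLE_DEVICES are consumed by wrapping the model in torch.nn.DataParallel, which splits each batch across the visible GPUs. The snippet below is a generic sketch of that pattern with a toy model, not the repository's exact training code.

    import torch
    import torch.nn as nn

    # toy fully-convolutional model standing in for the hourglass network
    model = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(), nn.Conv2d(64, 16, 1))
    if torch.cuda.is_available():
        # DataParallel splits each batch across all GPUs visible to the process
        model = nn.DataParallel(model).cuda()

    x = torch.rand(8, 3, 256, 256)
    if torch.cuda.is_available():
        x = x.cuda()
    print(model(x).shape)  # torch.Size([8, 16, 256, 256])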

Misc

Supported datasets

Supported models

Contribute

Please create a pull request if you want to contribute.

Owner

Wei Yang, NVIDIA Robotics Research Lab