GMR (Camera Motion Agnostic 3D Human Pose Estimation)

This repo provides the source code of our arXiv paper:
Seong Hyun Kim, Sunwon Jeong, Sungbum Park, and Ju Yong Chang, "Camera motion agnostic 3D human pose estimation," arXiv preprint arXiv:2112.00343, 2021.

Environment

  • Python : 3.6
  • Ubuntu : 18.04
  • CUDA : 11.1
  • cudnn : 8.0.5
  • torch : 1.7.1
  • torchvision : 0.8.2
  • GPU : one NVIDIA RTX 3090

Installation

  • First, install the required Python packages.

    pip install -r requirements.txt
  • Then, install torch and torchvision. We tested our code with torch 1.7.1 and torchvision 0.8.2, but it should also work with any torch version >= 1.5.0.
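
A CUDA 11.1 install matching the versions above might look like the following; the +cu111 wheel tag is an assumption tied to the Environment section, so adjust it to your CUDA setup:

    # Example for CUDA 11.1; change the +cu suffix to match your CUDA version.
    pip install torch==1.7.1+cu111 torchvision==0.8.2+cu111 -f https://download.pytorch.org/whl/torch_stable.html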

Quick Demo

  • Download the pretrained GMR model from [pretrained GMR] and place it as follows:

    ${GMR_ROOT}
     |-- results
         |-- GMR
             |-- final_model.pth
    
  • Download the other model files from [other model files] and place them as follows:

    ${GMR_ROOT}
     |-- data
         |-- gmr_data
             |-- J_regressor_extra.npy
             |-- J_regressor_h36m.npy
             |-- SMPL_NEUTRAL.pkl
             |-- gmm_08.pkl
             |-- smpl_mean_params.npz
             |-- spin_model_checkpoint.pth.tar
             |-- vibe_model_w_3dpw.pth.tar
             |-- vibe_model_wo_3dpw.pth.tar
    
  • Finally, download the demo videos from [demo videos] so that the repository root looks like this:

    ${GMR_ROOT}
    |-- configs
    |-- data
    |-- lib
    |-- results
    |-- scripts
    |-- demo.py
    |-- eval_3dpw.py
    |-- eval_synthetic.py
    |-- DEMO_VIDEO1.mp4
    |-- DEMO_VIDEO2.mp4
    |-- DEMO_VIDEO3.mp4
    |-- DEMO_VIDEO4.mp4
    |-- README.md
    |-- requirements.txt
    |-- run_eval_3dpw.sh
    |-- run_eval_synthetic.sh
    |-- run_train.sh
    |-- train.py
    

The demo code runs three stages in sequence: (bounding box tracking) → (VIBE) → (GMR). Run it on each demo video as follows:

python demo.py --vid_file DEMO_VIDEO1.mp4 --vid_type mp4 --vid_fps 30 --view_type back --cfg configs/GMR_config.yaml --output_folder './'

python demo.py --vid_file DEMO_VIDEO2.mp4 --vid_type mp4 --vid_fps 30 --view_type front_large --cfg configs/GMR_config.yaml --output_folder './'

python demo.py --vid_file DEMO_VIDEO3.mp4 --vid_type mp4 --vid_fps 30 --view_type back --cfg configs/GMR_config.yaml --output_folder './'

python demo.py --vid_file DEMO_VIDEO4.mp4 --vid_type mp4 --vid_fps 30 --view_type back --cfg configs/GMR_config.yaml --output_folder './'
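
To process all four demo videos in one pass, a small shell loop like the sketch below simply re-issues the commands above; DEMO_VIDEO2.mp4 is handled separately because it uses --view_type front_large:

    # Back-view demo videos
    for vid in DEMO_VIDEO1 DEMO_VIDEO3 DEMO_VIDEO4; do
        python demo.py --vid_file ${vid}.mp4 --vid_type mp4 --vid_fps 30 --view_type back --cfg configs/GMR_config.yaml --output_folder './'
    done
    # Front-view demo video
    python demo.py --vid_file DEMO_VIDEO2.mp4 --vid_type mp4 --vid_fps 30 --view_type front_large --cfg configs/GMR_config.yaml --output_folder './'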

Data

The data must follow the directory structure below.

${GMR_ROOT}
  |-- data
    |-- amass
      |-- ACCAD
      |-- BioMotionLab_NTroje
      |-- CMU
      |-- EKUT
      |-- Eyes_Japan_Dataset
      |-- HumanEva
      |-- KIT
      |-- MPI_HDM05
      |-- MPI_Limits
      |-- MPI_mosh
      |-- SFU
      |-- SSM_synced
      |-- TCD_handMocap
      |-- TotalCapture
      |-- Transitions_mocap
    |-- gmr_data
      |-- J_regressor_extra.npy
      |-- J_regressor_h36m.npy
      |-- SMPL_NEUTRAL.pkl
      |-- gmm_08.pkl
      |-- smpl_mean_params.npz
      |-- spin_model_checkpoint.pth.tar
      |-- vibe_model_w_3dpw.pth.tar
      |-- vibe_model_wo_3dpw.pth.tar
    |-- gmr_db
      |-- amass_train_db.pt
      |-- h36m_dsd_val_db.pt
      |-- 3dpw_test_db.pt
      |-- synthetic_camera_motion_off.pt
      |-- synthetic_camera_motion_on.pt
  • Download the AMASS dataset from this link and place it in data/amass. You can then generate the training data with the following command, or download the prepared training data from this link.
    source scripts/prepare_training_data.sh

  • Download the processed 3DPW data [data]
  • Download the processed Human3.6M data [data]
  • Download the synthetic dataset [data] (a sketch for inspecting these files follows below)
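
Since GMR builds on VIBE, the processed .pt database files are presumably dictionaries serialized with joblib (VIBE's convention); a one-liner like the sketch below prints the top-level keys, which depend on the preprocessing scripts. If joblib fails, try torch.load instead:

    # Inspect a processed database file (joblib serialization assumed, as in VIBE)
    python -c "import joblib; db = joblib.load('data/gmr_db/3dpw_test_db.pt'); print(list(db.keys()))"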

Train

Run the command below to start training:

./run_train.sh
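
run_train.sh wraps train.py. A direct invocation would presumably look like the line below; the --cfg flag is an assumption carried over from the demo command, so check the script for the exact arguments:

    python train.py --cfg configs/GMR_config.yaml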

Evaluation

Run the commands below to start evaluation:

# Evaluation on 3DPW dataset
./run_eval_3dpw.sh

# Evaluation on synthetic dataset
./run_eval_synthetic.sh
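
Both scripts wrap the Python entry points shown in the directory tree (eval_3dpw.py and eval_synthetic.py). As with training, the direct calls below are a guess patterned on the demo command; the actual flags are defined in the shell scripts:

    python eval_3dpw.py --cfg configs/GMR_config.yaml
    python eval_synthetic.py --cfg configs/GMR_config.yaml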

References

We borrowed some scripts and models externally. Thanks to the authors for providing great resources.

  • The pretrained VIBE model and most of the functions are borrowed from VIBE.
  • The pretrained SPIN model is borrowed from SPIN.
  • The SMPL model files are borrowed from SPIN and SMPLify.

Owner

Seong Hyun Kim
M.S. student in CVLAB, Kwangwoon University