Code for "Neural Body: Implicit Neural Representations with Structured Latent Codes for Novel View Synthesis of Dynamic Humans" CVPR 2021 best paper candidate

News

  • 05/17/2021 To make comparisons on ZJU-MoCap easier, we provide the quantitative and qualitative results of other methods here, including Neural Volumes, Multi-view Neural Human Rendering, and Deferred Neural Human Rendering.
  • 05/13/2021 To make it easier for follow-up works to compare with our model, we release our rendering results on ZJU-MoCap here, along with a document that describes the training and test protocols.
  • 05/12/2021 The code now supports testing and visualization on unseen human poses.
  • 05/12/2021 We update the ZJU-MoCap dataset with better-fitted SMPL parameters obtained with EasyMocap. We also release a website for visualization. Please see here for the usage of the provided SMPL parameters.

Neural Body: Implicit Neural Representations with Structured Latent Codes for Novel View Synthesis of Dynamic Humans

Project Page | Video | Paper | Data


Neural Body: Implicit Neural Representations with Structured Latent Codes for Novel View Synthesis of Dynamic Humans
Sida Peng, Yuanqing Zhang, Yinghao Xu, Qianqian Wang, Qing Shuai, Hujun Bao, Xiaowei Zhou
CVPR 2021

Any questions or discussions are welcome!

Installation

Please see INSTALL.md for manual installation.

Installation using docker

Please see docker/README.md.

Thanks to Zhaoyi Wan for providing the docker implementation.

Run the code on a custom dataset

Please see CUSTOM.

Run the code on People-Snapshot

Please see INSTALL.md to download the dataset.

We provide the pretrained models here.

Process People-Snapshot

We already provide some processed data. If you want to process more videos from People-Snapshot, you can use tools/process_snapshot.py.

You can also visualize the SMPL parameters of People-Snapshot with tools/vis_snapshot.py.
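
For a quick sanity check of the processed data, you can inspect the per-frame parameter files directly. A minimal sketch, assuming the processing script writes one pickled parameter dict per frame (the path below is hypothetical; check the script's output for the actual layout):

    import numpy as np

    # Hypothetical path: one .npy file per frame holding a pickled dict of
    # SMPL parameters (e.g. poses, shapes, Rh, Th).
    params = np.load('data/people_snapshot/female-3-casual/params/0.npy',
                     allow_pickle=True).item()
    for key, value in params.items():
        print(key, np.asarray(value).shape)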

Visualization on People-Snapshot

Take the visualization on female-3-casual as an example. The commands for visualization are recorded in visualize.sh.

  1. Download the corresponding pretrained model and put it at $ROOT/data/trained_model/if_nerf/female3c/latest.pth.

  2. Visualization:

    • Visualize novel views of a single frame (a camera-path sketch follows this list)
    python run.py --type visualize --cfg_file configs/snapshot_exp/snapshot_f3c.yaml exp_name female3c vis_novel_view True num_render_views 144
    


    • Visualize views of dynamic humans with a fixed camera
    python run.py --type visualize --cfg_file configs/snapshot_exp/snapshot_f3c.yaml exp_name female3c vis_novel_pose True
    


    • Visualize mesh
    # generate meshes
    python run.py --type visualize --cfg_file configs/snapshot_exp/snapshot_f3c.yaml exp_name female3c vis_mesh True train.num_workers 0
    # visualize a specific mesh
    python tools/render_mesh.py --exp_name female3c --dataset people_snapshot --mesh_ind 226
    


  3. The results of visualization are located at $ROOT/data/render/female3c and $ROOT/data/perform/female3c.
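
The novel-view command above renders num_render_views viewpoints on a circle around the subject. A minimal NumPy sketch of how such a circular camera path can be built (OpenCV-style world-to-camera convention; this is an illustration, not the repo's exact camera code):

    import numpy as np

    def look_at(cam_pos, target, up=np.array([0.0, 0.0, 1.0])):
        # World-to-camera rotation that points the camera at `target`
        # (OpenCV convention: x right, y down, z forward).
        forward = target - cam_pos
        forward = forward / np.linalg.norm(forward)
        right = np.cross(forward, up)
        right = right / np.linalg.norm(right)
        down = np.cross(forward, right)
        R = np.stack([right, down, forward])  # rows are the camera axes
        t = -R @ cam_pos                      # world-to-camera translation
        return R, t

    def circle_path(center, radius=2.5, height=0.8, num_views=144):
        # One pose per view, evenly spaced on a circle around the subject.
        poses = []
        for theta in np.linspace(0.0, 2.0 * np.pi, num_views, endpoint=False):
            offset = np.array([radius * np.cos(theta),
                               radius * np.sin(theta), height])
            poses.append(look_at(center + offset, center))
        return poses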

Training on People-Snapshot

Take the training on female-3-casual as an example. The commands for training are recorded in train.sh; a sketch of the command-line config overrides follows the steps below.

  1. Train:
    # training
    python train_net.py --cfg_file configs/snapshot_exp/snapshot_f3c.yaml exp_name female3c resume False
    # distributed training
    python -m torch.distributed.launch --nproc_per_node=4 train_net.py --cfg_file configs/snapshot_exp/snapshot_f3c.yaml exp_name female3c resume False gpus "0, 1, 2, 3" distributed True
    
  2. Train with white background:
    # training
    python train_net.py --cfg_file configs/snapshot_exp/snapshot_f3c.yaml exp_name female3c resume False white_bkgd True
    
  3. TensorBoard:
    tensorboard --logdir data/record/if_nerf
    
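The trailing key-value pairs in these commands (exp_name female3c, resume False, and so on) override fields of the YAML config. A minimal sketch of this override pattern using yacs, which the repo's config system resembles (the field names below are illustrative):

    from yacs.config import CfgNode as CN

    # A tiny stand-in config; the real defaults live in the repo's config module.
    cfg = CN()
    cfg.exp_name = 'default'
    cfg.resume = True
    cfg.white_bkgd = False

    # Trailing `key value` pairs map to an alternating list; yacs
    # literal-evals the strings, so 'False' becomes the boolean False.
    cfg.merge_from_list(['exp_name', 'female3c', 'resume', 'False'])
    print(cfg.exp_name, cfg.resume)  # -> female3c False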

Run the code on ZJU-MoCap

Please see INSTALL.md to download the dataset.

We provide the pretrained models here.

Potential issues with the provided SMPL parameters

  1. The newly fitted parameters are located in new_params. Currently, the released pretrained models were trained on the previously fitted parameters, which are located in params.
  2. The SMPL parameters of ZJU-MoCap follow a different definition from that of MPI's smplx (sketched below).
    • If you want to extract vertices from the provided SMPL parameters, please use zju_smpl/extract_vertices.py.
    • The reason we use the current definition is described here.

It is okay to train Neural Body with SMPL parameters fitted by smplx.
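
Concretely, the ZJU-MoCap convention applies the global rotation Rh and translation Th to the already-posed vertices, instead of encoding the global rotation in the root joint as MPI's smplx does with global_orient. A minimal NumPy sketch (smpl_forward is a hypothetical stand-in for a real SMPL layer):

    import numpy as np
    import cv2

    def smpl_forward(poses, shapes):
        # Hypothetical stand-in: a real SMPL forward pass would return the
        # (6890, 3) vertices posed by `poses` WITHOUT any global rotation.
        return np.zeros((6890, 3))

    params = np.load('new_params/0.npy', allow_pickle=True).item()
    verts_local = smpl_forward(params['poses'], params['shapes'])

    # ZJU-MoCap convention: rotate/translate the posed vertices afterwards.
    Rh = np.asarray(params['Rh'], dtype=np.float64).reshape(3, 1)
    R = cv2.Rodrigues(Rh)[0]  # axis-angle -> 3x3 rotation matrix
    verts_world = verts_local @ R.T + np.asarray(params['Th']).reshape(1, 3)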

Test on ZJU-MoCap

The commands for testing are recorded in test.sh; a sketch of the evaluation metrics follows the steps below.

Take the test on sequence 313 as an example.

  1. Download the corresponding pretrained model and put it at $ROOT/data/trained_model/if_nerf/xyzc_313/latest.pth.
  2. Test on training human poses:
    python run.py --type evaluate --cfg_file configs/zju_mocap_exp/latent_xyzc_313.yaml exp_name xyzc_313
    
  3. Test on unseen human poses:
    python run.py --type evaluate --cfg_file configs/zju_mocap_exp/latent_xyzc_313.yaml exp_name xyzc_313 test_novel_pose True
    
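The evaluation reports per-image metrics; PSNR and SSIM are the ones used for the ZJU-MoCap comparisons. A minimal sketch with NumPy and scikit-image (a generic implementation, not necessarily identical to the repo's evaluator):

    import numpy as np
    from skimage.metrics import structural_similarity

    def psnr(pred, gt):
        # PSNR in dB for float images in [0, 1].
        mse = np.mean((pred - gt) ** 2)
        return -10.0 * np.log10(mse + 1e-10)

    def ssim(pred, gt):
        # SSIM over an HxWx3 float image; channel_axis needs scikit-image >= 0.19.
        return structural_similarity(pred, gt, channel_axis=2, data_range=1.0)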

Visualization on ZJU-MoCap

Take the visualization on sequence 313 as an example. The commands for visualization are recorded in visualize.sh.

  1. Download the corresponding pretrained model and put it at $ROOT/data/trained_model/if_nerf/xyzc_313/latest.pth.

  2. Visualization:

    • Visualize novel views of a single frame
    python run.py --type visualize --cfg_file configs/zju_mocap_exp/latent_xyzc_313.yaml exp_name xyzc_313 vis_novel_view True
    


    • Visualize novel views of a single frame by rotating the SMPL model
    python run.py --type visualize --cfg_file configs/zju_mocap_exp/latent_xyzc_313.yaml exp_name xyzc_313 vis_novel_view True num_render_views 100
    


    • Visualize views of dynamic humans with a fixed camera
    python run.py --type visualize --cfg_file configs/zju_mocap_exp/latent_xyzc_313.yaml exp_name xyzc_313 vis_novel_pose True num_render_frame 1000 num_render_views 1
    


    • Visualize views of dynamic humans with a rotating camera
    python run.py --type visualize --cfg_file configs/zju_mocap_exp/latent_xyzc_313.yaml exp_name xyzc_313 vis_novel_pose True num_render_frame 1000
    


    • Visualize mesh (see the marching-cubes sketch after this list)
    # generate meshes
    python run.py --type visualize --cfg_file configs/zju_mocap_exp/latent_xyzc_313.yaml exp_name xyzc_313 vis_mesh True train.num_workers 0
    # visualize a specific mesh
    python tools/render_mesh.py --exp_name xyzc_313 --dataset zju_mocap --mesh_ind 0
    


  3. The results of visualization are located at $ROOT/data/render/xyzc_313 and $ROOT/data/perform/xyzc_313.
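
The vis_mesh step extracts an explicit surface from the learned density field. A minimal sketch of the idea with scikit-image's marching cubes (query_density is a hypothetical stand-in for evaluating the network; the threshold and resolution are illustrative):

    import numpy as np
    from skimage.measure import marching_cubes

    def extract_mesh(query_density, bounds_min, bounds_max,
                     resolution=128, threshold=10.0):
        # Sample the density field on a regular grid inside the body's bounds.
        axes = [np.linspace(bounds_min[i], bounds_max[i], resolution)
                for i in range(3)]
        grid = np.stack(np.meshgrid(*axes, indexing='ij'), axis=-1)
        sigma = query_density(grid.reshape(-1, 3))
        sigma = sigma.reshape(resolution, resolution, resolution)
        # Marching cubes at a density iso-surface gives vertices and faces.
        verts, faces, _, _ = marching_cubes(sigma, level=threshold)
        # Map voxel indices back to world coordinates.
        verts = bounds_min + verts / (resolution - 1) * (bounds_max - bounds_min)
        return verts, faces

    # Toy usage: a sphere of radius 0.5 as the "density field".
    sphere = lambda p: 20.0 * (np.linalg.norm(p, axis=1) < 0.5)
    verts, faces = extract_mesh(sphere, np.array([-1.0, -1.0, -1.0]),
                                np.array([1.0, 1.0, 1.0]))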

Training on ZJU-MoCap

Take the training on sequence 313 as an example. The commands for training are recorded in train.sh; a sketch of the distributed-training boilerplate follows the steps below.

  1. Train:
    # training
    python train_net.py --cfg_file configs/zju_mocap_exp/latent_xyzc_313.yaml exp_name xyzc_313 resume False
    # distributed training
    python -m torch.distributed.launch --nproc_per_node=4 train_net.py --cfg_file configs/zju_mocap_exp/latent_xyzc_313.yaml exp_name xyzc_313 resume False gpus "0, 1, 2, 3" distributed True
    
  2. Train with white background:
    # training
    python train_net.py --cfg_file configs/zju_mocap_exp/latent_xyzc_313.yaml exp_name xyzc_313 resume False white_bkgd True
    
  3. TensorBoard:
    tensorboard --logdir data/record/if_nerf
    
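For reference, the distributed command launches one process per GPU via torch.distributed.launch. A minimal sketch of the usual boilerplate on the training-script side (the standard PyTorch pattern, not a copy of this repo's trainer):

    import argparse
    import torch
    import torch.distributed as dist
    from torch.nn.parallel import DistributedDataParallel

    parser = argparse.ArgumentParser()
    # torch.distributed.launch passes --local_rank to each spawned process.
    parser.add_argument('--local_rank', type=int, default=0)
    args = parser.parse_args()

    torch.cuda.set_device(args.local_rank)
    dist.init_process_group(backend='nccl')  # one process per GPU

    model = torch.nn.Linear(16, 16).cuda()   # stand-in for the real network
    model = DistributedDataParallel(model, device_ids=[args.local_rank])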

Citation

If you find this code useful for your research, please use the following BibTeX entry.

@inproceedings{peng2021neural,
  title={Neural Body: Implicit Neural Representations with Structured Latent Codes for Novel View Synthesis of Dynamic Humans},
  author={Peng, Sida and Zhang, Yuanqing and Xu, Yinghao and Wang, Qianqian and Shuai, Qing and Bao, Hujun and Zhou, Xiaowei},
  booktitle={CVPR},
  year={2021}
}