An end-to-end library for editing and rendering motion of 3D characters with deep learning [SIGGRAPH 2020]

Overview

Deep-motion-editing


This library provides fundamental and advanced functions for working with 3D character animation in deep learning, built on PyTorch. The code contains end-to-end modules, from reading and editing animation files to visualizing and rendering them (using Blender).

The main deep editing operations provided here, motion retargeting and motion style transfer, are based on two works published in SIGGRAPH 2020:

Skeleton-Aware Networks for Deep Motion Retargeting: Project | Paper | Video


Unpaired Motion Style Transfer from Video to Animation: Project | Paper | Video


This library is written and maintained by Kfir Aberman, Peizhuo Li and Yijia Weng. The library is still under development.

Prerequisites

  • Linux or macOS
  • Python 3
  • CPU or NVIDIA GPU + CUDA CuDNN

Quick Start

We provide pretrained models together with demo examples using animation files in bvh format.

Motion Retargeting

Download and extract the test dataset from Google Drive or Baidu Disk (ye1q). Then place the Mixamo directory within retargeting/datasets.

To generate the demo examples with the pretrained model, run

cd retargeting
sh demo.sh

The results will be saved in retargeting/examples.

To reproduce the quantitative results with the pretrained model, run

cd retargeting
python test.py

The retargeted demo results, which consist of both intra-structural and cross-structural retargeting, will be saved in retargeting/pretrained/results.

Motion Style Transfer

To generate the demo examples, simply run

sh style_transfer/demo.sh

The results will be saved in style_transfer/demo_results, where each folder contains the raw output raw.bvh and the output after footskate clean-up fixed.bvh.

Train from scratch

We provide instructions for retraining our models.

Motion Retargeting

Dataset

We use the Mixamo dataset to train our model. You can download our preprocessed data from Google Drive or Baidu Disk (4rgv). Then place the Mixamo directory within retargeting/datasets.

Otherwise, if you want to download the Mixamo dataset yourself or use your own dataset, please follow the instructions below. Unless specifically mentioned, all scripts should be run from the retargeting directory.

  • To download Mixamo on your own, you can refer to this good tutorial. You will need to download the motions as fbx files (skin is not required) and make a subdirectory for each character in retargeting/datasets/Mixamo. In our original implementation we downloaded 60 fps fbx files and downsampled them to 30 fps (see the sketch after this list). Since training is unpaired, it is recommended to divide all motions into two equal-size sets for each group, and equal-size sets for each character in each group. If you use your own data, make sure that your dataset consists of bvh files with the same T-pose. You should also put your dataset in subdirectories of retargeting/datasets/Mixamo.

  • Enter the retargeting/datasets directory and run blender -b -P fbx2bvh.py to convert the fbx files to bvh files. If your dataset already consists of bvh files, please skip this step.

  • In our original implementation, we manually split three joints for skeletons in group A. If you want to follow our routine, run python datasets/split_joint.py. This step is optional.

  • Run python datasets/preprocess.py to simplify the skeleton by removing some less interesting joints (e.g., fingers) and to convert the bvh files into npy files. If you use your own data, you will need to define the simplified structure in retargeting/datasets/bvh_parser.py. This information is currently hard-coded; see the comments in the source file for more details, including the four steps needed to make your own dataset work.

  • The training and testing characters are hard-coded in retargeting/datasets/__init__.py. You will need to modify it if you want to use your own dataset.
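
As a minimal illustration of the 60 fps to 30 fps downsampling mentioned above, assuming the motion is stored as a frames-first array (the file names and array layout here are hypothetical, not the library's actual pipeline):

import numpy as np

# Hypothetical input: a (T, ...) array of motion frames sampled at 60 fps.
frames_60fps = np.load("motion_60fps.npy")

# Keeping every second frame halves the frame rate to 30 fps.
frames_30fps = frames_60fps[::2]
np.save("motion_30fps.npy", frames_30fps)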

Train

After preparing the dataset, simply run

cd retargeting
python train.py --save_dir=./training/

It will use the default hyper-parameters to train the model and save the trained model in the retargeting/training directory. More options are available in retargeting/option_parser.py. You can use tensorboard to monitor the training progress by running

tensorboard --logdir=./retargeting/training/logs/

Motion Style Transfer

Dataset

  • Download the dataset from Google Drive or Baidu Drive (zzck). The dataset consists of two parts: one is taken from the motion style transfer dataset proposed by Xia et al., and the other is our BFA dataset; both parts contain .bvh files retargeted to the standard skeleton of the CMU mocap dataset.

  • Extract the .zip files into style_transfer/data

  • Pre-process data for training:

    cd style_transfer/data_proc
    sh gen_dataset.sh

    This will produce xia.npz and bfa.npz in style_transfer/data (a quick way to inspect them is sketched below).
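
If you want to sanity-check the generated archives, the following sketch only lists their contents and makes no assumptions about the key names inside:

import numpy as np

# Open one of the generated archives and list what it contains.
data = np.load("style_transfer/data/xia.npz", allow_pickle=True)
for key in data.files:
    print(key, getattr(data[key], "shape", type(data[key])))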

Train

After downloading the dataset, simply run

python style_transfer/train.py

Style from videos

To run our models at test time with your own videos, you first need to use OpenPose to extract the 2D joint positions from the video, and then use the resulting JSON files as described in the demo examples.
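
As a rough sketch of reading that output, the per-frame JSON layout below follows OpenPose's documented format (the helper name is ours, not part of this library):

import json

import numpy as np

def load_openpose_frame(path):
    """Read the 2D body keypoints of one video frame from an OpenPose JSON file.

    Returns an (N, 3) array of (x, y, confidence) rows for the first
    detected person, or None if no person was detected.
    """
    with open(path) as f:
        frame = json.load(f)
    people = frame.get("people", [])
    if not people:
        return None
    keypoints = np.asarray(people[0]["pose_keypoints_2d"], dtype=np.float32)
    return keypoints.reshape(-1, 3)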

Blender Visualization

We provide a simple wrapper of Blender's Python API (2.80) for rendering 3D animations.

Prerequisites

The Blender releases distributed from blender.org include a complete Python installation on all platforms, which means that any packages installed in your system's Python won't be visible to Blender.

To use external Python libraries, you can install new packages directly into Blender's Python distribution (a sketch of this is given after the steps below). Alternatively, you can replace the default Blender Python interpreter as follows:

  1. Remove the built-in python directory: [blender_path]/2.80/python.

  2. Make a symbolic link to (or simply copy) a Python interpreter at [blender_path]/2.80/python, e.g. ln -s ~/anaconda3/envs/env_name [blender_path]/2.80/python

This interpreter should be Python 3.7.x and must include at least numpy and scipy.
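
For the first option (installing into Blender's own distribution), here is a minimal sketch that bootstraps pip inside the bundled interpreter; run it with blender -b -P install_deps.py (the script name is ours; bpy.app.binary_path_python is how Blender 2.8x exposes the bundled Python's path):

import subprocess

import bpy

# Path to Blender's bundled Python interpreter (Blender 2.8x).
py = bpy.app.binary_path_python

# Bootstrap pip, then install the packages the rendering scripts need.
subprocess.check_call([py, "-m", "ensurepip"])
subprocess.check_call([py, "-m", "pip", "install", "numpy", "scipy"])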

Usage

Arguments

Because Blender parses its own command-line arguments, the script's argument list must be separated from the Python file by an extra '--', for example:

blender -P render.py -- --arg1 [ARG1] --arg2 [ARG2]

engine: "cycles" or "eevee". Please refer to the Rendering section below for more details.

render: 0 or 1. If set to 1, the scene will be rendered without opening Blender's GUI. It is recommended to use render = 0 in case you need to manually adjust the camera.

The full parameters list can be displayed by: blender -P render.py -- -h
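
Inside the script, the arguments after '--' can be recovered before handing them to argparse; a minimal sketch (the defaults shown here are illustrative, not the script's actual defaults):

import argparse
import sys

# Blender consumes everything before "--"; the script's own arguments follow it.
argv = sys.argv[sys.argv.index("--") + 1:] if "--" in sys.argv else []

parser = argparse.ArgumentParser()
parser.add_argument("--engine", choices=["cycles", "eevee"], default="eevee")
parser.add_argument("--render", type=int, choices=[0, 1], default=0)
args = parser.parse_args(argv)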

Load bvh File (load_bvh.py)

To load example.bvh, run blender -P load_bvh.py. Please complete the preparation described in Prerequisites first.

Note that currently it uses primitive_cone with 5 vertices for limbs.

Note that Blender and bvh files use different xyz-coordinate systems: in a bvh file the "height" axis is the y-axis, while in Blender it is the z-axis. load_bvh.py swaps the axes in the BVH_file class initialization function.

Currently all the End Sites in the bvh file are discarded; this is due to the outside code used in utils/.

After loading the bvh file, its height is normalized to 10. A sketch of the axis swap and the height normalization is given below.
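
As a minimal sketch of these two conversions (our illustration of the idea, not the library's actual implementation):

import numpy as np

def yup_to_zup(points):
    """Rotate y-up (bvh) coordinates into z-up (Blender) coordinates.

    A 90-degree rotation about the x-axis maps (x, y, z) -> (x, -z, y),
    so the bvh "height" axis y becomes Blender's z-axis.
    """
    x, y, z = points[..., 0], points[..., 1], points[..., 2]
    return np.stack([x, -z, y], axis=-1)

def normalize_height(points, target=10.0):
    """Uniformly scale the character so its vertical (z) extent equals target."""
    height = points[..., 2].max() - points[..., 2].min()
    return points * (target / height)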

Material, Texture, Light and Camera (scene.py)

This file adds a checkerboard floor, a camera, and a "sun" light to the scene, and applies a basic color material to the character.

The floor is placed at y=0 and should be corrected manually if needed (this depends on the character parameters in the bvh file).

Rendering

We support 2 render engines provided in Blender 2.80: Eevee and Cycles, where the trade-off is between speed and quality.

Eevee (left) is a fast, real-time render engine that provides limited quality, while Cycles (right) is a slower, unbiased, ray-tracing render engine that produces photo-realistic results. Cycles also supports CUDA and OpenGL acceleration.

Skinning

Automatic Skinning

We provide a blender script that applies "skinning" to the output skeletons. You first need to download the fbx file that corresponds to the targeted character (for example, "mousey"). Then, you can get a skinned animation by simply running

blender -P blender_rendering/skinning.py -- --bvh_file [bvh file path] --fbx_file [fbx file path]

Note that the script might not work well for all the fbx and bvh files. If it fails, you can try to tweak the script or follow the manual skinning guideline below.

Manual Skinning

Here we provide a "quick and dirty" guideline for how to apply skin to the resulting bvh files, with blender:

  • Download the fbx file that corresponds to the retargeted character (for example, "mousey")
  • Import the fbx file to blender (uncheck the "import animation" option)
  • Merge meshes - select all the parts and merge them (ctrl+J)
  • Import the retargeted bvh file
  • Click "context" (menu bar) -> "Rest Position" (under skeleton)
  • Manually align the mesh and the skeleton (rotation + translation)
  • Select the skeleton and the mesh (the skeleton object should be highlighted)
  • Click Object -> Parent -> with automatic weights (or Ctrl+P)

Now the skeleton and the skin are bound and the animation can be rendered.

Acknowledgments

The code in the utils directory is mostly taken from Holden et al. [2016].
In addition, part of the MoCap dataset is taken from Adobe Mixamo and from the work of Xia et al..

Citation

If you use this code for your research, please cite our papers:

@article{aberman2020skeleton,
  author = {Aberman, Kfir and Li, Peizhuo and Sorkine-Hornung, Olga and Lischinski, Dani and Cohen-Or, Daniel and Chen, Baoquan},
  title = {Skeleton-Aware Networks for Deep Motion Retargeting},
  journal = {ACM Transactions on Graphics (TOG)},
  volume = {39},
  number = {4},
  pages = {62},
  year = {2020},
  publisher = {ACM}
}

and

@article{aberman2020unpaired,
  author = {Aberman, Kfir and Weng, Yijia and Lischinski, Dani and Cohen-Or, Daniel and Chen, Baoquan},
  title = {Unpaired Motion Style Transfer from Video to Animation},
  journal = {ACM Transactions on Graphics (TOG)},
  volume = {39},
  number = {4},
  pages = {64},
  year = {2020},
  publisher = {ACM}
}
"NAS-Bench-301 and the Case for Surrogate Benchmarks for Neural Architecture Search".

NAS-Bench-301 This repository containts code for the paper: "NAS-Bench-301 and the Case for Surrogate Benchmarks for Neural Architecture Search". The

AutoML-Freiburg-Hannover 57 Nov 30, 2022
BiSeNet based on pytorch

BiSeNet BiSeNet based on pytorch 0.4.1 and python 3.6 Dataset Download CamVid dataset from Google Drive or Baidu Yun(6xw4). Pretrained model Download

367 Dec 26, 2022
Active learning for Mask R-CNN in Detectron2

MaskAL - Active learning for Mask R-CNN in Detectron2 Summary MaskAL is an active learning framework that automatically selects the most-informative i

49 Dec 20, 2022
Distributed Evolutionary Algorithms in Python

DEAP DEAP is a novel evolutionary computation framework for rapid prototyping and testing of ideas. It seeks to make algorithms explicit and data stru

Distributed Evolutionary Algorithms in Python 4.9k Jan 05, 2023
Matthew Colbrook 1 Apr 08, 2022
Real time Human Detection Counting

In this python project, we are going to build the Human Detection and Counting System through Webcam or you can give your own video or images. This is a deep learning project on computer vision, whic

Mir Nawaz Ahmad 2 Jun 17, 2022
Mini-hmc-jax - A simple implementation of Hamiltonian Monte Carlo in JAX

mini-hmc-jax This is a simple implementation of Hamiltonian Monte Carlo in JAX t

Martin Marek 6 Mar 03, 2022
The official PyTorch implementation for NCSNv2 (NeurIPS 2020)

Improved Techniques for Training Score-Based Generative Models This repo contains the official implementation for the paper Improved Techniques for Tr

174 Dec 26, 2022
BMN: Boundary-Matching Network

BMN: Boundary-Matching Network A pytorch-version implementation codes of paper: "BMN: Boundary-Matching Network for Temporal Action Proposal Generatio

qinxin 260 Dec 06, 2022
Manifold Alignment for Semantically Aligned Style Transfer

Manifold Alignment for Semantically Aligned Style Transfer [Paper] Getting Started MAST has been tested on CentOS 7.6 with python = 3.6. It supports

35 Nov 14, 2022
Extracts essential Mediapipe face landmarks and arranges them in a sequenced order.

simplified_mediapipe_face_landmarks Extracts essential Mediapipe face landmarks and arranges them in a sequenced order. The default 478 Mediapipe face

Irfan 13 Oct 04, 2022
An off-line judger supporting distributed problem repositories

Thaw 中文 | English Thaw is an off-line judger supporting distributed problem repositories. Everyone can use Thaw release problems with license on GitHu

countercurrent_time 2 Jan 09, 2022
CoaT: Co-Scale Conv-Attentional Image Transformers

CoaT: Co-Scale Conv-Attentional Image Transformers Introduction This repository contains the official code and pretrained models for CoaT: Co-Scale Co

mlpc-ucsd 191 Dec 03, 2022
Leaf: Multiple-Choice Question Generation

Leaf: Multiple-Choice Question Generation Easy to use and understand multiple-choice question generation algorithm using T5 Transformers. The applicat

Kristiyan Vachev 62 Dec 20, 2022
Serve TensorFlow ML models with TF-Serving and then create a Streamlit UI to use them

TensorFlow Serving + Streamlit! ✨ 🖼️ Serve TensorFlow ML models with TF-Serving and then create a Streamlit UI to use them! This is a pretty simple S

Álvaro Bartolomé 18 Jan 07, 2023
Code for Learning Manifold Patch-Based Representations of Man-Made Shapes, in ICLR 2021.

LearningPatches | Webpage | Paper | Video Learning Manifold Patch-Based Representations of Man-Made Shapes Dmitriy Smirnov, Mikhail Bessmeltsev, Justi

Dima Smirnov 22 Nov 14, 2022
A little software to generate and save Julia or Mandelbrot's Fractals.

Julia-Mandelbrot-s-Fractals A little software to generate and save Julia or Mandelbrot's Fractals. Dependencies : Python 3.7 or more. (Also possible t

Olivier 0 Jul 09, 2022
The self-supervised goal reaching benchmark introduced in Discovering and Achieving Goals via World Models

Lexa-Benchmark Codebase for the self-supervised goal reaching benchmark introduced in 'Discovering and Achieving Goals via World Models'. Setup Create

1 Oct 14, 2021
Find-Lane-Line - Use openCV library and Python to detect the road-lane-line

Find-Lane-Line This project is to use openCV library and Python to detect the road-lane-line. Data Pipeline Step one : Color Selection Step two : Cann

Kenny Cheng 3 Aug 17, 2022
A simple and useful implementation of LPIPS.

lpips-pytorch Description Developing perceptual distance metrics is a major topic in recent image processing problems. LPIPS[1] is a state-of-the-art

So Uchida 121 Dec 24, 2022