Human motion synthesis using Unity3D

Overview

This document describes a pipeline for synthesizing human-motion videos in Unity3D: motion-capture files (amc/asf/bvh) are converted to fbx animations via Blender, applied to humanoid character models, and recorded with RockVR Video Capture across variations in character, camera angle, scene, and lighting.

Prerequisites

  • Software: amc2bvh.exe, Unity 2017, Blender
  • Unity assets: RockVR (Video Capture), scenes, character models
  • Files:
    • Motion files: amc, asf, or bvh format
    • Character models: fbx format

Procedure

  1. If the motion files are in amc/asf format, run amc2bvh.exe to convert them to bvh.
  2. Place all bvh files into "Desktop/New folder/bvh" (or modify the script to look elsewhere).
  3. Open Blender and run the bvh2fbx.py script. It converts the motion files to fbx format, which Unity can process, and places them under the Unity "Resources/Input" folder.[1]
  4. Find each imported motion file in Unity and change its Animation Type to Humanoid under the Rig tab. Check that the skeleton is mapped properly.
  5. Configure the variations to record (characters, camera angle, scene, lighting):
    1. For characters, add[2] or remove children of the "characters" GameObject in the Unity Editor as desired. For each new character added to the scene, assign the "New Animator Controller"[3] from Assets to the character's Controller field in its Animator component.
    2. For the camera, move the DedicatedCapture GameObjects to the desired locations. Add additional DedicatedCapture GameObjects for more angles. See the RockVR Video Capture documentation for details.
    3. For scenes, check the desired scenes within the intro scene and run.
    4. For lighting, change the "lights" parameter in the Automation.cs script; add more values to the array for more variations in lighting angle (see the sketch after this list).
  6. Open the "intro" scene and run it from the Unity Editor. Click the "Start" button to start the process.
  7. Set the desired resolution and framerate and click Start. For the initial run, leave all the counters at 0; for continuing runs, enter the counter values where the previous run left off. The videos are recorded to "Documents/RockVR/Video".[4]
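The Automation.cs script itself is not included in this document, so the following is only a minimal hypothetical sketch of how its "lights" array and resume counters might be laid out; every name other than "lights" is an illustrative assumption.

```csharp
using UnityEngine;

// Hypothetical excerpt of Automation.cs: only the "lights" field name comes
// from this document; the rest is an illustrative assumption.
public class Automation : MonoBehaviour
{
    // Lighting angles (degrees); add more values for more lighting variations.
    public float[] lights = { 30f, 60f, 90f };

    // Resume counters: leave at 0 for a fresh run, or set them in the
    // "intro" scene to continue where a previous run left off.
    public int lightCounter = 0;
    public int clipCounter = 0;

    public Light directionalLight; // assign the scene's "Directional light"

    // Apply the current lighting variation by tilting the directional light.
    void ApplyLighting()
    {
        directionalLight.transform.rotation =
            Quaternion.Euler(lights[lightCounter], 0f, 0f);
    }
}
```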

Notes

  • [1] Converting too many bvh files at a time may cause Blender to crash. Try converting them in smaller batches (~50).
  • [2] To add a GameObject to a Scene in Unity, drag it from the Assets panel into the Hierarchy panel or into the scene itself. You can also create an empty GameObject via "GameObject->Create Empty".
  • [3] Depending on the framerate of the motion files, you may need to adjust the speed of the animation. To do this, go to "Assets", find and open the "New Animator Controller", then click on "New State" and set its Speed to framerate/24 (5 for 120-fps files, 2.5 for 60-fps files, etc.). Also find the line "timeLeft = ((AnimationClip)clips[clipCounter]).length;" in the SwitchAnimation function and divide it by the speed (see the sketch after this list).
  • [4] Unity will most likely freeze or crash if left running for too long. Adjust the counters in the "intro" scene to resume progress.
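Note [3] quotes one line of the script; below is a hedged sketch showing that line with the division applied. The surrounding field declarations are assumptions added only so the snippet compiles on its own.

```csharp
using UnityEngine;

public class SwitchAnimationExcerpt : MonoBehaviour
{
    // Names taken from the quoted line; the types are assumptions.
    public Object[] clips;      // AnimationClips loaded from Resources/Input
    public int clipCounter;
    private float timeLeft;

    // Speed set on the animator's "New State": framerate / 24
    // (5 for 120-fps motion files, 2.5 for 60-fps files).
    public float speed = 5f;

    void SwitchAnimation()
    {
        // The quoted line, divided by the playback speed so the countdown
        // matches the sped-up clip's real duration:
        timeLeft = ((AnimationClip)clips[clipCounter]).length / speed;
    }
}
```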

Scene Creation procedure

  1. To get a scene, either download a pre-built one or build one yourself from various 3D models as GameObjects.
  2. Create an empty GameObject named "characters" and place it at a location best suited for recording. Add a character to it to see whether any adjustment or scaling is needed.
  3. Add DedicatedCapture GameObjects from the "RockVR/Video/Prefabs" folder to the scene in desired locations.
  4. Attach the AudioCapture script from the "RockVR/Video/Scripts" folder to the main camera.
  5. Create an empty GameObject named "VideoCaptureCtrl" and attach the VideoCaptureCtrl script from "RockVR/Video/Scripts" to it, along with the Automation.cs script from "Scripts".
  6. Add the first DedicatedCapture GameObject as well as the AudioCapture to the VideoCaptureCtrl script.
  7. If there is no "Directional light" GameObject, create one.
  8. Add the created scene to build settings.
  9. Add a checkbox in the intro scene for the newly created scene and modify the scene's "ProcessParameter" accordingly (a hypothetical sketch follows this list).
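The intro scene's "ProcessParameter" is named in this document but not shown, so the following is only a loose sketch under that assumption; the toggle, method, and scene name are all hypothetical.

```csharp
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.UI;

// Hypothetical sketch: every member here is an assumption about how a new
// scene's checkbox might be wired into "ProcessParameter".
public class ProcessParameter : MonoBehaviour
{
    public Toggle newSceneToggle;                 // the checkbox from step 9
    public List<string> selectedScenes = new List<string>();

    public void CollectScenes()
    {
        // Queue the new scene only when its checkbox is ticked; the string
        // must match the scene name added to Build Settings in step 8.
        if (newSceneToggle.isOn)
            selectedScenes.Add("MyNewScene");
    }
}
```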

Additional characters

In the "characters" folder in Assets, there is a list of preprocessed characters I got from the Unity asset store for free.
To process new characters:

  1. Change its Animation Type to Humanoid under Rig.
  2. Fix any mapping problems for the character's bones.
  3. Remove the mapping on the bones of both hands; this can be done using the "New Human Template" in the Assets folder. (This avoids weird finger mapping from the animations.)

Instructions on error handling

  • If you terminate the program inside the Unity Editor, ffmpeg.exe will keep running, leaving unfinished video and audio files in the videos folder. To fix this, terminate ffmpeg.exe from Task Manager (e.g. `taskkill /IM ffmpeg.exe /F` from a command prompt) and delete the unfinished files.
  • Since the program freezes fairly often, a temporary save-state feature is implemented. If Unity freezes, terminate it from Task Manager, look in the videos folder to determine which combination the next video should be, and enter those values (the various counters) in the "intro" scene to pick up from there.

Local environment specs

  • OS: Microsoft Windows 10 Pro
  • Version: 10.0.16299 Build 16299
  • Processor: Intel(R) Xeon(R) CPU E5-2630 v4 @ 2.20GHz, 2201 MHz, 10 Core(s), 20 Logical Processor(s)
  • Total Physical Memory: 63.9 GB
  • GPU: NVIDIA Quadro M5000
Owner

Hao Xu