Learning Time-Critical Responses for Interactive Character Control

Overview


Abstract

This code implements the paper Learning Time-Critical Responses for Interactive Character Control. The system uses a teacher-student framework to learn time-critically responsive policies, which guarantee the time-to-completion between user inputs and their associated responses regardless of the size and composition of the motion databases. The code is written in Java and Python, based on TensorFlow 1.14.
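
To make the teacher-student idea concrete, the following is a minimal sketch of the student side: a small network trained by supervised regression to imitate actions produced by a teacher policy. It is written against TensorFlow 1.14 (the version installed below); the dimensions, layer sizes, and random batch are hypothetical placeholders, not the architecture or data pipeline used in the paper.

    # Minimal teacher-student distillation sketch (illustrative only).
    # STATE_DIM, ACTION_DIM, and the network shape are hypothetical.
    import numpy as np
    import tensorflow as tf  # tensorflow-gpu==1.14.0

    STATE_DIM, ACTION_DIM = 64, 16

    state = tf.placeholder(tf.float32, [None, STATE_DIM])
    teacher_action = tf.placeholder(tf.float32, [None, ACTION_DIM])

    # Student policy: a small fully connected network.
    hidden = tf.layers.dense(state, 128, activation=tf.nn.relu)
    student_action = tf.layers.dense(hidden, ACTION_DIM)

    # Supervised distillation loss: imitate the teacher's output.
    loss = tf.reduce_mean(tf.squared_difference(student_action, teacher_action))
    train_op = tf.train.AdamOptimizer(1e-4).minimize(loss)

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        # A random batch stands in for states/actions collected from the teacher.
        s = np.random.randn(32, STATE_DIM).astype(np.float32)
        a = np.random.randn(32, ACTION_DIM).astype(np.float32)
        sess.run(train_op, {state: s, teacher_action: a})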

Publications

Kyungho Lee, Sehee Min, Sunmin Lee, and Jehee Lee. 2021. Learning Time-Critical Responses for Interactive Character Control. ACM Trans. Graph. 40, 4, 147. (SIGGRAPH 2021)

Project page: http://mrl.snu.ac.kr/research/ProjectAgile/Agile.html

Paper: http://mrl.snu.ac.kr/research/ProjectAgile/AGILE_2021_SIGGRAPH_author.pdf

Youtube: https://www.youtube.com/watch?v=rQKuvxg5ZHc

How to install

This code is implemented with Java and Python, and was developed using Eclipse on Windows. A Windows 64-bit environment is required to run the code.

Requirements

Install JDK 1.8

Java SE Development Kit 8 Downloads

Install Eclipse

Install Eclipse IDE for Java Developers

Install Python 3.6

https://www.python.org/downloads/release/python-368/

Install PyDev in Eclipse

https://www.pydev.org/download.html

Install CUDA 10.0 and cuDNN

CUDA Toolkit 10.0 Archive

NVIDIA cuDNN

Install Visual C++ Redistributable for VS2012

Laplacian Motion Editing (PmQmJNI.dll) is implemented in C++, and the Visual C++ 2012 runtime is required to run it.

Visual C++ Redistributable for Visual Studio 2012 Update 4

Install JEP (Java Embedded Python)

Java Embedded Python

This library requires part of a Visual Studio installation. The exact components are unclear, but likely candidates are .NET Framework 3.5 and VC++ 2015.3 v14.00 (v140). Installing Visual Studio 2017 or later may help.
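
For reference, JEP is distributed on PyPI and is compiled from source during installation, which is why the Visual Studio components above matter:

pip install jep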

Install TensorFlow 1.14.0

pip install tensorflow-gpu==1.14.0
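
After installation, a quick sanity check (assuming CUDA 10.0 and cuDNN are set up correctly) is to confirm that TensorFlow can see the GPU:

python -c "import tensorflow as tf; print(tf.test.is_gpu_available())"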

Install this repository

We recommend cloning the repository through Git in the Eclipse environment.

  1. Open the Git Perspective in Eclipse.
  2. Paste the repository URL and clone the repository ( 'https://git.ncsoft.net/scm/private_khlee/private-khlee-test.git' ).
  3. Select all projects in the Working Tree.
  4. Right-click, select Import Projects, and import the existing Eclipse projects.

Or you can download the repository as a Zip file, extract it, and import it using File->Import->General->Existing Projects into Workspace in Eclipse.
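
If you prefer the plain command line, the equivalent clone (using the same repository URL as above) is:

git clone https://git.ncsoft.net/scm/private_khlee/private-khlee-test.git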

Install third party library

This code uses Interactive Character Animation by Learning Multi-Objective Control to learn the student policy.

Download the required third-party library files (ThirdPartyDlls.zip) and extract them to the mrl.motion.critical folder.

Dataset

The full dataset used in the paper cannot be published due to copyright issues; this repository contains only a minimal motion dataset for algorithm validation. The SNU Motion Database was used for the martial arts movements, and the CMU Motion Database for locomotion.

How to run

Eclipse

All of the instructions below assume execution from Eclipse. Executable Java files are grouped in the package mrl.motion.critical.run of the project mrl.motion.critical.

  • You can open a source file directly with Ctrl+Shift+R.
  • You can run the currently open source file with Ctrl+F11.
  • You can configure program arguments in the Run->Run Configurations menu.

Pre-trained student policy

You can see the pre-trained network by running RuntimeMartialArtsControlModule.java. The pre-trained network file is located at mrl.python.neural\train\martial_arts_sp_da.

  • 1, 2: walk, run
  • 3, 4, 5, 6: martial arts actions
  • q, w, e, r, t: control critical response time

How to train

  1. Data Annotation & Configuration
    • You can check the motion data list and annotation information by executing MAnnotationRun.java.
  2. Model Configuration
    • The action list, the critical response time of each action, the user input model, and the error metric are defined in MartialArtsConfig.java.
  3. Preprocessing
    • You can precompute the data table for pruning by executing DP_Preprocessing.java.
    • The data file will be located at mrl.motion.critical\output\dp_cache.
  4. Training the teacher policy
    • You can train the teacher policy by executing LearningTeacherPolicy.java.
    • The result will be located at mrl.motion.critical\train_rl.
  5. Generating training data for the student policy
    • You can generate the training data for the student policy by executing StudentPolicyDataGeneration.java.
    • The result will be located at mrl.python.neural\train.
  6. Training the student policy
    • You can train the student policy by executing mrl.python.neural\train_rl.py.
    • You need to set program arguments in the Run->Run Configurations menu (see the command-line example after this list).
      • argument format example: martial_arts_sp new 0.0001
  7. Running the student policy
    • You can see the trained student policy by running RuntimeMartialArtsControlModule.java.
    • This class will load the student policy located at mrl.python.neural\train.
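
If you prefer running the student training from a terminal instead of Eclipse, the equivalent invocation (assuming the working directory is mrl.python.neural; judging from the example above, the three positional arguments appear to be the training name, a new/continue flag, and the learning rate) would be:

python train_rl.py martial_arts_sp new 0.0001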

Owner

Movement Research Lab

Our research group explores new ways of understanding, representing, and animating human movements.