An addon that uses SMPL poses and global translation to drive a cartoon character in Blender.

Overview

A Blender addon for driving a character

The addon drives the cartoon character by passing SMPL poses and global translation into the model's armature in Blender. The poses and global translation can be obtained from ROMP or any other 3D pose estimation model. If that model outputs poses and global translation at a high FPS, you can drive the cartoon character in Blender in real time.

Demo


The first demo uses ROMP outputs from a video, stored in a file.

The second demo uses ROMP outputs from the webcam in real time.

How to Use the Addon

Data Requester

This addon is a data requester: it sends a data request over TCP to 127.0.0.1:9999 and gets one data packet at a time from the data server.

After you start the addon by pressing Ctrl+E in Blender, it keeps requesting data until the server closes the TCP connection. You can also press A to close the TCP connection.
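
If you want to check a data server without opening Blender, a tiny test client can stand in for the addon. The sketch below is only an illustration: it assumes the reply is a pickled Python list in the format described under Data Format, and that any small request message triggers one reply; the addon's actual request message and wire format are defined by its source code.

    # Hypothetical test client standing in for the addon (illustration only).
    # Assumes the server replies to any small request with one pickled Python
    # list [mode, poses, global translation, current keyframe id].
    import pickle
    import socket

    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as client:
        client.connect(("127.0.0.1", 9999))
        client.sendall(b"request")     # placeholder request message
        reply = client.recv(65536)     # one small packet usually arrives in a single recv on localhost
        mode, poses, trans, keyframe_id = pickle.loads(reply)
        print(mode, len(poses), trans, keyframe_id)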

Data Server

The data server is bound to 127.0.0.1:9999. After receiving a request from the data requester, it sends one data packet back to the requester.

I've written server.py as an example data server (you only need to know a little about TCP in Python to understand it).

A real-time data server can be found in ROMP; you just need to run webcam_blender.sh.
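
For reference, the sketch below shows the general shape of such a data server. It is not the code from server.py or webcam_blender.sh; it assumes a pickled Python list as the payload (see Data Format below) and sends dummy poses, so treat it purely as an illustration of the request/response loop.

    # Hypothetical data server (illustration only; see server.py in this repo
    # for the real example). Replies to each request with one pickled Python
    # list [mode, poses, global translation, current keyframe id].
    import pickle
    import socket

    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server:
        server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        server.bind(("127.0.0.1", 9999))
        server.listen(1)
        conn, _ = server.accept()
        with conn:
            while True:
                request = conn.recv(1024)        # wait for the addon's request
                if not request:                  # requester closed the connection
                    break
                packet = [0, [0.0] * 72, [0.0, 0.0, 0.0], 0]   # dummy SMPL data
                conn.sendall(pickle.dumps(packet))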

Data Format

The data is a Python list of four elements in the form [mode, poses, global translation, current keyframe id] (a short example follows this list).

  1. Mode is an integer: 1 means insert a keyframe, 0 means don't. With keyframes inserted, you can render the animation later. Keyframes are generally not inserted in real-time mode, which keeps real-time driving of the cartoon character smoother.
  2. Poses is a list of length 72 (the SMPL pose parameters).
  3. Global translation is a list of length 3. If you don't need global translation, just pass [0, 0, 0].
  4. Current keyframe id is an integer. If you insert keyframes, set it to the correct keyframe id; if you don't, just set it to 0.
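
As a concrete illustration of the format above (the values here are dummies, not output from a real estimator):

    # Two hypothetical packets in the format [mode, poses, global translation,
    # current keyframe id]; the pose and translation values are dummies.
    poses = [0.0] * 72       # 72 SMPL pose parameters
    trans = [0.1, 0.0, 0.0]  # 3-element global translation

    realtime_packet = [0, poses, [0.0, 0.0, 0.0], 0]  # mode 0: no keyframe; [0, 0, 0] when translation is not needed
    offline_packet = [1, poses, trans, 42]            # mode 1: insert a keyframe with id 42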

Steps

  1. Install the addon in Blender
  2. Run the data server
  3. Press Ctrl+E in Blender to run the addon
  4. Press A in Blender to stop the addon, or wait until the data transfer is complete

In step 3, you should select the Armature first, otherwise bugs may occur. Also, the mouse cursor must be placed in the 3D Viewport area (where the model is), otherwise the addon will not run.

Something about Blender

If you're not familiar with Blender, I've placed a Blender project in the resources folder to help you. All you need to do is open it and follow the Steps to achieve the effect shown in the Demo. (It helps to know something about animation in Blender.)

If you need a video background in the demo, select Compositing in the top menu bar, click Open Clip in the Movie Clip node, and select your video.

Figure 2

If you are familiar with Blender and want to use your own models, make sure the model's armature is the SMPL skeleton. The armature should be named Armature, and each bone should have the same name as the corresponding bone in the demo model (only the 24 bones of the SMPL skeleton are needed; the fingers don't need to be renamed).

Figure 3
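
To double-check your own rig before running the addon, you can run a quick sanity check in Blender's Python console. This is only a sketch; the 24 bone names below are the SMPL joint names used by the demo model (they also appear as the values of the Mixamo mapper quoted in the comments further down).

    # Sanity check for a custom model: run in Blender's Python console.
    # Verifies that an object named "Armature" exists and that it contains
    # the 24 SMPL bone names used by the demo model.
    import bpy

    SMPL_BONES = [
        "Pelvis", "L_Hip", "R_Hip", "Spine1", "L_Knee", "R_Knee", "Spine2",
        "L_Ankle", "R_Ankle", "Spine3", "L_Foot", "R_Foot", "Neck",
        "L_Collar", "R_Collar", "Head", "L_Shoulder", "R_Shoulder",
        "L_Elbow", "R_Elbow", "L_Wrist", "R_Wrist", "L_Hand", "R_Hand",
    ]

    armature = bpy.data.objects.get("Armature")   # the addon expects this exact name
    assert armature is not None, "No object named 'Armature' found"

    existing = {bone.name for bone in armature.data.bones}
    missing = [name for name in SMPL_BONES if name not in existing]
    print("Missing SMPL bones:", missing or "none")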

Comments
  • How to run .fbx file to control the character

    Hello. I have successfully run the demo of ROMP, which exported an .fbx file. Currently I want to use the .fbx to control the character. Can you provide steps for the video demo? I can only see the camera one.

    opened by CheungBH 31
  • [simple-romp] How to use it?

    I tried to use it with simple-romp but it did not work. I already created an issue at the ROMP repository and described my problem here and here in detail.

    The author of the ROMP repository answered there:

    About live blender driving, please refer to this repo: https://github.com/yanch2116/CharacterDriven-BlenderAddon My colleague is responsible for maintaining this function now. Best regards.

    and closed the issue.

    @yanch2116 So how to solve it?

    My goal is to send the positions and quaternions of ROMP (or any other SMPL-based solution) over the VMC protocol to another application (not Blender). Any ideas what's the best way to do it?

    opened by vivi90 26
  • Could I ask for a detailed instruction on how to change the 3D Character?

    Also, I would like to know if there is a way to replace the 3D Character before I run the script and process the video so that I don't have to manually change it every time?

    opened by XXZhe 12
  • Can't connect ROMP to the addon

    I use ROMP v1.1 on a Windows machine, but CDBA works with version 1.0 of ROMP. I read the installation and usage documentation of ROMP v1.0, but I couldn't figure it all out. How can I use this addon properly? I'm not good at programming; I'm an animator interested in your project. Can you write steps for how to install and use ROMP with this addon, please?


    opened by hasanleiva 11
  • Hi, looks like changing to another Mixamo character doesn't work

    I have ROMP driving the SMPL model and it looks OK, but when using CDBA (I'm using the script locally) to drive a Mixamo model, the result looks wrong:


    I am using this bone mapper in your repo:

    bones_mixamo_smpl_mapper = {
        "Hips": "Pelvis",
        "LeftUpLeg": "L_Hip",
        "RightUpLeg": "R_Hip",
        "Spine2": "Spine3",
        "Spine1": "Spine2",
        "Spine": "Spine1",
        "LeftLeg": "L_Knee",
        "RightLeg": "R_Knee",
        "LeftFoot": "L_Ankle",
        "RightFoot": "R_Ankle",
        "LeftToeBase": "L_Foot",
        "RightToeBase": "R_Foot",
        "Neck": "Neck",
        "LeftShoulder": "L_Collar",
        "RightShoulder": "R_Collar",
        "Head": "Head",
        "LeftArm": "L_Shoulder",
        "RightArm": "R_Shoulder",
        "LeftForeArm": "L_Elbow",
        "RightForeArm": "R_Elbow",
        "LeftHand": "L_Wrist",
        "RightHand": "R_Wrist",
        "LeftHandIndex1": "L_Hand",
        "LeftHandMiddle1": "L_Hand",
        "RightHandMiddle1": "R_Hand",
        "RightHandIndex1": "R_Hand",
    }
    bones_smpl_mixamo_mapper = {v: k for k, v in bones_mixamo_smpl_mapper.items()}
    bone_name_from_index_character = {
        k: bones_smpl_mixamo_mapper[v] for k, v in bone_name_from_index.items()
    }
    
    

    Do you know why?

    Also, the hands don't look right.


    opened by jinfagang 11
  • Detection variable: outputs = {'poses': poses, 'trans': trans[0]}

    Hey, Yanchxx,

    May I ask what the output variable is? I think "trans" is the translation between the camera and the object.

    What are the poses? Are they locations (x, y, z) or rotations (x, y, z, in degrees)?

    Thank you 👍

    opened by zhangby2085 10
  • Bone orientations are messed up when importing the fbx

    @yanch2116 Hello, very cool work! I've basically got the whole project running, but I have two questions I'd like to ask you. 1. When importing the fbx, the bone orientations get messed up; see https://www.bilibili.com/read/cv2520452. The method in that article doesn't fully solve the problem, so how did you solve it? 2. How do I edit a skeleton so that it matches the SMPL skeleton? Any advice would be much appreciated. Many thanks!

    opened by syguan96 8
  • Use keyframes to prevent pose shaking

    I found that the avatar shakes drastically when running the webcam demo, so I set mode = 1 to insert keyframes and record the webcam results for playback.

    Now, if we add one more condition for inserting keyframes, inserting only when frame_idx % 3 == 0 or frame_idx % 5 == 0, the avatar moves along these keyframes much more smoothly.

    However, is there a way to let the webcam demo run in real time using the keyframe strategy I described? This keyframe-skipping strategy only seems to work with recorded playback; the character still moves on every frame when we are actually running in real time.

    opened by ZhengdiYu 4
  • Can I rotate the scene while running scripts?

    Hi, I'm able to run the demo in Blender now, but it turns out that the view is locked while the script is running.

    Is there a way to enable rotation?

    opened by anzisheng 4
  • Which version of Blender are you using?

    I met the following error while running Beta.blend. I'm using Blender 2.83.9. What's the expected version?

    Read blend: E:\Workspace\blender_test\addons\CharacterDriven-BlenderAddon-master\blender\Beta.blend 0 meshes freed Error: File written by newer Blender binary (290.0), expect loss of data!

    opened by sylyt62 4
  • multiple people

    Hey, yanch2116, very impressive work on visualizing 3D characters. I tested it and it works. One question: the code romp_server.py has setting.show_largest=True. I'm wondering what happens with multiple people in the webcam view: when there are many people, the Blender character keeps shifting. Are there ways to solve this? 1. Keep the object tracker on a single object ID. 2. Visualize multiple 3D characters as input from the camera.

    opened by zhangby2085 3