Deep motion transfer

Overview

animation-with-keypoint-mask

Paper: https://arxiv.org/abs/2112.10457

The rightmost square in each example below is the final result.

Softmax mask (circles):

[figure: animation with the softmax keypoint mask]

Heatmap mask:

[figure: animation with the heatmap keypoint mask]

installation

conda env create -f environment.yml
conda activate venv11

We use pytorch 1.7.1 with python 3.8.

You will also need the pretrained keypoint module. You can obtain it by running

git checkout fomm-new-torch

and following the instructions in the README of that branch, or download a pre-trained checkpoint from
https://github.com/AliaksandrSiarohin/first-order-model
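As a quick sanity check that the environment matches, you can run the following (a minimal sketch, not part of the repo):

import sys
import torch

# We expect python 3.8.x and pytorch 1.7.1, per the setup above.
print(sys.version)
print(torch.__version__)
# Training assumes CUDA-capable GPUs are visible.
print(torch.cuda.is_available())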

training

To train a model on a specific dataset, run:

CUDA_VISIBLE_DEVICES=0,1,2,3 python run.py --config config/dataset_name.yaml --device_ids 0,1,2,3 --checkpoint_with_kp path/to/checkpoint/with/pretrained/kp

For example, use taichi-256-q.yaml for the keypoint heatmap mask model, or taichi-256-softmax-q.yaml for drawn circular keypoints instead.

The code will create a folder in the log directory (each run creates a new time-stamped directory), and checkpoints will be saved to this folder. To check the loss values during training, see log.txt. You can also check training-data reconstructions in the train-vis sub-folder. By default, the batch size is tuned to run on 4 Titan-X GPUs (apart from speed, the GPU count does not make much difference). You can change the batch size in the train_params section of the corresponding .yaml file.
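For instance, the batch size can be inspected or changed programmatically with PyYAML (a minimal sketch; it assumes the batch size lives under train_params -> batch_size, which should be verified against the shipped configs):

import yaml

config_path = 'config/taichi-256-q.yaml'
with open(config_path) as f:
    cfg = yaml.safe_load(f)

# Assumed key layout: train_params -> batch_size.
print('current batch size:', cfg['train_params']['batch_size'])
cfg['train_params']['batch_size'] = 16  # e.g. to fit fewer or smaller GPUs

# Note: round-tripping with safe_dump drops any comments from the file.
with open(config_path, 'w') as f:
    yaml.safe_dump(cfg, f, sort_keys=False)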

evaluation on video reconstruction

To evaluate the reconstruction of the driving video from its first frame, run:

CUDA_VISIBLE_DEVICES=0 python run.py --config config/dataset_name.yaml --mode reconstruction --checkpoint path/to/checkpoint --checkpoint_with_kp path/to/checkpoint/with/pretrained/kp

You will need to specify the path to the checkpoint. A reconstruction sub-folder will be created inside the checkpoint folder; the generated videos will be stored there, and loss-less '.png' versions will be stored in the png sub-folder for evaluation. Instructions for computing the metrics from the paper can be found at https://github.com/aliaksandrsiarohin/pose-evaluation.
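The metrics from the paper should be computed with the pose-evaluation repo linked above; for a quick sanity check, a mean L1 error over the loss-less '.png' frames can be computed directly (a minimal sketch: both directory paths are hypothetical, and it assumes matching file names for generated and ground-truth frames):

import os
import numpy as np
from imageio import imread

gen_dir = 'path/to/checkpoint/reconstruction/png'  # hypothetical path
gt_dir = 'path/to/ground_truth/png'                # hypothetical path

errors = []
for name in sorted(os.listdir(gen_dir)):
    # Normalize both frames to [0, 1] and accumulate the per-pixel L1 error.
    gen = imread(os.path.join(gen_dir, name)).astype(np.float32) / 255.0
    gt = imread(os.path.join(gt_dir, name)).astype(np.float32) / 255.0
    errors.append(np.abs(gen - gt).mean())

print('mean L1:', float(np.mean(errors)))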

image animation

To animate a source image with the motion from a driving video, run:

CUDA_VISIBLE_DEVICES=0 python run.py --config config/dataset_name.yaml --mode animate --checkpoint path/to/checkpoint --checkpoint_with_kp path/to/checkpoint/with/pretrained/kp

You will need to specify the path to the checkpoint. An animation sub-folder will be created in the same folder as the checkpoint; you can find the generated videos there, and their loss-less versions in the png sub-folder. By default, videos from the test set are paired randomly, but you can specify "source,driving" pairs in the corresponding .csv files. The path to this file should be specified in the pairs_list setting of the corresponding .yaml file.
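For example, a pairs file can be written like this (a minimal sketch; the output file name and video names are hypothetical, and the 'source'/'driving' column names should be checked against the example pairs files shipped with the repo):

import csv

# Hypothetical test-set file names; each row pairs a source with a driving video.
pairs = [
    ('source_video_a.mp4', 'driving_video_b.mp4'),
    ('source_video_c.mp4', 'driving_video_d.mp4'),
]

with open('data/dataset_name_pairs.csv', 'w', newline='') as f:
    writer = csv.writer(f)
    writer.writerow(['source', 'driving'])
    writer.writerows(pairs)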

datasets

  1. Taichi. Follow the instructions in data/taichi-loading or the instructions from https://github.com/aliaksandrsiarohin/video-preprocessing.

training on your own dataset

  1. Resize all the videos to the same size, e.g. 256x256. The videos can be in '.gif' or '.mp4' format, or a folder with images; we recommend the latter: for each video, make a separate folder with all the frames in '.png' format. This format is loss-less and has better i/o performance (see the sketch after this list).

  2. Create a folder data/dataset_name with two sub-folders, train and test. Put the training videos in train and the testing videos in test.

  3. Create a config config/dataset_name.yaml. In dataset_params, specify the root directory: root_dir: data/dataset_name. Also adjust the number of epochs in train_params.
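As a sketch of step 1, the following turns one video into a folder of 256x256 '.png' frames using OpenCV (file names and paths are illustrative):

import os
import cv2

def video_to_frames(video_path, out_dir, size=(256, 256)):
    # Decode the video frame by frame, resize, and save loss-less pngs.
    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frame = cv2.resize(frame, size)
        cv2.imwrite(os.path.join(out_dir, '%07d.png' % idx), frame)
        idx += 1
    cap.release()

# Illustrative paths: one folder of frames per video under train/.
video_to_frames('raw_videos/clip.mp4', 'data/dataset_name/train/clip')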

additional notes

citation:

@misc{toledano2021,
  author = {Or Toledano and Yanir Marmor and Dov Gertz},
  title = {Image Animation with Keypoint Mask},
  year = {2021},
  eprint = {2112.10457},
  archivePrefix = {arXiv},
  primaryClass = {cs.CV}
}

Old format (before paper):

@misc{toledano2021,
  author = {Or Toledano and Yanir Marmor and Dov Gertz},
  title = {Image Animation with Keypoint Mask},
  year = {2021},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/or-toledano/animation-with-keypoint-mask}},
  commit = {015b1f2d466658141c41ea67d7356790b5cded40}
}