Implementation of Stochastic Image-to-Video Synthesis using cINNs.

Overview

Stochastic Image-to-Video Synthesis using cINNs

Official PyTorch implementation of Stochastic Image-to-Video Synthesis using cINNs accepted to CVPR2021.

teaser.mp4

arXiv | Project Page | Supplemental | Pretrained Models | BibTeX

Michael Dorkenwald, Timo Milbich, Andreas Blattmann, Robin Rombach, Kosta Derpanis*, Björn Ommer*, CVPR 2021

tl;dr We present a framework for both stochastic and controlled image-to-video synthesis. We bridge the gap between the image and video domain using conditional invertible neural networks and account for the inherent ambiguity with a learned, dedicated scene dynamics representation.

teaser

For any questions, issues, or recommendations, please contact Michael at m.dorkenwald(at)gmail.com. If our project is helpful for your research, please consider citing.

Table of Contents

  1. Requirements
  2. Running pretrained models
  3. Data preparation
  4. Evaluation
    1. Synthesis quality
    2. Diversity
  5. Training
    1. Stage 1: Video-to-Video synthesis
    2. Stage 2: cINN for Image-to-Video synthesis
  6. Shout-outs
  7. BibTeX

Requirements

A suitable conda environment named i2v can be created and activated with

conda env create -f environment.yaml
conda activate i2v

For this repository, CUDA version 11.1 is used. To suppress the annoying warnings from kornia, please run all Python scripts with -W ignore.
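
If you prefer not to pass -W ignore on every invocation, the same effect can be obtained from inside a script using the standard library; the snippet below is plain Python and not part of this repository:

import warnings

# Equivalent to launching the scripts with `python -W ignore ...`:
# silences the kornia warnings mentioned above (along with all others).
warnings.filterwarnings("ignore")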

Running pretrained models

You can test our method with the scripts below on the images in assets/GT_samples after placing the pre-trained model weights for the corresponding dataset, e.g. bair, in the models folder, e.g. models/bair/.

python -W ignore generate_samples.py -dataset landscape -gpu <gpu_id> -seq_length <sequence_length>

teaser

Moreover, one can also transfer an observed dynamic from a given video (first row) to an arbitrary starting frame using

python -W ignore generate_transfer.py -dataset landscape -gpu <gpu_id> 

teaser teaser

python -W ignore generate_samples.py -dataset bair -gpu <gpu_id> 

teaser

Our model can be extended to control specific factors, e.g. the endpoint location of the robot arm. Note that you need to download the BAIR dataset to run this script.

python -W ignore visualize_endpoint.py -dataset bair -gpu <gpu_id> -data_path <path2data>
Sample 1 Sample 2

or look only at the last frame of each generated sequence; these are similar since all videos were conditioned on the same endpoint

Sample 1 Sample 2
python -W ignore generate_samples.py -dataset iPER -gpu <GPU_ID>

teaser

python -W ignore generate_samples.py -dataset DTDB -gpu <GPU_ID> -texture fire

teaser

python -W ignore generate_samples.py -dataset DTDB -gpu <GPU_ID> -texture vegetation

teaser

python -W ignore generate_samples.py -dataset DTDB -gpu <GPU_ID> -texture clouds

teaser

python -W ignore generate_samples.py -dataset DTDB -gpu <GPU_ID> -texture waterfall

teaser

Data preparation

BAIR

To download the dataset to a given target directory <TARGETDIR>, run the following command

sh data/bair/download_bair.sh <TARGETDIR>

To convert the TensorFlow record files, run the following command

python data/bair/convert_bair.py --data_dir <DATADIR> --output_dir <TARGETDIR>

traj_256_to_511 is used for validation and traj_0_to_255 for testing. The resulting folder structure should be the following

$bair/train/
├── traj_512_to_767
│   ├── 1
│   │   ├── 0.png
│   │   ├── 1.png
│   │   ├── 2.png
│   │   ├── ...
│   ├── 2
│   ├── ...
├── ...
$bair/eval/
├── traj_256_to_511
│   ├── 1
│   │   ├── 0.png
│   │   ├── 1.png
│   │   ├── 2.png
│   │   ├── ...
│   ├── 2
│   ├── ...
$bair/test/
├── traj_0_to_255
│   ├── 1
│   │   ├── 0.png
│   │   ├── 1.png
│   │   ├── 2.png
│   │   ├── ...
│   ├── 2
│   ├── ...

Please cite the corresponding paper if you use the data.
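
As a quick sanity check of this layout, one can count the frames per sequence with a few lines of Python (a hypothetical helper, not part of the repository):

from pathlib import Path

def check_bair_split(split_dir):
    # Print the number of .png frames per sequence under one split,
    # e.g. $bair/train, following the layout shown above.
    for traj in sorted(Path(split_dir).iterdir()):
        if not traj.is_dir():
            continue
        for seq in sorted(traj.iterdir()):
            if not seq.is_dir():
                continue
            n_frames = len(list(seq.glob("*.png")))
            print(f"{traj.name}/{seq.name}: {n_frames} frames")

# e.g. check_bair_split("/path/to/bair/train")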

Landscape

Download the corresponding dataset from here, e.g. using gdown. To use our provided data loader, all images need to be renamed to frame0 ... frameX so that missing frames do not break the frame numbering. The following script can be used for this

python data/landscape/rename_images.py --data_dir <DATADIR> 
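
The renaming boils down to something like the sketch below; the .jpg extension and the assumption that the original filenames sort in temporal order are ours, and data/landscape/rename_images.py is the authoritative version:

from pathlib import Path

def rename_frames(video_dir, ext=".jpg"):
    # Rename all frames of one video folder to frame0<ext> .. frameN<ext> in their
    # sorted order, so that gaps left by missing frames disappear. Two passes via
    # temporary names avoid collisions with files already named frameX<ext>.
    frames = sorted(Path(video_dir).glob(f"*{ext}"))
    tmp = [f.rename(f.with_name(f"tmp_{i}{ext}")) for i, f in enumerate(frames)]
    for i, f in enumerate(tmp):
        f.rename(f.with_name(f"frame{i}{ext}"))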

In data/landscape we provide a list of videos that were used for training and testing. Please cite the corresponding paper if you use the data.

iPER

Download the dataset from here and run

python data/iPER/extract_iPER.py --raw_dir <DATADIR> --processed_dir <TARGETDIR>

to extract the frames. In data/iPER we provide a list of videos that were used for train, eval, and test. Please cite the corresponding paper if you use the data.
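
Conceptually, the extraction simply dumps every video into a folder of frames; a minimal OpenCV sketch of that idea (not the repository script itself) looks as follows:

import cv2
from pathlib import Path

def extract_frames(video_path, out_dir):
    # Write every frame of one video as frame0.png, frame1.png, ... into out_dir.
    Path(out_dir).mkdir(parents=True, exist_ok=True)
    cap = cv2.VideoCapture(str(video_path))
    idx = 0
    ok, frame = cap.read()
    while ok:
        cv2.imwrite(str(Path(out_dir) / f"frame{idx}.png"), frame)
        idx += 1
        ok, frame = cap.read()
    cap.release()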

Dynamic Textures

Download the corresponding dataset from here and unzip it. Please cite the corresponding paper if you use the data. The original mp4 files from DTDB can be downloaded from here.

Evaluation

After storing the data as described, the evaluation script for each dataset can be used.

Synthesis quality

We use the following metrics to measure synthesis quality: LPIPS, FID, FVD, and DTFVD. The latter was introduced in this work and is an FVD variant dedicated to dynamic textures. To use it, please download the weights of the DTI3D model from here and place them in the models folder, e.g. /models/DTI3D/. For more details on DTFVD, please see Sec. C3 in the supplemental. To compute the metrics above for a given dataset, please run

python -W ignore eval_synthesis_quality.py -gpu <gpu_id> -dataset <dataset> -data_path <path2data> -FVD True -LPIPS True -FID True -DTFVD True

For DTDB, please specify the dynamic texture you want to evaluate, e.g. fire

python -W ignore eval_synthesis_quality.py -gpu <gpu_id> -dataset DTDB -data_path <path2data> -texture fire -FVD True -LPIPS True -FID True -DTFVD True

Please cite our work if you use DTFVD. If you place the checkpoints outside this repository, please specify their location using the argument -chkpt <path_to_chkpt>.
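
As an illustration of how LPIPS is applied to videos, i.e. frame-wise, here is a small sketch using the lpips package; eval_synthesis_quality.py remains the reference implementation and may differ in details:

import torch
import lpips  # pip install lpips

def framewise_lpips(real, fake, net="alex"):
    # Mean LPIPS over the frames of two videos of shape (T, 3, H, W),
    # with pixel values scaled to [-1, 1].
    loss_fn = lpips.LPIPS(net=net)
    with torch.no_grad():
        dists = loss_fn(real, fake)   # one perceptual distance per frame
    return dists.mean().item()

# e.g. with random tensors standing in for ground-truth and generated frames:
# framewise_lpips(torch.rand(16, 3, 64, 64) * 2 - 1, torch.rand(16, 3, 64, 64) * 2 - 1)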

Diversity

We measure diversity by comparing different realizations of an example using pretrained VGG, I3D, and DTI3D backbones. The last two take the temporal structure of the data into account, whereas the VGG diversity score compares images frame-wise. To evaluate diversity for a given dataset, please run

python -W ignore eval_diversity.py -gpu <gpu_id> -dataset <dataset> -data_path <path2data> -DTI3D True -VGG True -I3D True -seq_length <length>

For DTDB, please specify the dynamic texture you want to evaluate, e.g. fire

python -W ignore eval_diversity.py -gpu <gpu_id> -dataset DTDB -data_path <path2data> -texture fire -DTI3D True -VGG True -I3D True 
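
Conceptually, the diversity score is an average pairwise distance between feature embeddings of different realizations of the same starting frame. The sketch below uses plain Euclidean distance; the exact distance and backbone handling are defined in eval_diversity.py:

import torch

def mean_pairwise_distance(features):
    # features: (N, D) embeddings of N realizations of one example, produced by
    # some backbone (VGG per frame, I3D/DTI3D per clip).
    d = torch.cdist(features, features)       # (N, N) Euclidean distances
    n = features.shape[0]
    return (d.sum() / (n * (n - 1))).item()   # average over off-diagonal entries

# e.g. mean_pairwise_distance(torch.randn(10, 2048))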

Training

The training of our models is divided into two consecutive stages. In stage 1, we learn an information-preserving video latent representation using a conditional generative model that reconstructs the given input video as faithfully as possible. After that, we learn a conditional INN to map the video latent representation to a residual space capturing the scene dynamics, conditioned on the starting frame and additional control factors. During inference, we can then sample new scene dynamics from the residual distribution and synthesize novel videos thanks to the bijective nature of the cINN. For more details, please check out our paper.
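
To make the sampling path concrete, here is a toy conditional INN built from affine coupling blocks. It only illustrates the bijection between video latent and residual space described above; it is not the architecture used in this repository:

import torch
import torch.nn as nn

class ConditionalCoupling(nn.Module):
    # One affine coupling block: transforms half of the vector with a scale/shift
    # predicted from the other half and the conditioning vector; exactly invertible.
    def __init__(self, dim, cond_dim, hidden=128):
        super().__init__()
        self.half = dim // 2
        self.net = nn.Sequential(
            nn.Linear(self.half + cond_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * (dim - self.half)),
        )

    def forward(self, x, c, reverse=False):
        x1, x2 = x[:, :self.half], x[:, self.half:]
        s, t = self.net(torch.cat([x1, c], dim=1)).chunk(2, dim=1)
        s = torch.tanh(s)  # keep scales bounded for stability
        y2 = (x2 - t) * torch.exp(-s) if reverse else x2 * torch.exp(s) + t
        return torch.cat([x1, y2], dim=1)

class ToyCINN(nn.Module):
    def __init__(self, dim, cond_dim, n_blocks=4):
        super().__init__()
        self.dim = dim
        self.blocks = nn.ModuleList(
            ConditionalCoupling(dim, cond_dim) for _ in range(n_blocks))

    def forward(self, v, c):
        # Training direction: video latent v -> residual z, conditioned on c.
        for block in self.blocks:
            v = torch.roll(block(v, c), self.dim // 2, dims=1)
        return v

    def inverse(self, z, c):
        # Sampling direction: residual z -> video latent v (exact inverse of forward).
        for block in reversed(self.blocks):
            z = block(torch.roll(z, -self.dim // 2, dims=1), c, reverse=True)
        return z

cinn = ToyCINN(dim=64, cond_dim=32)
c = torch.randn(1, 32)   # stand-in for the starting-frame embedding
z = torch.randn(1, 64)   # sample from the residual Gaussian
v = cinn.inverse(z, c)   # video latent, to be decoded by the stage-1 model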

For logging our runs we used and recommend wandb. Please create a free account and add your username to the config. If you don't want to use it, the metrics are also logged in a CSV file and samples are written out in the specified checkpoint folder; in that case, please set the logging mode to offline. For logging (PyTorch) FVD, please download the weights of a PyTorch I3D from here and place them in models, e.g. /models/PI3D/. For logging DTFVD, please download the weights of the DTI3D model from here and place them in the models folder, e.g. /models/DTI3D/. Depending on the dataset, please specify either FVD or DTFVD under FVD in the config. For each provided pretrained model, we left the corresponding config file in its folder. If you want to run our model on a dataset we did not provide, please create a new config. Before you start a run, please specify the data path, save path, and the name of the run in the config.

Stage 1: Video-to-Video synthesis

To train the conditional generative model for video-to-video synthesis run the following command

python -W ignore -m stage1_VAE.main -gpu <gpu_id> -cf stage1_VAE/configs/<config>

Stage 2: cINN for Image-to-Video synthesis

Before we can train the cINN, we need to train an AE to obtain an encoder that embeds the starting frame for the cINN. You can use the one provided or train your own by running

python -W ignore -m stage2_cINN.AE.main -gpu <gpu_id> -cf stage2_cINN/AE/configs/<config>

To train the cINN, we need to specify the location of the trained encoder as well as the first-stage model in the config. After that, training of the cINN can be started with

python -W ignore -m stage2_cINN.main -gpu <gpu_id> -cf stage2_cINN/configs/<config>

To reproduce the controlled video synthesis experiment, set control to True in bair_config.yaml to additionally condition the cINN on the endpoint location.

Shout-outs

Thanks to everyone who makes their code and models available. In particular,

  • The decoder architecture is inspired by SPADE
  • The great work and code of Stochastic Latent Residual Video Prediction SRVP
  • The 3D encoder and discriminator are based on 3D-ResNet, and the spatial discriminator is adapted from PatchGAN
  • The metrics used: LPIPS, PyTorch FID, and FVD

BibTeX

@misc{dorkenwald2021stochastic,
      title={Stochastic Image-to-Video Synthesis using cINNs}, 
      author={Michael Dorkenwald and Timo Milbich and Andreas Blattmann and Robin Rombach and Konstantinos G. Derpanis and Björn Ommer},
      year={2021},
      eprint={2105.04551},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}