SLAMP: Stochastic Latent Appearance and Motion Prediction

Overview

Official implementation of the paper SLAMP: Stochastic Latent Appearance and Motion Prediction (Adil Kaan Akan, Erkut Erdem, Aykut Erdem, Fatma Guney), accepted and presented at ICCV 2021.

Article

Preprint

Project Website

Pretrained Models

Requirements

All models were trained with Python 3.7.6 and PyTorch 1.4.0 using CUDA 10.1.

A list of required Python packages is available in the requirements.txt file.
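They can be installed with pip, for example:

pip install -r requirements.txt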

Datasets

For dataset preparation, we followed SRVP's code. Please follow the links below if you want to construct the datasets yourself.

Stochastic Moving MNIST

KTH

BAIR

KITTI

For KITTI, you need to download the Raw KITTI dataset and extract the zip files. You can follow the official KITTI page.

We recommend preprocessing every image in the dataset to a size of (w=310, h=92) in advance. You can then disable the resizing operation in the data loaders, which speeds up training.
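A minimal offline resizing sketch (hypothetical and not part of the repository; it assumes the extracted Raw KITTI images are PNG files and overwrites them in place) could look like:

import pathlib
from PIL import Image

raw_kitti_root = pathlib.Path("/path/to/kitti_raw")  # adjust to your extracted Raw KITTI directory

for img_path in raw_kitti_root.rglob("*.png"):
    with Image.open(img_path) as img:
        resized = img.resize((310, 92))  # PIL expects (width, height)
    resized.save(img_path)  # overwrite after the source handle is closed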

Cityscapes

For Cityscapes, you need to download leftImg8bit_sequence from the official Cityscapes page.

leftImg8bit_sequence contains 30-frame snippets (17Hz) surrounding each left 8-bit image (-19 | +10) from the train, val, and test sets (150000 images).

As with KITTI, we recommend preprocessing every image in the dataset to a size of (w=256, h=128) in advance. You can then disable the resizing operation in the data loaders, which speeds up training.

Training

To train a new model, use the train.py script.

The data directory ($DATA_DIR) and the save directory ($SAVE_DIR) must be given with the options --data_root $DATA_DIR --log_dir $SAVE_DIR. To train on GPU, add the --device option.
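The basic invocation therefore looks like the following sketch, with the dataset-specific options listed below appended to it:

python train.py --data_root $DATA_DIR --log_dir $SAVE_DIR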

  • for Stochastic Moving MNIST:
--n_past 5 --n_future 10 --n_eval 25 --z_dim_app 20 --g_dim_app 128 --z_dim_motion 20
--g_dim_motion 128 --last_frame_skip --running_avg --batch_size 32
  • for KTH:
--dataset kth --n_past 10 --n_future 10 --n_eval 40 --z_dim_app 50 --g_dim_app 128 --z_dim_motion 50 --model vgg
--g_dim_motion 128 --last_frame_skip --running_avg --sch_sampling 25 --batch_size 20
  • for BAIR:
--dataset bair --n_past 2 --n_future 10 --n_eval 30 --z_dim_app 64 --g_dim_app 128 --z_dim_motion 64 --model vgg
--g_dim_motion 128 --last_frame_skip --running_avg --sch_sampling 25 --batch_size 20 --channels 3
  • for KITTI:
--dataset kitti --n_past 10 --n_future 10 --n_eval 30 --z_dim_app 32 --g_dim_app 64 --z_dim_motion 32 --batch_size 8
--g_dim_motion 64 --last_frame_skip --running_avg --model vgg --niter 151 --channels 3
  • for Cityscapes:
--dataset cityscapes --n_past 10 --n_future 10 --n_eval 30 --z_dim_app 32 --g_dim_app 64 --z_dim_motion 32 --batch_size 7
--g_dim_motion 64 --last_frame_skip --running_avg --model vgg --niter 151 --channels 3 --epoch_size 1300
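
For example, a complete training command for Stochastic Moving MNIST would then look like this (a sketch combining the base invocation with the options above; add --device to train on GPU):

python train.py --data_root $DATA_DIR --log_dir $SAVE_DIR --n_past 5 --n_future 10 --n_eval 25 --z_dim_app 20 --g_dim_app 128 --z_dim_motion 20 --g_dim_motion 128 --last_frame_skip --running_avg --batch_size 32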

Testing

To evaluate a trained model, the script evaluate.py should be used as follows:

python evaluate.py --data_root $DATADIR --log_dir $LOG_DIR --model_path $MODEL_PATH

where $LOG_DIR is the directory where the results will be saved and $DATADIR is the directory containing the test set.

Important note: The directory containing the script must include a directory called lpips_weights that contains the v0.1 LPIPS weights (from the official repository of The Unreasonable Effectiveness of Deep Features as a Perceptual Metric).
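One way to obtain these weights is to copy them from the official repository into the SLAMP directory; the path below reflects the current layout of that repository and may differ between versions:

git clone https://github.com/richzhang/PerceptualSimilarity
mkdir -p lpips_weights
cp PerceptualSimilarity/lpips/weights/v0.1/*.pth lpips_weights/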

To run the evaluation on GPU, use the option --device.

Pretrained weights can be downloaded from Dropbox:

  • For MNIST:
wget https://www.dropbox.com/s/eseisehe2u0epiy/slamp_mnist.pth
  • For KTH:
wget https://www.dropbox.com/s/7m0806nt7xt9bz8/slamp_kth.pth
  • For BAIR:
wget https://www.dropbox.com/s/cl1pzs5trw3ltr0/slamp_bair.pth
  • For KITTI:
wget https://www.dropbox.com/s/p7wdboswakyj7yi/slamp_kitti.pth
  • For Cityscapes:
wget https://www.dropbox.com/s/lzwiivr1irffhsj/slamp_cityscapes.pth

PSNR, SSIM, and LPIPS results reported in the paper were obtained with the following options:

  • for Stochastic Moving MNIST:
python evaluate.py --data_root $DATADIR --log_dir $LOG_DIR --model_path $MODEL_PATH --n_past 5 --n_future 20
  • for KTH:
python evaluate.py --data_root $DATADIR --log_dir $LOG_DIR --model_path $MODEL_PATH --n_past 10 --n_future 30
  • for BAIR:
python evaluate.py --data_root $DATADIR --log_dir $LOG_DIR --model_path $MODEL_PATH --n_past 2 --n_future 28
  • for KITTI:
python evaluate.py --data_root $DATADIR --log_dir $LOG_DIR --model_path $MODEL_PATH --n_past 10 --n_future 20
  • for Cityscapes:
python evaluate.py --data_root $DATADIR --log_dir $LOG_DIR --model_path $MODEL_PATH --n_past 10 --n_future 20
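
For example, evaluating the pretrained KTH model downloaded above (assuming slamp_kth.pth is saved in the current directory) would be:

python evaluate.py --data_root $DATADIR --log_dir $LOG_DIR --model_path slamp_kth.pth --n_past 10 --n_future 30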

To calculate FVD results, you can use the calculate_fvd.py script as follows:

python calculate_fvd.py $LOG_DIR $SAMPLE_NAME

where $LOG_DIR is the directory containing the results generated by the evaluation script and $SAMPLE_NAME is the file containing the samples, such as psnr.npz, ssim.npz, or lpips.npz. The script prints the FVD value at the end.
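
For example, to compute FVD from the samples saved in psnr.npz:

python calculate_fvd.py $LOG_DIR psnr.npz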

How to Cite

Please cite our paper if you benefit from it or from this repository:

@InProceedings{Akan2021ICCV,
    author    = {Akan, Adil Kaan and Erdem, Erkut and Erdem, Aykut and Guney, Fatma},
    title     = {SLAMP: Stochastic Latent Appearance and Motion Prediction},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2021},
    pages     = {14728-14737}
}

Acknowledgments

We would like to thank the SRVP and SVG authors for making their repositories public. This repository contains several code segments from the SRVP and SVG repositories. We also thank Berkay Ugur Senocak for cleaning the code before release.

Comments
  • Details on KTH and BAIR Validation Sets

    Hi! Thanks for providing the implementation of SLAMP. In the data processing scripts (data/kth.py and data/bair.py), how do you generate kth_valset_40.npz and bair_valset_30.npz? Is it done by following SRVP's code for generating the test sets? Could you please provide some details on those sets? Thank you!

    opened by hanghang177 4
  • nsample missing argument

    Hi, while running your code, I unexpectedly hit an error due to a missing argument:

    File "/notebooks/slamp/helpers.py", line 362, in eval_step nsample = opt.nsample

    args.py does not define nsample anywhere. What does nsample mean? I suppose it should be the number of samples per batch during evaluation, i.e., the evaluation batch size. Thanks for reading.

    opened by eric-le-12 1