DeFMO: Deblurring and Shape Recovery of Fast Moving Objects (CVPR 2021)

Overview

Evaluation, Training, Demo, and Inference of DeFMO

Denys Rozumnyi, Martin R. Oswald, Vittorio Ferrari, Jiri Matas, Marc Pollefeys

Qualitative results: https://www.youtube.com/watch?v=pmAynZvaaQ4

Pre-trained models

The pre-trained DeFMO models reported in the paper are available here: https://polybox.ethz.ch/index.php/s/M06QR8jHog9GAcF. Put the downloaded files into the ./saved_models sub-folder.
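
To make sure the download worked, one can try loading every checkpoint found in ./saved_models. This is only a minimal sanity-check sketch; it assumes the files are standard PyTorch checkpoints (the exact file names are not fixed here).

import glob
import torch

# Try to load every checkpoint found in ./saved_models (files are assumed to be
# standard PyTorch checkpoints saved as *.pt or *.pth).
checkpoints = glob.glob("./saved_models/*.pt") + glob.glob("./saved_models/*.pth")
if not checkpoints:
    raise FileNotFoundError("No checkpoints in ./saved_models - download them first.")

for path in checkpoints:
    state = torch.load(path, map_location="cpu")
    print(f"{path}: loaded {type(state).__name__}")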

Inference

For generating video temporal super-resolution:

python run.py --video example/falling_pen.avi

For generating temporal super-resolution of a single frame with the given background:

python run.py --im example/im.png --bgr example/bgr.png
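
To process several clips in one go, the same script can simply be called in a loop. The sketch below is only an illustration and assumes the clips sit in a hypothetical ./my_videos folder.

import glob
import subprocess

# Run DeFMO temporal super-resolution on every .avi clip in a folder.
# The folder name ./my_videos is only an example.
for video in sorted(glob.glob("./my_videos/*.avi")):
    print(f"Processing {video}")
    subprocess.run(["python", "run.py", "--video", video], check=True)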

Evaluation

After downloading the pre-trained models and the evaluation datasets, you can run

python eval_dataset.py

Synthetic dataset generation

For the dataset generation, please download the required source data, which includes the ShapeNet object models (see the licence note below).

Then, insert your paths in the renderer/settings.py file. To generate the dataset, run the following in the renderer sub-folder:

python run_render.py

Note that the full training dataset, with 50 object categories, 1000 objects per category, and 24 timestamps, takes up to 1 TB of storage. Due to its size and the ShapeNet licence, we cannot make the pre-generated dataset public; please generate it yourself using the steps above.
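
As a rough back-of-the-envelope check of that figure: the numbers above correspond to 1.2 million rendered timestamps, so an average of roughly 0.8 MB per rendered timestamp (an assumption; the real size depends on the resolution and channels set in renderer/settings.py) already adds up to about 1 TB.

# Back-of-the-envelope estimate based on the numbers quoted above.
categories = 50
objects_per_category = 1000
timestamps = 24
renders = categories * objects_per_category * timestamps  # 1,200,000 rendered timestamps

# Assumed average size per rendered timestamp (not measured; depends on the
# resolution and channels configured in renderer/settings.py).
approx_mb_per_render = 0.8
print(f"~{renders * approx_mb_per_render / 1e6:.2f} TB")  # ~0.96 TB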

Training

Set up all paths in main_settings.py and run

python train.py

Evaluation on real-world datasets

All evaluation datasets can be found at http://cmp.felk.cvut.cz/fmo/. We provide a download_datasets.sh script to download the Falling Objects, the TbD-3D, and the TbD datasets.
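
After running download_datasets.sh, a quick check that everything is in place can look like the sketch below; the folder names are assumptions and may differ from what the script actually creates.

import os

# Illustrative check that the three evaluation datasets were downloaded.
# The folder names are assumptions; adjust them to match download_datasets.sh.
for name in ["falling_objects", "TbD-3D", "TbD"]:
    print(f"{name}: {'found' if os.path.isdir(name) else 'missing'}")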

Reference

If you use this repository, please cite the following publication (https://arxiv.org/abs/2012.00595):

@inproceedings{defmo,
  author = {Denys Rozumnyi and Martin R. Oswald and Vittorio Ferrari and Jiri Matas and Marc Pollefeys},
  title = {DeFMO: Deblurring and Shape Recovery of Fast Moving Objects},
  booktitle = {CVPR},
  address = {Nashville, Tennessee, USA},
  month = jun,
  year = {2021}
}
Comments
  • Question about training set

    Hi, thanks for your generous sharing.

    I have a question about the training set generation in your work. I generated a training set following your code, and its size is about 100 GB, far less than 1 TB. Is there anything wrong?

    Thanks.

    opened by fan-hd 11
  • Apply your model on custom longer video clips

    Hi, thank you for releasing your code.

    Can your model be applied to custom videos of a high-speed train crossing? The video clips last from 3 to 10 seconds; my idea was to preprocess them with your code in order to keep the same frame rate and obtain better video quality for later object detection. This is an example frame from the original video clip:

    [screenshot: vlcsnap-2021-05-25-15h27m32s030]

    I tried to run your code on a video of about 6 seconds, and the result was a much longer video (about 13 minutes) with a lower level of detail; I'm probably doing something wrong. This is an example frame from the output video clip:

    [screenshot: vlcsnap-2021-05-25-15h26m22s237]

    How can I correctly reconstruct the quality of single frames using all the information contained in the video?

    opened by fabiozappo 4
  • Question about comparison with Jin et al.'s work (CVPR2018)

    Hi, thank you for your interesting work! I have a question about the comparison of methods in your work. When making comparisons, did you retrain Jin et al.'s model ("Learning to Extract a Video Sequence from a Single Motion-Blurred Image" from CVPR 2018), or did you just use their pre-trained checkpoints? I couldn't find the training code on their github page.

    opened by zzh-tech 2
  • Padding in Time-Consistency Loss

    Hi,

    Congratulations!

    I found that "padding = tuple(side // 10 for side in sh[:2]) + (0,)" is used for the normalized cross-correlation. Does it only apply padding to the height axis, since the padding tuple will be of size (4//10, H//10, 0)?

    Thanks a lot.

    opened by JLiu-Edinburgh 1
  • run on google colab!

    I'm confused and need to run the code on Google Colab, or more explanation about how to run the code in VS Code or elsewhere. If someone knows how, please help me.

    opened by ganikas 3
Releases: v1.0
Owner: Denys Rozumnyi, PhD student at ETH Zurich.