Seeing Dynamic Scene in the Dark: High-Quality Video Dataset with Mechatronic Alignment (ICCV2021)

Overview


This is a PyTorch project for the paper "Seeing Dynamic Scene in the Dark: High-Quality Video Dataset with Mechatronic Alignment" by Ruixing Wang, Xiaogang Xu, Chi-Wing Fu, Jiangbo Lu, Bei Yu and Jiaya Jia, presented at ICCV 2021.

Introduction

It is important to enhance low-light videos, yet previous work is mostly trained on paired static images or paired videos of static scenes. We instead propose a new dataset, formed with our new capture strategy, that contains high-quality spatially-aligned video pairs of dynamic scenes under low- and normal-light conditions. We achieve this by building a mechatronic system to precisely control the dynamics during video capture, and we further align the video pairs, both spatially and temporally, by identifying the system's uniform-motion stage. Besides the dataset, we also propose an end-to-end framework, in which we design a self-supervised strategy to reduce noise while enhancing illumination based on the Retinex theory.

paper link

SDSD dataset

The SDSD dataset consists of video pairs of dynamic scenes, each pair containing a low-light and a normal-light video. The dataset has two parts, i.e., the indoor subset and the outdoor subset. There are 70 video pairs in the indoor subset and 80 video pairs in the outdoor subset.

All data is hosted on Baidu Pan (extraction code: zcrb):
indoor_np: the data of the indoor subset used for training; all video frames are saved as .npy files at a resolution of 512 x 960 for fast training.
outdoor_np: the data of the outdoor subset used for training; all video frames are saved as .npy files at a resolution of 512 x 960 for fast training.
indoor_png: the original video data of the indoor subset; all frames are saved as .png files at a resolution of 1080 x 1920.
outdoor_png: the original video data of the outdoor subset; all frames are saved as .png files at a resolution of 1080 x 1920.
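
As a quick sanity check after downloading, a frame from the *_np subsets can be loaded with NumPy. This is a minimal sketch; the pair folder and file name below are hypothetical, so adjust them to your download:

import numpy as np

# Hypothetical frame path; adjust "pair1" and the file name to your download.
frame = np.load('./dataset/indoor_np/GT/pair1/00000.npy')
print(frame.shape)  # expected (512, 960, 3): the *_np frames are 512 x 960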

The evaluation setting is as follows:

  1. Randomly select 12 scenes from the indoor subset and take the others as training data. The performance on indoor scenes is computed on the first 30 frames of each of these 12 scenes, i.e., 360 frames in total.
  2. Randomly select 13 scenes from the outdoor subset and take the others as training data. The performance on outdoor scenes is computed on the first 30 frames of each of these 13 scenes, i.e., 390 frames in total. (The split of training and testing is specified by "testing_dir" in the corresponding config file; see the sketch after this list.)
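
A minimal sketch of constructing such a random split, assuming the directory layout shown below; the authoritative split is the one given by "testing_dir" in the config files:

import os
import random

# List the scene folders under the ground-truth directory of the indoor subset.
scenes = sorted(os.listdir('./dataset/indoor/GT'))
random.seed(0)                               # fix the seed so the split is reproducible
test_scenes = random.sample(scenes, 12)      # 12 held-out indoor scenes
train_scenes = [s for s in scenes if s not in test_scenes]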

The arrangement of the dataset is as follows:
--indoor/outdoor
----GT (the videos under normal light)
--------pair1
--------pair2
--------...
----LQ (the videos under low light)
--------pair1
--------pair2
--------...

After downloading the dataset, place it in './dataset' (you can also place the dataset elsewhere, as long as you modify "path_to_dataset" in the corresponding config file).

The SMID dataset for training

Different from the original setting of SMID, our work aims to enhance sRGB videos rather than RAW videos. Thus, we first convert the RAW data to sRGB data with rawpy. You can download the processed dataset for experiments from the following link: Baidu Pan (extraction code: btux).
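
For reference, a minimal sketch of such a RAW-to-sRGB conversion with rawpy; the file names are hypothetical, and the postprocessing parameters used for the released data may differ:

import rawpy
import imageio

# Hypothetical input/output file names.
with rawpy.imread('frame.ARW') as raw:
    rgb = raw.postprocess()      # demosaic the RAW data and convert to sRGB
imageio.imwrite('frame.png', rgb)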

The arrangement of the dataset is as follows:
--smid
----SMID_Long_np (the frames under normal light)
--------0001
--------0002
--------...
----SMID_LQ_np (the frames under low light)
--------0001
--------0002
--------...

After downloading the dataset, place it in './dataset'. The arrangement of this dataset is the same as that of SDSD. You can also place the dataset elsewhere, as long as you modify "path_to_dataset" in the corresponding config file.

Project Setup

First, install Python 3. We advise you to install Python 3 and PyTorch with Anaconda:

conda create --name py36 python=3.6
source activate py36

Clone the repo and install the remaining requirements:

cd $HOME
git clone --recursive [email protected]:dvlab-research/SDSD.git
cd SDSD
pip install -r requirements.txt

Then compile the DCN (deformable convolution) library:

python setup.py build
python setup.py develop
python setup.py install
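
If compilation fails, a common cause is a CPU-only PyTorch build or a mismatch with the local CUDA toolkit; a quick check:

import torch

# The DCN extension is compiled against your local CUDA toolkit, so PyTorch
# must be a CUDA-enabled build that matches the installed toolkit version.
print(torch.__version__)
print(torch.cuda.is_available())  # should print True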

Train

The training on indoor subset of SDSD:

python -m torch.distributed.launch --nproc_per_node 1 --master_port 4320 train.py -opt options/train/train_in_sdsd.yml --launcher pytorch

The training on outdoor subset of SDSD:

python -m torch.distributed.launch --nproc_per_node 1 --master_port 4320 train.py -opt options/train/train_out_sdsd.yml --launcher pytorch

The training on SMID:

python -m torch.distributed.launch --nproc_per_node 1 --master_port 4322 train.py -opt options/train/train_smid.yml --launcher pytorch

Quantitative Test

We use PSNR and SSIM as the metrics for evaluation.
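
For reference, both metrics can be computed per frame with scikit-image. This is a sketch of the standard definitions; details such as data-range handling in our evaluation script may differ:

import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_frame(enhanced, gt):
    # Both inputs are H x W x 3 uint8 frames.
    psnr = peak_signal_noise_ratio(gt, enhanced, data_range=255)
    # On scikit-image < 0.19, replace channel_axis=2 with multichannel=True.
    ssim = structural_similarity(gt, enhanced, channel_axis=2, data_range=255)
    return psnr, ssim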

For the evaluation on the indoor subset of SDSD, you should write the location of the checkpoint into "pretrain_model_G" of options/test/test_in_sdsd.yml and use the following command line:

python quantitative_test.py -opt options/test/test_in_sdsd.yml

For the evaluation on the outdoor subset of SDSD, you should write the location of the checkpoint into "pretrain_model_G" of options/test/test_out_sdsd.yml and use the following command line:

python quantitative_test.py -opt options/test/test_out_sdsd.yml

For the evaluation on SMID, you should write the location of the checkpoint into "pretrain_model_G" of options/test/test_smid.yml and use the following command line:

python quantitative_test.py -opt options/test/test_smid.yml

Pre-trained Model

You can download our trained models using the following link: https://drive.google.com/file/d/1_V0Dxtr4dZ5xZuOsU1gUIUYUDKJvj7BZ/view?usp=sharing

The model trained on the indoor subset of SDSD: indoor_G.pth
The model trained on the outdoor subset of SDSD: outdoor_G.pth
The model trained on SMID: smid_G.pth
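
Before writing a checkpoint path into "pretrain_model_G", you can sanity-check the file with PyTorch; a minimal sketch, assuming the .pth stores a plain state_dict as in EDVR-style codebases:

import torch

# Assumes the checkpoint is a plain state_dict (a dict of parameter tensors).
state_dict = torch.load('indoor_G.pth', map_location='cpu')
print(len(state_dict), 'parameter tensors')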

Qualitative Test

We provide a script to visualize the enhanced frames. Please download the pretrained models or use your own trained models, and then use one of the following command lines:

python qualitative_test.py -opt options/test/test_in_sdsd.yml
python qualitative_test.py -opt options/test/test_out_sdsd.yml
python qualitative_test.py -opt options/test/test_smid.yml

Citation Information

If you find the project useful, please cite:

@inproceedings{wang2021sdsd,
  title={Seeing Dynamic Scene in the Dark: High-Quality Video Dataset with Mechatronic Alignment},
  author={Wang, Ruixing and Xu, Xiaogang and Fu, Chi-Wing and Lu, Jiangbo and Yu, Bei and Jia, Jiaya},
  booktitle={ICCV},
  year={2021}
}

Acknowledgments

This source code is inspired by EDVR.

Contributions

If you have any questions/comments/bug reports, feel free to e-mail the author Xiaogang Xu ([email protected]).
