Torch implementation of "Enhanced Deep Residual Networks for Single Image Super-Resolution"

Overview

NTIRE2017 Super-resolution Challenge: SNU_CVLab

Introduction

This is our project repository for CVPR 2017 Workshop (2nd NTIRE).

We, Team SNU_CVLab (Bee Lim, Sanghyun Son, Heewon Kim, Seungjun Nah, and Kyoung Mu Lee of Computer Vision Lab, Seoul National University), are the winners of the NTIRE2017 Challenge on Single Image Super-Resolution.

Our paper was published in CVPR 2017 workshop (2nd NTIRE), and won the Best Paper Award of the workshop challenge track.

Please refer to our paper for details.

If you find our work useful in your research or publication, please cite our work:

[1] Bee Lim, Sanghyun Son, Heewon Kim, Seungjun Nah, and Kyoung Mu Lee, "Enhanced Deep Residual Networks for Single Image Super-Resolution," 2nd NTIRE: New Trends in Image Restoration and Enhancement workshop and challenge on image super-resolution in conjunction with CVPR 2017. [PDF] [arXiv] [Slide]

@InProceedings{Lim_2017_CVPR_Workshops,
  author = {Lim, Bee and Son, Sanghyun and Kim, Heewon and Nah, Seungjun and Lee, Kyoung Mu},
  title = {Enhanced Deep Residual Networks for Single Image Super-Resolution},
  booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
  month = {July},
  year = {2017}
}

In this repository, we provide

  • Our model architecture description (EDSR, MDSR)
  • NTIRE2017 Super-resolution Challenge Results
  • Demo & Training code
  • Trained models (EDSR, MDSR)
  • Datasets we used (DIV2K, Flickr2K)
  • Super-resolution examples

The code is based on Facebook's Torch implementation of ResNet (facebook/fb.resnet.torch).

We also provide a PyTorch version of EDSR and MDSR. (Currently, only some models are available.)

Model Architecture

EDSR (single-scale model; we provide x2, x3, and x4 models).

EDSR

MDSR (multi-scale model; it handles x2, x3, and x4 super-resolution in a single model).

MDSR

Note that the MDSR architectures used for the challenge and described in the paper [1] are slightly different. During the challenge, MDSR differed between the two challenge tracks: we used scale-specific feature extraction modules for track 2 (unknown downscaling), but not for track 1 (bicubic downscaling).

We later unified MDSR in the paper [1] by including the scale-specific modules in both cases. From here on, unless marked as "challenge", we refer to the models described in the paper.

NTIRE2017 Super-resolution Challenge Results

We proposed two methods, which won 1st (EDSR) and 2nd (MDSR) place.

Challenge_result

We have also compared the super-resolution performance of our models with previous state-of-the-art methods.

Paper_result

About our code

Dependencies

  • Torch7
  • cuDNN
  • nccl (Optional, for faster GPU communication)

Our code is tested on Ubuntu 14.04 and 16.04 with Titan X GPUs (12GB VRAM).

Code

Clone this repository into any place you want. You may follow the example below.

makeReposit=/the/directory/as/you/wish   # choose any location
mkdir -p $makeReposit/; cd $makeReposit/
git clone https://github.com/LimBee/NTIRE2017.git

Quick Start (Demo)

You can test our super-resolution algorithm with your own images.

We assume the images are downsampled by bicubic interpolation.
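If you only have high-resolution images, you can generate approximately bicubic-downsampled inputs yourself. Below is a minimal sketch using ImageMagick (not part of this repository); note that its Catrom filter only approximates MATLAB's bicubic kernel, so results may differ slightly from the official benchmark inputs.

# Sketch: approximate x2 bicubic downsampling with ImageMagick (illustrative file names).
# Catrom (Catmull-Rom) is close to, but not identical to, MATLAB's bicubic filter.
convert myImage.png -filter Catrom -resize 50% img_input/myImage.png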

Model          Scale  File Name                Self Ensemble  # ResBlocks  # Filters  # Parameters
-------------  -----  -----------------------  -------------  -----------  ---------  ------------
EDSR baseline  x2     baseline_x2.t7           X              16           64         1.5M
EDSR baseline  x3     baseline_x3.t7           X              16           64         1.5M
EDSR baseline  x4     baseline_x4.t7           X              16           64         1.5M
MDSR baseline  Multi  baseline_multiscale.t7   X              16           64         3.2M
EDSR           x2     EDSR_x2.t7               X              32           256        43M
EDSR           x3     EDSR_x3.t7               X              32           256        43M
EDSR           x4     EDSR_x4.t7               X              32           256        43M
MDSR           Multi  MDSR.t7                  X              80           64         8.0M
EDSR+          x2     EDSR_x2.t7               O              32           256        43M
EDSR+          x3     EDSR_x3.t7               O              32           256        43M
EDSR+          x4     EDSR_x4.t7               O              32           256        43M
MDSR+          Multi  MDSR.t7                  O              80           64         8.0M

(Self Ensemble: O = self-ensemble applied, X = not applied.)

  1. Download our models

    cd $makeReposit/NTIRE2017/demo/model/
    
    # Our models for the paper[1]
    wget https://cv.snu.ac.kr/research/EDSR/model_paper.tar

    Or, use the link: model_paper.tar

    (If you would like to run the models we used during the challenge, please contact us.)

    After downloading the .tar files, make sure that the model files are placed in proper locations. For example,

    $makeReposit/NTIRE2017/demo/model/bicubic_x2.t7
    $makeReposit/NTIRE2017/demo/model/bicubic_x3.t7
    ...
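    For example, assuming the archive unpacks its .t7 files directly into the current directory, you can extract and verify it like this (a sketch):

    cd $makeReposit/NTIRE2017/demo/model/
    tar -xvf model_paper.tar
    ls *.t7    # the model files listed above should now appear here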
  2. Place your low-resolution test images at

    $makeReposit/NTIRE2017/demo/img_input/

    The demo code reads images in .jpg, .jpeg, and .png formats.

  3. Run test.lua

    You can run different models and scales by changing input arguments.

    # To run for scale 2, 3, or 4, set -scale as 2, 3, or 4
    # To run EDSR+ and MDSR+, you need to set -selfEnsemble as true
    
    cd $makeReposit/NTIRE2017/demo
    
    # Test EDSR (scale 2)
    th test.lua -model EDSR_x2 -selfEnsemble false
    
    # Test EDSR+ (scale 2)
    th test.lua -model EDSR_x2 -selfEnsemble true
    
    # Test MDSR (scale 2)
    th test.lua -model MDSR -scale 2 -selfEnsemble false
    
    # Test MDSR+ (scale 2)
    th test.lua -model MDSR -scale 2 -selfEnsemble true

    (Note: to run MDSR, the model file name must include multiscale or MDSR, e.g. multiscale_blahblahblah.t7.)

    The result images will be located at

    $makeReposit/NTIRE2017/demo/img_output/
    • Here are some optional arguments you can adjust (a combined example follows this list):
    # You can test our model with multiple GPUs. (n = 1, 2, 4)
    -nGPU       [n]
    
    # Directory of the dataset. The default is /var/tmp/dataset
    -dataDir    [$makeData]
    -dataset    [DIV2K | myData]
    -save       [Folder name]
    
    # Self-ensemble averages the outputs over transformed (flipped/rotated) inputs;
    # see our paper [1] for details.
    -selfEnsemble   [true | false]
    
    # Reduce chopSize if you see 'out of memory'.
    # The optimal value of S depends on your maximum GPU memory.
    -chopSize   [S]
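    For example, a single run combining several of these options might look like the following sketch (the chopSize value is illustrative; tune it to your GPU memory):

    cd $makeReposit/NTIRE2017/demo
    # EDSR+ (scale 4) on 2 GPUs with a reduced chop size (illustrative value)
    th test.lua -model EDSR_x4 -selfEnsemble true -nGPU 2 -chopSize 40000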
  4. (Optional) Evaluate PSNR and SSIM if you have ground-truth HR images

    Place the GT images at

    $makeReposit/NTIRE2017/demo/img_target

    Evaluation is done by running the MATLAB script.

    matlab -nodisplay <evaluation.m

    If you don't want to calculate SSIM, modify the evaluation.m file as below. (Calculating SSIM on large 3-channel images is very slow.)

    line 6:     psnrOnly = false; -> psnrOnly = true;
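    If you prefer to make that change from the shell, a one-line GNU sed substitution does it (assuming evaluation.m contains the exact string shown above):

    sed -i 's/psnrOnly = false;/psnrOnly = true;/' evaluation.m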
    

You can reproduce our final results by running makeFinal.sh in NTIRE2017/demo directory. Please uncomment the command you want to execute in the file.

sh makeFinal.sh

Dataset

If you want to train or evaluate our models with the DIV2K or Flickr2K dataset, please download the dataset from here. Place the tar file at any location you want (we recommend /var/tmp/dataset/). If the dataset is located elsewhere, change the -datadir argument accordingly for training and testing.

  • DIV2K from NTIRE2017

    makeData=/var/tmp/dataset/ # We recommend this path, but you can freely change it.
    mkdir -p $makeData/; cd $makeData/
    tar -xvf DIV2K.tar

    You should have the following directory structure:

    /var/tmp/dataset/DIV2K/DIV2K_train_HR/0???.png
    /var/tmp/dataset/DIV2K/DIV2K_train_LR_bicubic/X?/0???.png
    /var/tmp/dataset/DIV2K/DIV2K_train_LR_unknown/X?/0???.png
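    As a quick sanity check (the DIV2K training set contains 800 HR images), you can count the extracted files:

    ls /var/tmp/dataset/DIV2K/DIV2K_train_HR/ | wc -l    # expect 800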

  • Flickr2K dataset collected by ourselves using Flickr API

    makeData=/var/tmp/dataset/
    mkdir -p $makeData/; cd $makeData/
    wget https://cv.snu.ac.kr/research/EDSR/Flickr2K.tar
    tar -xvf Flickr2K.tar

    You should have the following directory structure:

    /var/tmp/dataset/Flickr2K/Flickr2K_HR/00????.png
    /var/tmp/dataset/Flickr2K/Flickr2K_train_LR_bicubic/X?/00????x?.png
    /var/tmp/dataset/Flickr2K/Flickr2K_train_LR_unknown/X?/00????x?.png
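    A similar check works for Flickr2K, which contains 2650 HR images:

    ls /var/tmp/dataset/Flickr2K/Flickr2K_HR/ | wc -l    # expect 2650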

    We also provide the code we used for collecting the Flickr2K images at

    $makeReposit/NTIRE2017/code/tools/Flickr2K/

    Use your own Flickr API key to run the script.

    During the challenge, we additionally generated training data by training simple downsampler networks on the DIV2K track 2 data.
    You can download the downsampler models from here.

To make data loading faster, you can convert the dataset into binary .t7 files

  • Convert the DIV2K dataset from .png into .t7 files
    cd $makeReposit/NTIRE2017/code/tools
    
    # Choose one among below
    
    # This command generates multiple t7 files for
    # each image in DIV2K_train_HR folder (Requires ~2GB RAM for training)
    th png_to_t7.lua -apath $makeData -dataset DIV2K -split true
    
    # This command generates a single t7 file that contains
    # every image in DIV2K_train_HR folder (Requires ~16GB RAM for training)
    th png_to_t7.lua -apath $makeData -dataset DIV2K -split false
  • Convert Flickr2K dataset into .t7 files
    cd $makeReposit/NTIRE2017/code/tools
    
    # This command generates multiple t7 files for
    # each image in Flickr2K_HR folder
    th png_to_t7.lua -apath $makeData -dataset Flickr2K -split true
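    After conversion, you can spot-check that the binary files exist. This sketch assumes the .t7 files are written under $makeData; adjust the path if your setup writes them elsewhere:

    find $makeData -name '*.t7' | head    # show a sample of the generated .t7 files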

You can also use .png files directly; see the Training section below for details.

Training

  1. To train our baseline model, please run the following command:

    th main.lua         # This model is not our final model!
    • Here are some optional arguments you can adjust (an illustrative invocation follows this list). You can check out the details in NTIRE2017/code/opts.lua.
      # You can train the model with multiple GPUs. (Not for the multi-scale model.)
      -nGPU       [n]
      
      # Number of threads for data loading.
      -nThreads   [n]   
      
      # Please specify this directory. Default is /var/tmp/dataset
      -datadir    [$makeData]  
      
      # You can make an experiment folder with the name you want.
      -save       [Folder name]
      
      # You can resume your experiment from the last checkpoint.
      # Please do not set -save and -load at the same time.
      -load       [Folder name]     
      
      # Memory usage increases from png to t7 to t7pack,
      # while the required CPU & storage speed decreases in the same order.
      -datatype   [png | t7 | t7pack]
      
      # Increase splitBatch if you see 'out of memory' during training.
      # S should be a power of 2. (1, 2, 4, ...)
      -splitBatch [S]
      
      # Reduce chopSize if you see 'out of memory' during testing.
      # The optimal value of S depends on your maximum GPU memory.
      -chopSize   [S]
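    Putting a few of these options together, an illustrative run and a later resume might look like the following sketch (the experiment folder name is arbitrary):

    cd $makeReposit/NTIRE2017/code
    # baseline training with packed binary data, saved under a named experiment folder
    th main.lua -datatype t7pack -save myExperiment
    # resume the same experiment later from its last checkpoint (do not combine with -save)
    th main.lua -load myExperiment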
  2. To train our EDSR and MDSR, please use the training.sh in NTIRE2017/code directory. You have to uncomment the line you want to execute.

    cd $makeReposit/NTIRE2017/code
    sh training.sh

    Some models require a pre-trained bicubic scale 2 or bicubic multiscale model. Here, we assume that you have already downloaded bicubic_x2.t7 and bicubic_multiscale.t7 into the NTIRE2017/demo/model directory. Otherwise, you can create them yourself. It is also possible to start the training from scratch by removing the -preTrained option in training.sh.


Results

(result_1 through result_20: super-resolution example images)

NTIRE2017 SR Challenge: Unknown Down-sampling Track

(unknown_1, unknown_2: example results on the unknown down-sampling track)
