PyTorch code for our paper "Gated Multiple Feedback Network for Image Super-Resolution" (BMVC2019)

Overview

Gated Multiple Feedback Network for Image Super-Resolution

This repository contains the PyTorch implementation for the proposed GMFN [arXiv].

The framework of our proposed GMFN. The colored arrows among different time steps denote the multiple feedback connections. The high-level information carried by them helps low-level features become more representative.

Demo

Clone SRFBN as the backbone and satisfy its requirements.
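A minimal setup sketch (the SRFBN repository URL below is assumed to be the official one; install its dependencies as listed in its own README):

git clone https://github.com/Paper99/SRFBN_CVPR19.git
# then install the dependencies (PyTorch, etc.) that SRFBN's README lists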

Test

  1. Copy ./networks/gmfn_arch.py into SRFBN_CVPR19/networks/

  2. Download the pre-trained models from Google Drive or Baidu Netdisk, unzip them, and place them into SRFBN_CVPR19/models.

  3. Copy ./options/test/ to SRFBN_CVPR19/options/test/.

  4. Run cd SRFBN_CVPR19 followed by one of the following commands to evaluate on Set5:

python test.py -opt options/test/test_GMFN_x2.json
python test.py -opt options/test/test_GMFN_x3.json
python test.py -opt options/test/test_GMFN_x4.json
  5. Finally, the PSNR/SSIM values for Set5 are printed on your screen, and the reconstructed images can be found in ./results.

To test GMFN on other standard SR benchmarks or on your own images, please refer to the instructions in SRFBN. An end-to-end run of the steps above is sketched below.
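Putting the test steps together (this assumes the pre-trained models from step 2 have already been unzipped into SRFBN_CVPR19/models):

cp ./networks/gmfn_arch.py SRFBN_CVPR19/networks/
cp ./options/test/*.json SRFBN_CVPR19/options/test/
cd SRFBN_CVPR19
# evaluate on Set5 at all three scale factors
for scale in 2 3 4; do python test.py -opt options/test/test_GMFN_x${scale}.json; done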

Train

  1. Prepare the training set according to this (1-3).
  2. Modify ./options/train/train_GMFN.json by following the instructions in ./options/train/README.md.
  3. Run commands:
cd SRFBN_CVPR19
python train.py -opt options/train/train_GMFN.json
  4. You can monitor the training process in ./experiments.

  5. Finally, follow the test pipeline above to evaluate the model you trained yourself (see the sketch below).
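A hedged sketch of evaluating a self-trained checkpoint (the checkpoint name best_ckp.pth comes from SRFBN's training output; the exact JSON field holding the pretrained model path is documented in SRFBN's options README):

cd SRFBN_CVPR19
# point the pretrained model path in options/test/test_GMFN_x2.json at your own
# checkpoint, e.g. experiments/<your_run>/epochs/best_ckp.pth, then run:
python test.py -opt options/test/test_GMFN_x2.json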

Performance

Quantitative Results

Quantitative evaluation under scale factors x2, x3 and x4. The best performance is shown in bold and the second best performance is underlined.

More Qualitative Results (x4)

Citation

If you find our work useful in your research or publications, please consider citing:

@inproceedings{li2019gmfn,
    author = {Li, Qilei and Li, Zhen and Lu, Lu and Jeon, Gwanggil and Liu, Kai and Yang, Xiaomin},
    title = {Gated Multiple Feedback Network for Image Super-Resolution},
    booktitle = {The British Machine Vision Conference (BMVC)},
    year = {2019}
}

@inproceedings{li2019srfbn,
    author = {Li, Zhen and Yang, Jinglei and Liu, Zheng and Yang, Xiaomin and Jeon, Gwanggil and Wu, Wei},
    title = {Feedback Network for Image Super-Resolution},
    booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
    year = {2019}
}
Comments
  • Approximately how many epochs are needed to reach the results in the paper (x4 SR)?

    Hi, liqilei. After running about 700 epochs, the best result I get on the validation set is 32.41. I want to know whether my training process seems problematic. How long did it take you to reach 32.47 with SRFBN during training, and how long to reach 32.70? Thank you.

    opened by Senwang98 7
  • Train error: size does not match

    CUDA_VISIBLE_DEVICES=0 python train.py -opt options/train/train_GMFN.json

    I trained with the CelebA dataset and got the following error:

    ===> Training Epoch: [1/1000]... Learning Rate: 0.000200
    Epoch: [1/1000]: 0%| | 0/251718 [00:00<?, ?it/s]
    Traceback (most recent call last):
      File "train.py", line 131, in <module>
        main()
      File "train.py", line 69, in main
        iter_loss = solver.train_step()
      File "/exp_sr/SRFBN/solvers/SRSolver.py", line 104, in train_step
        loss_steps = [self.criterion_pix(sr, split_HR) for sr in outputs]
      File "/exp_sr/SRFBN/solvers/SRSolver.py", line 104, in <listcomp>
        loss_steps = [self.criterion_pix(sr, split_HR) for sr in outputs]
      File "/toolscnn/env_pyt0.4_py3.5_awsrn/lib/python3.5/site-packages/torch/nn/modules/module.py", line 477, in __call__
        result = self.forward(*input, **kwargs)
      File "/toolscnn/env_pyt0.4_py3.5_awsrn/lib/python3.5/site-packages/torch/nn/modules/loss.py", line 87, in forward
        return F.l1_loss(input, target, reduction=self.reduction)
      File "/toolscnn/env_pyt0.4_py3.5_awsrn/lib/python3.5/site-packages/torch/nn/functional.py", line 1702, in l1_loss
        input, target, reduction)
      File "/toolscnn/env_pyt0.4_py3.5_awsrn/lib/python3.5/site-packages/torch/nn/functional.py", line 1674, in _pointwise_loss
        return lambd_optimized(input, target, reduction)
    RuntimeError: input and target shapes do not match: input [16 x 3 x 192 x 192], target [16 x 3 x 48 x 48] at /pytorch/aten/src/THCUNN/generic/AbsCriterion.cu:12

    opened by yja1 3
  • Not an issue

    Hey @Paper99,

    Thanks for sharing your code! I wonder if it is possible to help with visualizing feature maps as you did in Figure 4 of your paper.

    Thanks

    opened by Auth0rM0rgan 1
  • My training result with scale = 2

    Hi, after training on DIV2K, I get the following final results (using best_ckp.pth to test):

    set5:38.16/0.9610
    set14:33.91/0.9203
    urban100:32.81/0.9349
    B100:32.30/0.9011
    manga109:39.01/0.9776
    

    These seem much lower than the results in your paper.

    opened by Senwang98 6
Owner
Qilei Li