Official PyTorch code for Mutual Affine Network for Spatially Variant Kernel Estimation in Blind Image Super-Resolution (MANet, ICCV2021)

Overview

Mutual Affine Network for Spatially Variant Kernel Estimation in Blind Image Super-Resolution (MANet, ICCV2021)

This repository is the official PyTorch implementation of Mutual Affine Network for Spatially Variant Kernel Estimation in Blind Image Super-Resolution (arxiv, supplementary).

🚀 🚀 🚀 News:


Existing blind image super-resolution (SR) methods mostly assume that blur kernels are spatially invariant across the whole image. However, this assumption rarely holds for real images, whose blur kernels are usually spatially variant due to factors such as object motion and defocus. Hence, existing blind SR methods inevitably perform poorly in real applications. To address this issue, this paper proposes a mutual affine network (MANet) for spatially variant kernel estimation. Specifically, MANet has two distinctive features. First, it has a moderate receptive field so as to keep the locality of degradation. Second, it introduces a new mutual affine convolution (MAConv) layer that enhances feature expressiveness without increasing the receptive field, model size or computation burden. This is made possible by exploiting channel interdependence: each channel split is transformed by an affine transformation module whose input is the remaining channel splits. Extensive experiments on synthetic and real images show that the proposed MANet not only performs favorably for both spatially variant and invariant kernel estimation, but also leads to state-of-the-art blind SR performance when combined with non-blind SR methods.
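
As a rough illustration of the MAConv idea, here is a minimal PyTorch sketch with two channel splits; the 1x1-convolution affine branches, the missing activations and the final 3x3 convolution are simplifying assumptions for illustration only, not the exact MAConv layer implemented in this repository (see codes/models for that).

# Minimal sketch of the mutual affine convolution (MAConv) idea with two channel
# splits: each split is modulated by a scale/shift predicted from the OTHER split,
# then the modulated splits are concatenated and convolved.
import torch
import torch.nn as nn

class MAConvSketch(nn.Module):
    def __init__(self, in_channels, out_channels):
        super().__init__()
        self.c1 = in_channels // 2            # first channel split
        self.c2 = in_channels - self.c1       # second channel split
        # each affine branch sees only the other split and predicts scale + shift,
        # so feature expressiveness grows without enlarging the receptive field
        self.affine1 = nn.Conv2d(self.c2, 2 * self.c1, kernel_size=1)
        self.affine2 = nn.Conv2d(self.c1, 2 * self.c2, kernel_size=1)
        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1)

    def forward(self, x):
        x1, x2 = torch.split(x, [self.c1, self.c2], dim=1)
        scale1, shift1 = self.affine1(x2).chunk(2, dim=1)  # condition split 1 on split 2
        scale2, shift2 = self.affine2(x1).chunk(2, dim=1)  # condition split 2 on split 1
        y1 = x1 * scale1 + shift1
        y2 = x2 * scale2 + shift2
        return self.conv(torch.cat([y1, y2], dim=1))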

Requirements

  • Python 3.7, PyTorch >= 1.6, scipy >= 1.6.3
  • Other requirements: opencv-python
  • Platforms: Ubuntu 16.04 with CUDA 10.0 and cuDNN 7.5

Note: this repository is based on BasicSR. Please refer to their repository for a better understanding of the code framework.

Quick Run

Download stage3_MANet+RRDB_x4.pth from the release page and put it in ./pretrained_models. Then, run this command:

cd codes
python test.py --opt options/test/test_stage3.yml

Data Preparation

To prepare data, put the training and testing sets in ./datasets, e.g. ./datasets/DIV2K/HR/0801.png. Commonly used datasets can be downloaded here.

Training

Step 1: to train MANet, run this command:

python train.py --opt options/train/train_stage1.yml

Step 2: to train the non-blind RRDB, run this command:

python train.py --opt options/train/train_stage2.yml

Step 3: to fine-tune RRDB with MANet, run this command:

python train.py --opt options/train/train_stage3.yml

All trained models can be downloaded from the release page. For testing, downloading the stage 3 model is enough.

Testing

To test MANet (stage1, kernel estimation only), run this command:

python test.py --opt options/test/test_stage1.yml

To test RRDB-SFT (stage2, non-blind SR with ground-truth kernel), run this command:

python test.py --opt options/test/test_stage2.yml

To test MANet+RRDB (stage3, blind SR), run this command:

python test.py --opt options/test/test_stage3.yml

Note: the above commands generate LR images on the fly. To generate the testing sets used in the paper, run this command:

python prepare_testset.py --opt options/test/prepare_testset.yml
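
As a rough illustration of the degradation model (not the exact pipeline in prepare_testset.py, which handles spatially variant kernels and is configured by the yml options), an LR image can be synthesized by blurring an HR image with a Gaussian kernel and downscaling bicubically:

# Sketch of synthesizing an x4 LR image: Gaussian blur followed by bicubic
# downsampling. The kernel here is isotropic and spatially invariant for brevity.
import cv2

hr = cv2.imread('datasets/DIV2K/HR/0801.png')      # HR image from Data Preparation
k1d = cv2.getGaussianKernel(ksize=21, sigma=2.0)   # 21x1 Gaussian
kernel = k1d @ k1d.T                               # 21x21 isotropic kernel
blurred = cv2.filter2D(hr, -1, kernel)
lr = cv2.resize(blurred, None, fx=0.25, fy=0.25, interpolation=cv2.INTER_CUBIC)
cv2.imwrite('lr_0801_x4.png', lr)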

Interactive Exploration of Kernels

To explore spatially variant kernels on an image, use --save_kernel and run this command to save the kernels:

python test.py --opt options/test/test_stage1.yml --save_kernel

Then, run this command to create an interactive window:

python interactive_explore.py --path ../results/001_MANet_aniso_x4_test_stage1/toy_dataset1/npz/toy1.npz
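
If you only want to inspect the saved kernels without the interactive window, a minimal sketch is to list the arrays stored in the .npz file; the array names depend on what test.py saved, so none are assumed here.

# List what the saved .npz contains (array names, shapes, dtypes).
import numpy as np

data = np.load('../results/001_MANet_aniso_x4_test_stage1/toy_dataset1/npz/toy1.npz')
for name in data.files:
    print(name, data[name].shape, data[name].dtype)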

Results

We conducted experiments on both spatially variant and invariant blind SR. Please refer to the paper and supplementary for results.

Citation

@inproceedings{liang21manet,
  title={Mutual Affine Network for Spatially Variant Kernel Estimation in Blind Image Super-Resolution},
  author={Liang, Jingyun and Sun, Guolei and Zhang, Kai and Van Gool, Luc and Timofte, Radu},
  booktitle={IEEE International Conference on Computer Vision},
  year={2021}
}

License & Acknowledgement

This project is released under the Apache 2.0 license. The code is based on BasicSR, MMSR, IKC and KAIR. Please also follow their licenses. Thanks for their great work.

Comments
  • Training and OOM

    Thanks for your code. I tried to train the model with train_stage1.yml and got a CUDA out-of-memory error. I am using a 2080 Ti; I tried reducing the batch size from 16 to 2 and GT_size from 192 to 48, but training still runs out of memory. May I know if there is anything I missed? Thanks.

    opened by hcleung3325 9
  • [How to get SR image by spatially variant estimated blur kernels]

    Hi, thank you for your excellent and interesting work! After reading your paper, I'm still not clear about how SR reconstruction proceeds after kernel estimation. Could you please explain?

    opened by CaptainEven 7
  • The method of creating kernels

    I noticed that the function for creating kernels ('anisotropic_gaussian_kernel_matlab') differs from the standard Gaussian formulation (e.g., the one used in IKC, https://github.com/yuanjunchai/IKC/blob/2a846cf1194cd9bace08973d55ecd8fd3179fe48/codes/utils/util.py#L244). I am wondering why a different formulation is used here. In fact, a test dataset created by IKC with the same sigma range seems to perform poorly with MANet, and vice versa. (A standard scipy.stats-based kernel is sketched after this comment list for comparison.)

    opened by zhiqiangfu 3
  • [import error]

        k = scipy.stats.multivariate_normal.pdf(pos, mean=[0, 0], cov=cov)
    AttributeError: module 'scipy' has no attribute 'stats'
    

    Is this a scipy version issue? Which version of scipy is required? (See the sketch after this comment list, which imports scipy.stats explicitly.)

    opened by CaptainEven 2
  • A letter from afar

    Good evening! I recently discovered your work on MANet. I found that the length of the Gaussian kernel generated by your method is equal to 18. Does this setting have any specific meaning?

    opened by fenghao195 0
  • New Super-Resolution Benchmarks

    Hello,

    MSU Graphics & Media Lab Video Group has recently launched two new Super-Resolution Benchmarks.

    If you are interested in participating, you can add your algorithm by following the submission steps.

    We would be grateful for your feedback on our work!

    opened by EvgeneyBogatyrev 0
  • About LR_Image PSNR/SSIM

    Many thanks for your excellent work!

    I wonder what the LR_Image PSNR/SSIM in the ablation study (used to evaluate MANet's kernel prediction) refers to, and how it is computed?

    opened by Shaosifan 0
  • Questions about the paper

    Thanks again for your great work. I have several questions about the paper. In Figure 2, you mention that the input to MANet is an LR image, but the input in your code seems to be the DIV2K GT. Is there any further processing I missed? Also, is it possible to train the whole model on the y-channel, since my deployment environment only handles the y-channel? Thanks.

    opened by mrgreen3325 0
  • Issue about class BatchBlur_SV in utils.util

    In MANet/codes/utils/util.py, line 661:

        kernel = kernel.flatten(2).unsqueeze(0).expand(3, -1, -1, -1)

    gives the kernel shape [B, HW, l, l] -> [B, HW, l^2] -> [1, B, HW, l^2] -> [C, B, HW, l^2]. I think this is wrong, because it does not correspond to the shape of pad. Line 661 should be

        kernel = kernel.flatten(2).unsqueeze(1).expand(-1, 3, -1, -1)

    which gives the kernel shape [B, HW, l, l] -> [B, HW, l^2] -> [B, 1, HW, l^2] -> [B, C, HW, l^2].

    opened by jiangmengyu18 0
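
Regarding the kernel-construction and scipy import comments above, here is a minimal sketch of a standard anisotropic Gaussian kernel built with scipy.stats. It is a generic formulation for comparison, not necessarily identical to anisotropic_gaussian_kernel_matlab in codes/utils/util.py; note that scipy.stats must be imported explicitly, because "import scipy" alone does not expose the stats submodule.

# Standard anisotropic Gaussian kernel, for comparison with
# anisotropic_gaussian_kernel_matlab in codes/utils/util.py.
import numpy as np
import scipy.stats  # "import scipy" alone does not expose scipy.stats

def standard_anisotropic_gaussian_kernel(size=21, sigma_x=3.0, sigma_y=1.0, theta=0.0):
    # covariance from per-axis sigmas and a rotation angle theta
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    cov = rot @ np.diag([sigma_x ** 2, sigma_y ** 2]) @ rot.T
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    pos = np.stack([xx, yy], axis=-1)  # (size, size, 2) grid of offsets
    k = scipy.stats.multivariate_normal.pdf(pos, mean=[0, 0], cov=cov)
    return k / k.sum()                 # normalize the kernel to sum to 1
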
Owner
Jingyun Liang
PhD Student at Computer Vision Lab, ETH Zurich