Image Super-Resolution Using Very Deep Residual Channel Attention Networks

Overview

Paper title: Image Super-Resolution Using Very Deep Residual Channel Attention Networks

Contents

1. Introduction
2. Datasets and Reproduced Accuracy
3. Getting Started
4. Code Structure and Details
5. Super-Resolution Results of the Reproduced Model
6. Reproduction Information

1. Introduction

This project reproduces the paper by Yulun Zhang, Kunpeng Li, Kai Li, Lichen Wang, Bineng Zhong, and Yun Fu, published at ECCV 2018. The authors propose a deep residual channel attention network (RCAN). In particular, they design a residual-in-residual (RIR) structure to build a very deep network: each RIR consists of several residual groups (RG) with a long skip connection (LSC), and each RG contains a number of residual blocks with a short skip connection (SSC). The RIR structure lets abundant low-frequency information flow directly through the skip connections, so the main network can focus on learning high-frequency information. In addition, the authors propose a channel attention (CA) mechanism that adaptively rescales features by modeling the interdependencies between channels.
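
The channel attention (CA) block squeezes each feature map to a single statistic with global average pooling and then rescales the channels with a small gating sub-network. The following is a minimal PaddlePaddle sketch of such a block for illustration only; the class name, the reduction ratio of 16, and the layer choices are assumptions, not the exact code of the ppgan implementation in this repo.

```python
import paddle
import paddle.nn as nn

class ChannelAttention(nn.Layer):
    """Illustrative channel-attention block: global pooling + two 1x1 convs + sigmoid gate."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2D(1)  # squeeze each HxW feature map to one value
        self.gate = nn.Sequential(
            nn.Conv2D(channels, channels // reduction, 1),
            nn.ReLU(),
            nn.Conv2D(channels // reduction, channels, 1),
            nn.Sigmoid(),  # per-channel weights in (0, 1)
        )

    def forward(self, x):
        w = self.gate(self.pool(x))  # (N, C, 1, 1) attention weights
        return x * w                 # rescale each channel of the input

# Example: rescale a batch of 64-channel feature maps
feat = paddle.randn([2, 64, 48, 48])
print(ChannelAttention(64)(feat).shape)  # [2, 64, 48, 48]
```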

Paper: Image Super-Resolution Using Very Deep Residual Channel Attention Networks

Reference repo: RCAN

Many thanks to yulunzhang, MaFuyan, joaoherrera and others for their contributions to RCAN, which greatly sped up this reproduction.

AI Studio tutorial: Reproducing RCAN with PaddleGAN

2. Datasets and Reproduced Accuracy

The training and test sets used in this project, together with their download links, are listed below:

Name | Dataset | Description | Download
2K Resolution | DIV2K | proposed in NTIRE17 (800 train and 100 validation) | official website
Classical SR Testing | Set5 | Set5 test dataset | Google Drive / Baidu Drive
Classical SR Testing | Set14 | Set14 test dataset | Google Drive / Baidu Drive

The DIV2K, Set5 and Set14 datasets are organized as follows:

  PaddleGAN
    ├── data
        ├── DIV2K
              ├── DIV2K_train_HR
              ├── DIV2K_train_LR_bicubic
              |    ├──X2
              |    ├──X3
              |    └──X4
              ├── DIV2K_valid_HR
              ├── DIV2K_valid_LR_bicubic
        ├── Set5
              ├── GTmod12
              ├── LRbicx2
              ├── LRbicx3
              ├── LRbicx4
              └── original
        ├── Set14
              ├── GTmod12
              ├── LRbicx2
              ├── LRbicx3
              ├── LRbicx4
              └── original
            ...
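
The DIV2K_*_LR_bicubic/X2-X4 and LRbicx2-4 folders contain low-resolution images produced by bicubic downsampling of the HR images. If you need to build such folders for your own data, a rough sketch using opencv-python (already in requirements.txt) is shown below. Note that the official DIV2K / Set LR images were generated with MATLAB's imresize, so OpenCV's bicubic results will differ slightly, and the example paths are placeholders.

```python
import cv2  # opencv-python, listed in requirements.txt

def make_lr_bicubic(hr_path, lr_path, scale=4):
    """Write a bicubically downsampled LR image for a given HR image."""
    hr = cv2.imread(hr_path)
    h, w = hr.shape[:2]
    hr = hr[: h - h % scale, : w - w % scale]  # crop so the size is divisible by the scale
    lr = cv2.resize(hr, (hr.shape[1] // scale, hr.shape[0] // scale),
                    interpolation=cv2.INTER_CUBIC)
    cv2.imwrite(lr_path, lr)

# e.g. make_lr_bicubic("data/DIV2K/DIV2K_train_HR/0001.png",
#                      "data/DIV2K/DIV2K_train_LR_bicubic/X4/0001x4.png")
```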

Accuracy (PSNR / SSIM) on Set14 of the model from the paper (trained with PyTorch) compared with the Paddle reproduction:

Framework | Set14 (PSNR / SSIM)
paddle | 29.02 / 0.7910
torch | 28.98 / 0.7910

Paddle model (.pdparams) download:

Model | Dataset | Download | Extraction code
rcan_x4 | DIV2K | rcan_x4 | 1ry9

3. Getting Started

3.1 Environment

  • Hardware: Tesla V100 * 1
  • Framework and dependencies:
    • PaddlePaddle >= 2.1.0
    • tqdm
    • PyYAML>=5.1
    • scikit-image>=0.14.0
    • scipy>=1.1.0
    • opencv-python
    • imageio==2.9.0
    • imageio-ffmpeg
    • librosa
    • numba==0.53.1
    • natsort
    • munch
    • easydict

After cloning this project, enter the project directory and install the dependencies with pip install -r requirements.txt.

3.2 Quick Start

Step 1: clone this repo

# clone this repo
git clone https://github.com/kongdebug/RCAN-Paddle.git
cd RCAN-Paddle

Step 2: install dependencies

pip install -r requirements.txt

Step 3: start training

Single-GPU training:

python -u tools/main.py --config-file configs/rcan_x4_div2k.yaml

Since this project does not use multi-GPU training, no related code is provided. If you want to use your own training and test sets, change the dataset paths in the configuration file to point to your own data.

To resume an interrupted training run:

python -u tools/main.py --config-file configs/rcan_x4_div2k.yaml --resume ${PATH_OF_CHECKPOINT}

Step 4: testing

  • Produce the predicted images
    • Download the reproduced Paddle model from Section 2, place it in a folder, and run the following command to get the model's test results
    • The Fig/visual folder already contains prediction results that can be used directly for accuracy evaluation
python -u tools/main.py --config-file configs/rcan_x4_div2k.yaml --evaluate-only --load ${PATH_OF_WEIGHT}
  • Evaluate the accuracy of the predicted images
    • After running the command above, the model's predictions are saved in the output_dir folder; then run the following command to evaluate the accuracy. Note: the --gt_dir and --output_dir arguments must be set to your actual paths.
python  tools/cal_psnr_ssim.py  --gt_dir data/Set14/GTmod12 --output_dir output_dir/rcan_x4_div2k*/visual_test
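
tools/cal_psnr_ssim.py compares every super-resolved image in the output directory with its ground truth. To spot-check a single pair by hand, a minimal sketch with scikit-image (already in the requirements) is shown below; the paths are placeholders, and the official script may evaluate on the Y channel and crop borders, which gives slightly different numbers.

```python
import cv2
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def eval_pair(gt_path, sr_path):
    """PSNR / SSIM for one ground-truth / super-resolved image pair (full RGB image)."""
    gt = cv2.imread(gt_path)
    sr = cv2.imread(sr_path)
    psnr = peak_signal_noise_ratio(gt, sr, data_range=255)
    # channel_axis needs scikit-image >= 0.19; use multichannel=True on older versions
    ssim = structural_similarity(gt, sr, channel_axis=2, data_range=255)
    return psnr, ssim

# e.g. eval_pair("data/Set14/GTmod12/baboon.png", "<output_dir>/visual_test/baboon.png")
```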

4. Code Structure and Details

4.1 Code Structure

├─applications                          
├─benchmark                        
├─deploy                         
├─configs                          
├─data                        
├─output_dir                         
├─ppgan       
├─tools
├─test_tipc
├─Figs
│  README_cn.md                     
│  requirements.txt                      
│  setup.py                                         

4.2 Structure Description

This project is built on PaddleGAN. rcan_x4_div2k.yaml in the configs folder is the training configuration file; its format follows the SISR task in PaddleGAN and its parameters match the paper. The data folder stores the training and test data. The output_dir folder stores files produced during runs and is initially empty. test_tipc is the folder used for exporting the model for inference and for TIPC testing.

4.3 Model Export and Deployment

  • After training finishes you get the rcan_checkpoint.pdparams file, which needs to be exported to an inference model:
python3.7 tools/export_model.py -c configs/rcan_x4_div2k.yaml --inputs_size="-1,3,-1,-1" --load output_dir/rcan_checkpoint.pdparams --output_dir ./test_tipc/output/rcan_x4
  • With the model files above, run prediction on the test data with Paddle Inference.
    • Put the inference files exported in the previous step (.pdmodel, .pdiparams and .pdiparams.info) into the test_tipc/output/rcan_x4 folder. Note: the file names are all basesrmodel_generator.
    • Run the following command; the prediction results are written to the test_tipc/output/ folder.
python3.7 tools/inference.py --model_type rcan --seed 123 -c configs/rcan_x4_div2k.yaml --output_path test_tipc/output/ --device=gpu --model_path=./test_tipc/output/rcan_x4/basesrmodel_generator
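
tools/inference.py wraps the Paddle Inference API. Conceptually, the prediction step loads the exported basesrmodel_generator.pdmodel / .pdiparams pair, feeds a normalized LR image, and reads back the super-resolved output. A simplified hand-rolled sketch is shown below; the pre/post-processing (value range, channel order) and the sample image path are assumptions and may differ from the real script.

```python
import numpy as np
import cv2
from paddle.inference import Config, create_predictor

model_dir = "test_tipc/output/rcan_x4"
config = Config(model_dir + "/basesrmodel_generator.pdmodel",
                model_dir + "/basesrmodel_generator.pdiparams")
config.enable_use_gpu(200, 0)  # workspace in MB, GPU id; remove this line to run on CPU
predictor = create_predictor(config)

# NCHW float32 input in [0, 1]; the real script's normalization may differ.
lr = cv2.imread("data/Set14/LRbicx4/baboon.png").astype("float32") / 255.0
x = np.ascontiguousarray(lr.transpose(2, 0, 1)[None])

in_handle = predictor.get_input_handle(predictor.get_input_names()[0])
in_handle.copy_from_cpu(x)
predictor.run()
out_handle = predictor.get_output_handle(predictor.get_output_names()[0])
sr = out_handle.copy_to_cpu()[0].transpose(1, 2, 0)  # back to HWC

cv2.imwrite("sr_result.png", np.clip(sr * 255.0, 0, 255).astype("uint8"))
```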

4.4 TIPC Test Support

The test_tipc folder is organized as follows:

test_tipc/
├── configs/  # configuration directory
    ├── rcan
        ├── train_infer_python.txt      # config for the basic Linux python train/infer test
        ├── train_infer_python_resume.txt      # config for the (basic train/infer) test that loads a trained model
├── output/   # prediction results
├── common_func.sh    # common helper functions
├── prepare.sh                        # downloads the required data and models
├── test_train_inference_python.sh    # main script for the python train/infer test
├── readme.md                # notes on the dependencies required by the TIPC basic chain test

Note: this project only provides the code and documentation for the lite_train_lite_infer mode of the TIPC basic test chain. Before running, open the .sh files in vim and check their line-ending format; it must be "fileformat=unix".

If the training data is not prepared yet, run prepare.sh to download the DIV2K training data, unzip it, and arrange the files as shown in Section 2. If the data is ready, run the following commands to complete the TIPC basic test:

  • Training from scratch:
 bash test_tipc/test_train_inference_python.sh ./test_tipc/configs/rcan/train_infer_python.txt 'lite_train_lite_infer'

Note that the configuration file used for this training test is rcan_x4_div2k_tipc.yaml in the configs folder, set up specifically for the from-scratch lite_train_lite_infer mode; it trains from scratch instead of loading a trained model, so the loss will be high. The results are written to the output folder, which already contains log files from a previous run.

  • Loading a trained model:
    • Put the downloaded rcan_checkpoint.pdparams file into the output_dir folder and rename it to iter_238000_checkpoint.pdparams
    • This test uses the rcan_x4_div2k.yaml file in the configs folder and the train_infer_python_resume.txt file
    • Run the following command:
bash test_tipc/test_train_inference_python.sh ./test_tipc/configs/rcan/train_infer_python_resume.txt 'lite_train_lite_infer'

After running the "load a trained model" command, you get the inference result images and the accuracy evaluation; both PSNR and SSIM meet the target.

5. Super-Resolution Results of the Reproduced Model

Low resolution | After super-resolution | High resolution

6. Reproduction Information

Information:

Item | Description
Author | 不想科研的Key.L
Date | November 2021
Framework version | PaddlePaddle==2.2.0
Application scenario | Image super-resolution
Supported hardware | GPU, CPU
Online demo | notebook