Beyond a Gaussian Denoiser: Residual Learning of Deep CNN for Image Denoising

Overview


News: DRUNet

PyTorch training and testing code - 18/12/2019

I recommend using the PyTorch code for training and testing. The model parameters of MatConvNet and PyTorch are the same.

Merge batch normalization (PyTorch)

import torch
import torch.nn as nn


def merge_bn(model):
    ''' merge all 'Conv+BN' (or 'TConv+BN') into 'Conv' (or 'TConv')
    based on https://github.com/pytorch/pytorch/pull/901
    by Kai Zhang ([email protected])
    https://github.com/cszn/DnCNN
    01/01/2019
    '''
    prev_m = None
    for k, m in list(model.named_children()):
        # fold a BN layer into the Conv/Linear/TConv layer that immediately precedes it
        if (isinstance(m, nn.BatchNorm2d) or isinstance(m, nn.BatchNorm1d)) and (isinstance(prev_m, nn.Conv2d) or isinstance(prev_m, nn.Linear) or isinstance(prev_m, nn.ConvTranspose2d)):

            w = prev_m.weight.data

            # the merged layer needs an explicit bias to absorb the BN shift
            if prev_m.bias is None:
                zeros = torch.Tensor(prev_m.out_channels).zero_().type(w.type())
                prev_m.bias = nn.Parameter(zeros)
            b = prev_m.bias.data

            # normalization part: w' = w / sqrt(var + eps), b' = (b - mean) / sqrt(var + eps)
            invstd = m.running_var.clone().add_(m.eps).pow_(-0.5)
            if isinstance(prev_m, nn.ConvTranspose2d):
                # transposed convs store weights as (in, out, kH, kW), so scale along dim 1
                w.mul_(invstd.view(1, w.size(1), 1, 1).expand_as(w))
            else:
                # note: the 4-D views and out_channels here assume conv weights;
                # a Linear+BN1d pair would need matching 2-D handling
                w.mul_(invstd.view(w.size(0), 1, 1, 1).expand_as(w))
            b.add_(-m.running_mean).mul_(invstd)
            # affine part: w'' = gamma * w', b'' = gamma * b' + beta
            if m.affine:
                if isinstance(prev_m, nn.ConvTranspose2d):
                    w.mul_(m.weight.data.view(1, w.size(1), 1, 1).expand_as(w))
                else:
                    w.mul_(m.weight.data.view(w.size(0), 1, 1, 1).expand_as(w))
                b.mul_(m.weight.data).add_(m.bias.data)

            # the BN layer is now redundant and can be dropped
            del model._modules[k]
        prev_m = m
        merge_bn(m)  # recurse into child modules


def tidy_sequential(model):
    # after merging, unwrap any nn.Sequential that holds a single module
    for k, m in list(model.named_children()):
        if isinstance(m, nn.Sequential):
            if len(m) == 1:
                model._modules[k] = m[0]
        tidy_sequential(m)
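
A minimal usage sketch follows. The model class and checkpoint path are placeholders, not part of the snippet above; calling merge_bn before tidy_sequential is the intended order, since merging can leave single-module Sequential wrappers behind.

# usage sketch: fold BN into the convolutions of a trained Conv+BN model
model = DnCNN()                                   # hypothetical model class
model.load_state_dict(torch.load('dncnn.pth'))    # hypothetical checkpoint path
model.eval()                                      # folding uses the BN running statistics
merge_bn(model)                                   # absorb every Conv+BN pair into the Conv
tidy_sequential(model)                            # unwrap length-1 Sequential containers
# the merged model computes the same function with fewer layers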

Training (MatConvNet)

Testing (MatConvNet or MATLAB)

  • [demos] Demo_test_DnCNN-.m.

  • [models] the trained models for Gaussian denoising, plus a single model for Gaussian denoising, single image super-resolution (SISR) and JPEG image deblocking.

  • [testsets] BSD68 and Set12 for Gaussian denoising evaluation; Set5, Set14, BSD100 and Urban100 datasets for SISR evaluation; Classic5 and LIVE1 for JPEG image deblocking evaluation.

New FDnCNN Models

I have trained new Flexible DnCNN (FDnCNN) models based on FFDNet.

FDnCNN can handle the noise level range [0, 75] via a single model; see the input-construction sketch after the demo list below.

Demo_FDnCNN_Gray.m

Demo_FDnCNN_Gray_Clip.m

Demo_FDnCNN_Color.m

Demo_FDnCNN_Color_Clip.m
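
Since FDnCNN is based on FFDNet, the model is conditioned on the noise level. The sketch below shows FFDNet-style input construction, where a uniform noise-level map is concatenated with the noisy image; the tensor shapes and the fdncnn handle are assumptions for illustration.

import torch

noisy = torch.rand(1, 1, 256, 256)               # grayscale noisy image in [0, 1]
sigma = 25.0 / 255.0                             # target noise level from the [0, 75] range
noise_map = torch.full((1, 1, 256, 256), sigma)  # uniform per-pixel noise-level map
x_in = torch.cat((noisy, noise_map), dim=1)      # model input of shape (1, 2, H, W)
# denoised = fdncnn(x_in)                        # fdncnn: a trained FDnCNN model (placeholder)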

Network Architecture and Design Rationale

  • Network Architecture

  • Batch normalization and residual learning are beneficial to Gaussian denoising (especially for a single noise level). The residual of a noisy image corrupted by additive white Gaussian noise (AWGN) follows a constant Gaussian distribution, which stabilizes batch normalization during training.

    • Histograms of noisy patches, clean patches, and residual (noise) patches from two different training batches (noise level 25, patch size 40x40, batch size 128); the histogram of the residual patches stays essentially the same from batch to batch.
    • Noise-free image super-resolution does not have this property.
  • Predicting the residual can be interpreted as performing one gradient descent inference step starting from the noisy image; a minimal architecture sketch follows this list.

    • The parameters in DnCNN mainly represent the image priors (task-independent), so it is possible to learn a single model for different tasks, such as image denoising, image super-resolution and JPEG image deblocking.

    • The left is the input image corrupted by different degradations; the right is the image restored by the single DnCNN-3 model.
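
For reference, here is a minimal PyTorch sketch of the residual-learning architecture; the defaults mirror the paper's grayscale setting (17 layers, 64 feature channels, 3x3 convolutions). This is an illustration, not the repository's training code.

import torch
import torch.nn as nn


class DnCNN(nn.Module):
    # first layer: Conv+ReLU; middle layers: Conv+BN+ReLU; last layer: Conv
    def __init__(self, depth=17, channels=64, image_channels=1):
        super().__init__()
        layers = [nn.Conv2d(image_channels, channels, 3, padding=1),
                  nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(channels, channels, 3, padding=1, bias=False),
                       nn.BatchNorm2d(channels),
                       nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(channels, image_channels, 3, padding=1)]
        self.body = nn.Sequential(*layers)

    def forward(self, noisy):
        residual = self.body(noisy)  # the network predicts the noise, not the clean image
        return noisy - residual      # subtracting the residual yields the denoised image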

Results

Gaussian Denoising

The average PSNR(dB) results of different methods on the BSD68 dataset.

| Noise Level | BM3D | WNNM | EPLL | MLP | CSF | TNRD | DnCNN | DnCNN-B | FDnCNN | DRUNet |
|---|---|---|---|---|---|---|---|---|---|---|
| 15 | 31.07 | 31.37 | 31.21 | - | 31.24 | 31.42 | 31.73 | 31.61 | 31.69 | 31.91 |
| 25 | 28.57 | 28.83 | 28.68 | 28.96 | 28.74 | 28.92 | 29.23 | 29.16 | 29.22 | 29.48 |
| 50 | 25.62 | 25.87 | 25.67 | 26.03 | - | 25.97 | 26.23 | 26.23 | 26.27 | 26.59 |

Visual Results

The left is the noisy image corrupted by AWGN, the middle is the image denoised by DnCNN, and the right is the ground truth.

Gaussian Denoising, Single Image Super-Resolution and JPEG Image Deblocking via a Single (DnCNN-3) Model

Average PSNR(dB)/SSIM results of different methods for Gaussian denoising with noise levels 15, 25 and 50 on the BSD68 dataset; single image super-resolution with upscaling factors 2, 3 and 4 on the Set5, Set14, BSD100 and Urban100 datasets; and JPEG image deblocking with quality factors 10, 20, 30 and 40 on the Classic5 and LIVE1 datasets.

Gaussian Denoising

| Dataset | Noise Level | BM3D | TNRD | DnCNN-3 |
|---|---|---|---|---|
| BSD68 | 15 | 31.08 / 0.8722 | 31.42 / 0.8826 | 31.46 / 0.8826 |
| BSD68 | 25 | 28.57 / 0.8017 | 28.92 / 0.8157 | 29.02 / 0.8190 |
| BSD68 | 50 | 25.62 / 0.6869 | 25.97 / 0.7029 | 26.10 / 0.7076 |

Single Image Super-Resolution

| Dataset | Upscaling Factor | TNRD | VDSR | DnCNN-3 |
|---|---|---|---|---|
| Set5 | 2 | 36.86 / 0.9556 | 37.56 / 0.9591 | 37.58 / 0.9590 |
| Set5 | 3 | 33.18 / 0.9152 | 33.67 / 0.9220 | 33.75 / 0.9222 |
| Set5 | 4 | 30.85 / 0.8732 | 31.35 / 0.8845 | 31.40 / 0.8845 |
| Set14 | 2 | 32.51 / 0.9069 | 33.02 / 0.9128 | 33.03 / 0.9128 |
| Set14 | 3 | 29.43 / 0.8232 | 29.77 / 0.8318 | 29.81 / 0.8321 |
| Set14 | 4 | 27.66 / 0.7563 | 27.99 / 0.7659 | 28.04 / 0.7672 |
| BSD100 | 2 | 31.40 / 0.8878 | 31.89 / 0.8961 | 31.90 / 0.8961 |
| BSD100 | 3 | 28.50 / 0.7881 | 28.82 / 0.7980 | 28.85 / 0.7981 |
| BSD100 | 4 | 27.00 / 0.7140 | 27.28 / 0.7256 | 27.29 / 0.7253 |
| Urban100 | 2 | 29.70 / 0.8994 | 30.76 / 0.9143 | 30.74 / 0.9139 |
| Urban100 | 3 | 26.42 / 0.8076 | 27.13 / 0.8283 | 27.15 / 0.8276 |
| Urban100 | 4 | 24.61 / 0.7291 | 25.17 / 0.7528 | 25.20 / 0.7521 |

JPEG Image Deblocking

| Dataset | Quality Factor | AR-CNN | TNRD | DnCNN-3 |
|---|---|---|---|---|
| Classic5 | 10 | 29.03 / 0.7929 | 29.28 / 0.7992 | 29.40 / 0.8026 |
| Classic5 | 20 | 31.15 / 0.8517 | 31.47 / 0.8576 | 31.63 / 0.8610 |
| Classic5 | 30 | 32.51 / 0.8806 | 32.78 / 0.8837 | 32.91 / 0.8861 |
| Classic5 | 40 | 33.34 / 0.8953 | - | 33.77 / 0.9003 |
| LIVE1 | 10 | 28.96 / 0.8076 | 29.15 / 0.8111 | 29.19 / 0.8123 |
| LIVE1 | 20 | 31.29 / 0.8733 | 31.46 / 0.8769 | 31.59 / 0.8802 |
| LIVE1 | 30 | 32.67 / 0.9043 | 32.84 / 0.9059 | 32.98 / 0.9090 |
| LIVE1 | 40 | 33.63 / 0.9198 | - | 33.96 / 0.9247 |

Requirements and Dependencies

MatConvNet (with MATLAB) to train the models; or just MATLAB R2015b to test the model: https://github.com/cszn/DnCNN/blob/4a4b5b8bcac5a5ac23433874d4362329b25522ba/Demo_test_DnCNN.m#L64-L65

Citation

@article{zhang2017beyond,
  title={Beyond a {Gaussian} denoiser: Residual learning of deep {CNN} for image denoising},
  author={Zhang, Kai and Zuo, Wangmeng and Chen, Yunjin and Meng, Deyu and Zhang, Lei},
  journal={IEEE Transactions on Image Processing},
  year={2017},
  volume={26}, 
  number={7}, 
  pages={3142--3155},
}
@article{zhang2020plug,
  title={Plug-and-Play Image Restoration with Deep Denoiser Prior},
  author={Zhang, Kai and Li, Yawei and Zuo, Wangmeng and Zhang, Lei and Van Gool, Luc and Timofte, Radu},
  journal={arXiv preprint},
  year={2020}
}

====================================================================

Convolutional Neural Networks for Image Denoising and Restoration

@Inbook{zuo2018convolutional,
author={Zuo, Wangmeng and Zhang, Kai and Zhang, Lei},
editor={Bertalm{\'i}o, Marcelo},
title={Convolutional Neural Networks for Image Denoising and Restoration},
bookTitle={Denoising of Photographic Images and Video: Fundamentals, Open Challenges and New Trends},
year={2018},
publisher={Springer International Publishing},
address={Cham},
pages={93--123},
isbn={978-3-319-96029-6},
doi={10.1007/978-3-319-96029-6_4},
url={https://doi.org/10.1007/978-3-319-96029-6_4}
}

Challenges and Possible Solutions (from the above book chapter)

While image denoising for AWGN removal has been well studied, little work has been done on real image denoising. The main difficulty arises from the fact that real noise is much more complex than AWGN, and it is not an easy task to thoroughly evaluate the performance of a denoiser. Fig. 4.15 shows four typical noise types in the real world. It can be seen that the characteristics of those noises are very different, and a single noise level may not be enough to parameterize them. In most cases, a denoiser works well only under a certain noise model. For example, a denoising model trained for AWGN removal is not effective for removing mixed Gaussian and Poisson noise. This is intuitively reasonable because CNN-based methods can be treated as a general case of Eq. (4.3), where the important data fidelity term corresponds to the degradation process. In spite of this, image denoising for AWGN removal remains valuable for two reasons. First, it is an ideal test bed for evaluating the effectiveness of different CNN-based denoising methods. Second, in unrolled inference via variable-splitting techniques, many image restoration problems can be addressed by sequentially solving a series of Gaussian denoising subproblems, which further broadens the field of application.
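To make the last point concrete, the sketch below shows a generic plug-and-play restoration loop in which the prior subproblem at each iteration is a Gaussian denoising step. The degradation operators A/At, the step size, and the noise-level schedule are illustrative assumptions, not the chapter's algorithm.

def plug_and_play(y, A, At, denoiser, sigmas, step=1.0):
    # schematic solver for y = A(x) + n, where A/At/denoiser are callables:
    # alternate a gradient step on the data term with a Gaussian denoising
    # step that plays the role of the image prior
    x = At(y)                        # crude initialization
    for sigma in sigmas:             # typically a decreasing noise-level schedule
        x = x - step * At(A(x) - y)  # data subproblem: move toward consistency with y
        x = denoiser(x, sigma)       # prior subproblem: Gaussian denoising
    return x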

To improve the practicability of a CNN denoiser, perhaps the most straightforward way is to capture adequate amounts of real noisy-clean training pairs so that the real degradation space is covered. This solution has the advantage that there is no need to know the complex degradation process. However, deriving the clean image corresponding to a noisy one is not a trivial task, as it requires careful post-processing steps such as spatial alignment and illumination correction. Alternatively, one can simulate the real degradation process to synthesize noisy images from clean ones. However, it is not easy to accurately model the complex degradation process; in particular, the noise model can differ across cameras. Nevertheless, it is practically preferable to roughly model a certain noise type for training and then use the learned CNN model for type-specific denoising.

Besides the training data, a robust architecture and robust training also play vital roles in the success of a CNN denoiser. For the architecture, designing a deep multiscale CNN that involves a coarse-to-fine procedure is a promising direction. Such a network is expected to inherit the merits of multiscale processing: (i) the noise level decreases at larger scales; (ii) the ubiquitous low-frequency noise can be alleviated by the multiscale procedure; and (iii) downsampling the image before denoising effectively enlarges the receptive field. For robust training, the effectiveness of denoisers trained with generative adversarial networks (GANs) for real image denoising still requires further investigation. The main idea of GAN-based denoising is to introduce an adversarial loss to improve the perceptual quality of the denoised image. A distinctive advantage of GANs is that they allow unsupervised learning: noisy images without ground truth can be used in training. We should note that the solutions above can be combined to further improve performance.
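As a sketch of the GAN idea above, a denoiser G can be trained with a reconstruction loss plus an adversarial term from a discriminator D; the networks, the loss weighting, and the non-saturating formulation here are assumptions, not a prescription from the chapter.

import torch
import torch.nn.functional as F


def denoiser_gan_loss(G, D, noisy, clean, adv_weight=0.01):
    denoised = G(noisy)
    rec = F.mse_loss(denoised, clean)  # pixel-wise fidelity to the clean target
    logits = D(denoised)               # discriminator score for the denoised output
    adv = F.binary_cross_entropy_with_logits(logits, torch.ones_like(logits))
    return rec + adv_weight * adv      # adversarial term improves perceptual quality

Noisy images without a ground-truth counterpart would contribute only through the adversarial term, which is what enables the unsupervised use mentioned above.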
