Official PyTorch code for Hierarchical Conditional Flow: A Unified Framework for Image Super-Resolution and Image Rescaling (HCFlow, ICCV2021)

Overview


This repository is the official PyTorch implementation of Hierarchical Conditional Flow: A Unified Framework for Image Super-Resolution and Image Rescaling (arxiv, supp).

🚀 🚀 🚀 News:


Normalizing flows have recently demonstrated promising results for low-level vision tasks. For image super-resolution (SR), a flow model learns to predict diverse photo-realistic high-resolution (HR) images from the low-resolution (LR) image rather than learning a deterministic mapping. For image rescaling, it achieves high accuracy by jointly modelling the downscaling and upscaling processes. While existing approaches employ specialized techniques for these two tasks, we set out to unify them in a single formulation. In this paper, we propose the hierarchical conditional flow (HCFlow) as a unified framework for image SR and image rescaling. More specifically, HCFlow learns a bijective mapping between HR and LR image pairs by modelling the distribution of the LR image and the remaining high-frequency component simultaneously. In particular, the high-frequency component is conditional on the LR image in a hierarchical manner. To further enhance the performance, other losses such as perceptual loss and GAN loss are combined with the commonly used negative log-likelihood loss in training. Extensive experiments on general image SR, face image SR and image rescaling have demonstrated that the proposed HCFlow achieves state-of-the-art performance in terms of both quantitative metrics and visual quality.
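As a rough illustration of the training objective described above, the sketch below combines a negative log-likelihood term with perceptual and GAN losses. It is only a minimal sketch under assumed interfaces: flow_model.log_prob, flow_model.sample, discriminator and the loss weights are hypothetical names for illustration, not this repository's API; see the training YAML files in options/train for the actual loss configuration.

# Minimal sketch of the combined objective (NLL + perceptual + GAN) described above.
# `flow_model`, `discriminator`, `lambda_percep` and `lambda_gan` are hypothetical
# names for illustration, not the repository's API.
import torch
import torch.nn.functional as F
import lpips  # perceptual metric, listed in requirements.txt

percep_fn = lpips.LPIPS(net='vgg')  # VGG-based perceptual distance

def training_loss(flow_model, discriminator, hr, lr,
                  lambda_percep=0.1, lambda_gan=0.01):
    # Negative log-likelihood of the HR image under the flow, conditioned on the LR image.
    nll = -flow_model.log_prob(hr, lr).mean()

    # Draw a super-resolved sample from the flow for the perceptual and GAN terms.
    sr = flow_model.sample(lr)
    percep = percep_fn(sr, hr).mean()             # perceptual loss (lpips expects [-1, 1] inputs)
    gan = F.softplus(-discriminator(sr)).mean()   # non-saturating adversarial loss

    return nll + lambda_percep * percep + lambda_gan * gan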

         

Requirements

  • Python 3.7, PyTorch == 1.7.1
  • Requirements: opencv-python, lpips, natsort, etc.
  • Platforms: Ubuntu 16.04, cuda-11.0
cd HCFlow-master
pip install -r requirements.txt 

Quick Run (takes about 1 minute)

To run the code without preparing data, use one of the following commands:

cd codes
# face image SR
python test_HCFlow.py --opt options/test/test_SR_CelebA_8X_HCFlow.yml

# general image SR
python test_HCFlow.py --opt options/test/test_SR_DF2K_4X_HCFlow.yml

# image rescaling
python test_HCFlow.py --opt options/test/test_Rescaling_DF2K_4X_HCFlow.yml

Data Preparation

The framework of this project is based on MMSR and SRFlow. To prepare data, put the training and testing sets in ./datasets, e.g. ./datasets/DIV2K/HR/0801.png. Commonly used SR datasets can be downloaded here. There are two ways to accelerate data loading: first, use ./scripts/png2npy.py to generate .npy files and load them with data/GTLQnpy_dataset.py; second, use the .pklv4 dataset format (recommended) together with data/LRHR_PKL_dataset.py. Please refer to SRFlow for more details. Prepared datasets can be downloaded here.
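As a reference, the .npy conversion essentially amounts to reading each image with OpenCV and saving the raw array. The sketch below is only an approximation with placeholder paths; prefer ./scripts/png2npy.py, which matches the data loaders in data/.

# Rough sketch of PNG -> .npy conversion for faster data loading.
# Paths are placeholders; prefer ./scripts/png2npy.py, which matches data/GTLQnpy_dataset.py.
import glob
import os
import cv2
import numpy as np

src_dir = './datasets/DIV2K/HR'       # placeholder input folder
dst_dir = './datasets/DIV2K/HR_npy'   # placeholder output folder
os.makedirs(dst_dir, exist_ok=True)

for png_path in sorted(glob.glob(os.path.join(src_dir, '*.png'))):
    img = cv2.imread(png_path, cv2.IMREAD_UNCHANGED)        # HxWxC uint8 array (BGR)
    name = os.path.splitext(os.path.basename(png_path))[0]
    np.save(os.path.join(dst_dir, name + '.npy'), img)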

Training

To train HCFlow for general image SR, face image SR, or image rescaling, run one of the following commands:

cd codes

# face image SR
python train_HCFlow.py --opt options/train/train_SR_CelebA_8X_HCFlow.yml

# general image SR
python train_HCFlow.py --opt options/train/train_SR_DF2K_4X_HCFlow.yml

# image rescaling
python train_HCFlow.py --opt options/train/train_Rescaling_DF2K_4X_HCFlow.yml

All trained models can be downloaded from here.

Testing

Please follow the Quick Run section. Just modify the dataset paths in the corresponding options/test/test_*_HCFlow.yml file.
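The test configs are plain YAML, so the dataset paths can also be inspected or updated from Python. In the sketch below, the dataroot_GT/dataroot_LQ key names follow the MMSR convention this project builds on and are an assumption; verify them against the actual YAML file.

# Sketch: list (and optionally update) dataset paths in a test config.
# The key names dataroot_GT / dataroot_LQ follow the MMSR convention and are
# assumptions here; verify them against the YAML file before relying on this.
import yaml

cfg_path = 'options/test/test_SR_DF2K_4X_HCFlow.yml'
with open(cfg_path) as f:
    cfg = yaml.safe_load(f)   # assumes the YAML uses no custom tags

for name, dataset in cfg.get('datasets', {}).items():
    print(name, dataset.get('dataroot_GT'), dataset.get('dataroot_LQ'))
    # dataset['dataroot_LQ'] = '/path/to/your/LR'   # point at your own data, then re-dump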

Results

We achieved state-of-the-art performance on general image SR, face image SR and image rescaling.

For more results, please refer to the paper and the supplementary material.

Citation

@inproceedings{liang21hcflow,
  title={Hierarchical Conditional Flow: A Unified Framework for Image Super-Resolution and Image Rescaling},
  author={Liang, Jingyun and Lugmayr, Andreas and Zhang, Kai and Danelljan, Martin and Van Gool, Luc and Timofte, Radu},
  booktitle={IEEE International Conference on Computer Vision},
  year={2021}
}

License & Acknowledgement

This project is released under the Apache 2.0 license. The codes are based on MMSR, SRFlow, IRN and Glow-pytorch. Please also follow their licenses. Thanks for their great works.

Comments
  • Testing without GT

    Is there a way to run the test without GT? I just want to run inference with the model. I found a mode called LQ which (I think) should only load the images in the LR directory. But this mode gives me the error:

    assert real_crop * self.opt['scale'] * 2 > self.opt['kernel_size']
    TypeError: '>' not supported between instances of 'int' and 'NoneType'

    in LQ_dataset.py, line 88

    solved ✅ 
    opened by AhmedHashish123 4
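    A hedged note on the error above: the TypeError indicates that self.opt['kernel_size'] is None, i.e. no kernel_size is set in the LQ dataset config. One possible workaround (illustrative only, not the repository's actual fix) is to set a kernel_size in the YAML, or to skip the check when it is missing, roughly as sketched:

    # Illustrative guard, assuming the assertion in LQ_dataset.py fails only
    # because kernel_size is missing from the YAML options.
    def check_crop(real_crop, scale, kernel_size):
        if kernel_size is None:
            return  # no kernel configured; skip the constraint
        assert real_crop * scale * 2 > kernel_size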
  • Add Docker environment & web demo

    Hey @JingyunLiang !👋

    This pull request makes it possible to run your model inside a Docker environment, which makes it easier for other people to run it. We're using an open source tool called Cog to make this process easier.

    This also means we can make a web page where other people can try out your model! View it here: https://replicate.ai/jingyunliang/hcflow-sr, which currently supports Image Super-Resolution.

    Claim your page here so you can edit it, and we'll feature it on our website and tweet about it too.

    In case you're wondering who I am, I'm from Replicate, where we're trying to make machine learning reproducible. We got frustrated that we couldn't run all the really interesting ML work being done. So, we're going round implementing models we like. 😊

    opened by chenxwh 2
  • The code implementation and the paper description seem different

    Hi, your work is excellent, but there is one thing I don't understand.

    What is written in the paper is:

    "A diagonal covariance matrix with all diagonal elements close to zero"

    But the code implementation in HCFlowNet_SR_arch.py line 64 is: basic.GaussianDiag.logp(lr, -torch.ones_like(lr)*6, fake_lr_from_hr)

    Why use -torch.ones_like(lr)*6 as the covariance matrix? This seems to be inconsistent with the description in the paper.

    opened by xmyhhh 2
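    A hedged note on the question above: as far as I can tell, the second argument of GaussianDiag.logp is the log of the standard deviation rather than the covariance itself, so -torch.ones_like(lr)*6 corresponds to a per-pixel sigma of exp(-6) ≈ 0.0025, which matches the paper's "diagonal elements close to zero". A small illustrative check (not the repository's code):

    # Illustrative check: a log-sigma of -6 means a tiny standard deviation,
    # i.e. a near-delta Gaussian centred on the LR image.
    import math
    import torch

    lr = torch.randn(1, 3, 8, 8)          # stand-in for an LR image
    logs = -torch.ones_like(lr) * 6       # interpreted as log standard deviation
    print(float(logs.exp().mean()))       # ~0.00248, i.e. "close to zero"

    # Elementwise log-density of a diagonal Gaussian N(x; mean, exp(logs)^2)
    def gaussian_logp(mean, logs, x):
        return -0.5 * ((x - mean) / logs.exp()) ** 2 - logs - 0.5 * math.log(2 * math.pi)

    x = lr + 0.001 * torch.randn_like(lr)             # a sample very close to the mean
    print(float(gaussian_logp(lr, logs, x).mean()))   # large positive log-density per element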
  • environment

    ImportError: /home/hbw/gcc-build-5.4.0/lib64/libstdc++.so.6: version `GLIBCXX_3.4.22' not found (required by /home/hbw/anaconda3/lib/python3.8/site-packages/scipy/fft/_pocketfft/pypocketfft.cpython-38-x86_64-linux-gnu.so)

    Is this error due to my GCC version being too low? Which version did you use? Looking forward to your reply!

    opened by hbw945 2
  • Code versions of BRISQUE and NIQE used in paper

    Hi, I have run performance tests with the Matlab versions of the NIQE and BRISQUE codes and found deviations from the values reported in the paper. Could you please provide a link to the code you used? thanks a lot~

    solved ✅ 
    opened by xmyhhh 1
  • Update on Replicate demo

    Hello again @JingyunLiang :),

    This pull request does a few little things:

    • Updated the demo link with an icon in README as you suggested
    • A bugfix for cleaning temporary directory on cog

    We have added more functionality to the Example page of your model: as the owner of the page, you can now add and delete examples to customise the example gallery as you like.

    Also, you can run cog push if you would like to update this model or any other models on Replicate in the future 😄

    opened by chenxwh 1
  • About training and inference time?

    Thanks for your nice work!

    I would like to know how much time is needed for training and inference with your models.

    Furthermore, will information about params / FLOPs be reported?

    Thanks.

    solved ✅ 
    opened by TiankaiHang 1
  • RuntimeError: The size of tensor a (20) must match the size of tensor b (40) at non-singleton dimension 3

    Hi, I encountered this error when training HCFlowNet. I converted my .png dataset to a .pklv4 dataset and trained on Windows 10 with a single GPU. Could you please help me find the error? Thanks a lot.

    opened by William9Baker 0
  • How to build an invertible mapping between two variables whose dimensions are different ?

    Maybe this is a stupid question, but I have been puzzled for quite a long time. In the image super-resolution task, the input and output have different dimensions. How to build an invertible mapping between them? I notice that you calculate the determinant of the Jacobian, so I thought the mapping here is strictly invertible?

    opened by Wangbk-dl 0
  • How to make an invertible mapping between two variables whose dimensions are different ?

    Maybe this is a stupid question, but I have been puzzled for quite a long time. In the image super-resolution task, the input and output have different dimensions. How can we build such an invertible mapping between them? For example: suppose I have a low-resolution (LR) image x and an invertible function G. I can feed the LR image x into G and generate an HR image y. But can we ensure that we obtain an output identical to x when we feed y into G_inverse?

    y = G(x)
    x' = G_inverse(y) =? x

    I would appreciate it if you could offer some help.

    opened by Wangbk-dl 0
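    A hedged note on the two questions above: flow-based SR models are usually bijections between the HR image and the pair (LR image, latent z), so the dimensions do match once the latent carrying the high-frequency detail is counted. Going LR → HR means sampling a latent, and inverting that HR returns both the LR and the sampled latent. A toy element-count check (illustrative only, not the repository's code):

    # Toy check: an HR tensor has exactly as many elements as the LR tensor plus
    # the high-frequency latent, so a bijection HR <-> (LR, z) is dimensionally possible.
    # Shapes are illustrative for a 4x model, not taken from the repository.
    import torch

    scale = 4
    hr = torch.randn(1, 3, 64, 64)                                     # HR image
    lr = torch.randn(1, 3, 64 // scale, 64 // scale)                   # LR image
    z = torch.randn(1, 3 * scale**2 - 3, 64 // scale, 64 // scale)     # latent
    assert hr.numel() == lr.numel() + z.numel()                        # 12288 == 768 + 11520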
  • New Super-Resolution Benchmarks

    Hello,

    MSU Graphics & Media Lab Video Group has recently launched two new Super-Resolution Benchmarks.

    If you are interested in participating, you can add your algorithm following the submission steps:

    We would be grateful for your feedback on our work!

    opened by EvgeneyBogatyrev 0
  • Why NLL is negative during the training?

    Great work! During the training process, we found that the output NLL is negative. But theoretically, NLL should be positive. Is there any explanation for this?

    opened by IMSEMZPZ 0
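    A hedged note on this: for continuous data the NLL is the negative log of a probability density, not of a probability, and densities can exceed 1 (e.g. a narrow Gaussian), so a negative NLL is mathematically possible. A one-line illustration (not tied to this repository):

    # A density value can exceed 1, so its negative log-likelihood can be negative.
    import torch
    from torch.distributions import Normal

    dist = Normal(loc=0.0, scale=0.01)         # narrow Gaussian, pdf(0) ~= 39.9
    nll = -dist.log_prob(torch.tensor(0.0))    # ~= -3.69, a negative NLL
    print(float(nll))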
Owner
Jingyun Liang
PhD Student at Computer Vision Lab, ETH Zurich