WPPNets: Unsupervised CNN Training with Wasserstein Patch Priors for Image Superresolution

This code belongs to the paper [1], available at https://arxiv.org/abs/2201.08157. Please cite the paper if you use this code.

The repository contains an implementation of WPPNets as introduced in [1], including scripts for reproducing the numerical example on texture superresolution in [1, Section 5.2].

Moreover, the file wgenpatex.py is adapted from [2], available at https://github.com/johertrich/Wasserstein_Patch_Prior, which in turn builds on [3]. Furthermore, the folder model is adapted from [5], available at https://github.com/hellloxiaotian/ACNet.

The folders test_img and training_img contain parts of the textures from [4].

For questions and bug reports, please contact Fabian Altekrueger (fabian.altekrueger(at)hu-berlin.de).

CONTENTS

  1. REQUIREMENTS
  2. USAGE AND EXAMPLES
  3. REFERENCES

1. REQUIREMENTS

The code requires several Python packages. We tested the code with Python 3.9.7 and the following package versions:

  • pytorch 1.10.0
  • matplotlib 3.4.3
  • numpy 1.21.2
  • pykeops 1.5

The code is usually also compatible with other versions of these packages.
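
For reference, a possible way to install the tested versions with pip is sketched below (the appropriate pytorch and pykeops builds depend on your CUDA setup, so you may prefer the official installation instructions of those packages instead):

    pip install torch==1.10.0 matplotlib==3.4.3 numpy==1.21.2 pykeops==1.5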

2. USAGE AND EXAMPLES

You can start the training of a WPPNet by calling one of the scripts below. If you want to load an existing (pretrained) network instead of training, set retrain to False. Checkpoints are saved automatically during training so that the progress of the reconstructions can be observed. Feel free to vary the parameters and see what happens.
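
For example, training on the grass texture can be started from the command line (a sketch, assuming the dependencies above are installed and the training images are in place):

    python run_grass.py

If retrain is a variable defined inside the script (as the description above suggests), set retrain = False there before running it to load the provided network instead of training from scratch.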

TEXTURE GRASS

The script run_grass.py implements the superresolution example in [1, Section 5.2] for the Kylberg texture grass [4], which is available at https://kylberg.org/kylberg-texture-dataset-v-1-0. The high-resolution ground truth and the reference image are different 600×600 sections cropped from the original texture images. The low-resolution training data is generated by cropping 100×100 sections from the texture images, artificially downsampling them with a predefined forward operator f, and adding Gaussian noise. For more details on the downsampling process, see [1, Section 5.2].
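
As a rough illustration of this data-generation step (a sketch only, not the code from the repository: bicubic downsampling stands in for the predefined forward operator f, and the scale factor and noise level are placeholders), a low-resolution training patch could be produced along the following lines:

    import torch
    import torch.nn.functional as F

    def generate_lr_patch(texture, patch_size=100, scale=4, noise_std=0.01):
        # texture: 2D float tensor holding the full grayscale texture image
        h, w = texture.shape
        top = torch.randint(0, h - patch_size + 1, (1,)).item()
        left = torch.randint(0, w - patch_size + 1, (1,)).item()
        hr = texture[top:top + patch_size, left:left + patch_size]

        # placeholder forward operator f: bicubic downsampling by the given scale
        lr = F.interpolate(hr[None, None], scale_factor=1.0 / scale,
                           mode='bicubic', align_corners=False)

        # add Gaussian noise to the low-resolution observation
        lr = lr + noise_std * torch.randn_like(lr)
        return lr.squeeze()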

TEXTURE FLOOR

The script run_floor.py implements the superresolution example in [1, Section 5.2] for the Kylberg texture floor [4], which is available at https://kylberg.org/kylberg-texture-dataset-v-1-0. The high-resolution ground truth and the reference image are different 600×600 sections cropped from the original texture images. The low-resolution training data is generated by cropping 100×100 sections from the texture images, artificially downsampling them with a predefined forward operator f, and adding Gaussian noise. For more details on the downsampling process, see [1, Section 5.2].

3. REFERENCES

[1] F. Altekrueger, J. Hertrich.
WPPNets: Unsupervised CNN Training with Wasserstein Patch Priors for Image Superresolution.
ArXiv preprint arXiv:2201.08157

[2] J. Hertrich, A. Houdard and C. Redenbach.
Wasserstein Patch Prior for Image Superresolution.
ArXiv preprint arXiv:2109.12880

[3] A. Houdard, A. Leclaire, N. Papadakis and J. Rabin.
Wasserstein Generative Models for Patch-based Texture Synthesis.
ArXiv preprint arXiv:2007.03408

[4] G. Kylberg.
The Kylberg texture dataset v. 1.0.
Centre for Image Analysis, Swedish University of Agricultural Sciences and Uppsala University, 2011

[5] C. Tian, Y. Xu, W. Zuo, C.-W. Lin, and D. Zhang.
Asymmetric CNN for image superresolution.
IEEE Transactions on Systems, Man, and Cybernetics: Systems, 2021.
