Official implementation of "Trash or Treasure? An Interactive Dual-Stream Strategy for Single Image Reflection Separation".

Overview

Trash or Treasure? An Interactive Dual-Stream Strategy for Single Image Reflection Separation (NeurIPS 2021)

by Qiming Hu, Xiaojie Guo.

Dependencies

  • Python3
  • PyTorch>=1.0
  • OpenCV-Python, TensorboardX, Visdom
  • NVIDIA GPU+CUDA
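
The packages above can typically be installed with pip, for example (versions are not pinned in this README, so match the PyTorch build to your CUDA version):

pip install torch opencv-python tensorboardX visdom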

Network Architecture

[Figure: network architecture]

🚀 1. Single Image Reflection Separation

Data Preparation

Training dataset

  • 7,643 images from the Pascal VOC dataset, center-cropped to 224 × 224 patches and used to synthesize training pairs (see the sketch after this list).
  • 90 real-world training pairs provided by Zhang et al.
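
The exact synthesis recipe lives in the training code rather than this README; as a rough illustration of the idea, the sketch below blends a center-cropped transmission image with a blurred reflection image using OpenCV and NumPy. The function name, blending weight, and blur kernel size are illustrative assumptions, not values taken from the repository.

import cv2
import numpy as np

def synthesize_pair(transmission_path, reflection_path, size=224, alpha=0.7, ksize=11):
    # Illustrative sketch only: crop the largest centered square, then resize to size x size.
    def center_crop(img):
        h, w = img.shape[:2]
        s = min(h, w)
        top, left = (h - s) // 2, (w - s) // 2
        return cv2.resize(img[top:top + s, left:left + s], (size, size))

    t = center_crop(cv2.imread(transmission_path).astype(np.float32) / 255.0)
    r = center_crop(cv2.imread(reflection_path).astype(np.float32) / 255.0)

    # Blur the reflection layer to mimic out-of-focus reflections, then blend the two layers.
    r_blur = cv2.GaussianBlur(r, (ksize, ksize), 0)
    blended = np.clip(alpha * t + (1 - alpha) * r_blur, 0.0, 1.0)
    return blended, t, r_blur  # mixture input, transmission target, reflection target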

Testing dataset

  • 45 real-world testing images from the CEILNet dataset.
  • 20 real testing pairs provided by Zhang et al.
  • 454 real testing pairs from the SIR^2 dataset, containing three subsets (i.e., Objects (200), Postcard (199), Wild (55)).

Usage

Training

  • For stage 1: python train_sirs.py --inet ytmt_ucs --model ytmt_model_sirs --name ytmt_ucs_sirs --hyper --if_align
  • For stage 2: python train_twostage_sirs.py --inet ytmt_ucs --model twostage_ytmt_model --name ytmt_uct_sirs --hyper --if_align --resume --resume_epoch xx --checkpoints_dir xxx

Testing

python test_sirs.py --inet ytmt_ucs --model twostage_ytmt_model --name ytmt_uct_sirs_test --hyper --if_align --resume --icnn_path ./checkpoints/ytmt_uct_sirs/twostage_unet_68_077_00595364.pt

Trained weights

Google Drive

Visual comparison on real20 and SIR^2

[Figure: visual comparison on real20 and SIR^2]

Visual comparison on real45

[Figure: visual comparison on real45]

🚀 2. Single Image Denoising

Data Preparation

Training datasets

400 images from the Berkeley segmentation dataset, following DnCNN.
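
As context for the blind-noise setting (--mode B) used in the training command below, DnCNN-style blind training adds white Gaussian noise with a randomly sampled level to each clean patch. The following minimal PyTorch sketch shows the idea; the [0, 55] range follows the common DnCNN-B convention and is an assumption, not something stated in this README.

import torch

def add_blind_gaussian_noise(clean, sigma_range=(0.0, 55.0)):
    # clean: float tensor of shape (N, C, H, W) with values in [0, 1].
    # Sample one noise level per image, as in blind (DnCNN-B style) training.
    n = clean.size(0)
    sigma = torch.empty(n, 1, 1, 1, device=clean.device).uniform_(*sigma_range) / 255.0
    return clean + sigma * torch.randn_like(clean), sigma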

Testing datasets

BSD68 dataset and Set12.

Usage

Training

python train_denoising.py --inet ytmt_pas --name ytmt_pas_denoising --preprocess True --num_of_layers 9 --mode B

Testing

python test_denoising.py --inet ytmt_pas --name ytmt_pas_denoising_blindtest_25 --test_noiseL 25 --num_of_layers 9 --test_data Set68 --icnn_path ./checkpoints/ytmt_pas_denoising_49_157500.pt
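
The command above evaluates at a fixed noise level (--test_noiseL 25) on Set68/Set12-style data, where results are conventionally reported as PSNR. For reference, a minimal PSNR helper in PyTorch could look like the sketch below; the function and argument names are illustrative and not taken from this repository.

import torch

def psnr(pred, target, max_val=1.0):
    # Peak signal-to-noise ratio between a denoised output and the clean reference,
    # both expected as float tensors with values in [0, max_val].
    mse = torch.mean((pred - target) ** 2)
    return 10.0 * torch.log10(max_val ** 2 / mse)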

Trained weights

Google Drive

Visual comparison on a sample from BSD68

[Figure: visual comparison on a sample from BSD68]

🚀 3. Single Image Demoireing

Data Preparation

Training dataset

AIM 2019 Demoireing Challenge

Testing dataset

100 moiré and clean image pairs from the AIM 2019 Demoireing Challenge.

Usage

Training

python train_demoire.py --inet ytmt_ucs --model ytmt_model_demoire --name ytmt_uas_demoire --hyper --if_align

Testing

python test_demoire.py --inet ytmt_ucs --model ytmt_model_demoire --name ytmt_uas_demoire_test --hyper --if_align --resume --icnn_path ./checkpoints/ytmt_ucs_demoire/ytmt_ucs_opt_086_00860000.pt

Trained weights

Google Drive

Visual comparison on the validation set of LCDMoire

[Figure: visual comparison on the LCDMoire validation set]

Comments
  • Datasets

    Hi,

    I have been trying to experiment with the model, but I'm having trouble finding the correct datasets for testing. The SIR^2 dataset in the provided link doesn't have the images set up with the naming conventions used in the script. Could you please direct me to the correct datasets for testing and training? Is there a separate repository that you used?

    Thanks so much,

    David

    opened by davidgaddie 3
  • About Training Details

    Hello, thank you for sharing your wonderful work. I have some questions about the training details. The paper says training runs for 120 epochs, but the epoch count is set to 60 in YTMT-Strategy/options/net_options/train_options.py. Moreover, the best model in the paper is YTMT-UCT, which needs two-stage training. Could you provide the training settings for YTMT-UCT (epochs, batch size, ...)? Looking forward to your reply!

    opened by DUT-CSJ 2
  • CUDA vram allocation issue

    Hi,

    I've been trying to run the reflection test code, but I get this error: RuntimeError: CUDA out of memory. Tried to allocate 15.66 GiB (GPU 0; 22.20 GiB total capacity; 16.09 GiB already allocated; 2.68 GiB free; 17.55 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

    I'm running on an A10G GPU on AWS. I suspect the dataset may be incorrect, as each image in the dataset I have is around 800 MB. If that's the case, could I please be directed to the correct repository for the real20_420 images?

    Thanks so much,

    David

    opened by davidgaddie 1
  • test demoire error

    Thanks for your great work, but I get an error when I run: python test_demoire.py --inet ytmt_ucs --model ytmt_model_demoire --name ytmt_uas_demoire_test --hyper --if_align --resume --icnn_path checkpoints/ytmt_ucs_demoire/ytmt_ucs_demoire_opt_086_00860000.pt

    -------------- End ----------------
    [i] initialization method [edsr]
    Traceback (most recent call last):
      File "test_demoire.py", line 28, in <module>
        engine = Engine(opt)
      File "/nfs_data/code/YTMT-Strategy-main/engine.py", line 19, in __init__
        self.__setup()
      File "/nfs_data/code/YTMT-Strategy-main/engine.py", line 29, in __setup
        self.model.initialize(opt)
      File "/nfs_data/code/YTMT-Strategy-main/models/ytmt_model_demoire.py", line 242, in initialize
        self.load(self, opt.resume_epoch)
      File "/nfs_data/code/YTMT-Strategy-main/models/ytmt_model_demoire.py", line 413, in load
        model.net_i.load_state_dict(state_dict['icnn'])
      File "/opt/conda/envs/torch/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1223, in load_state_dict
        raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
    RuntimeError: Error(s) in loading state_dict for YTMT_US:
        Missing key(s) in state_dict: "inc.ytmt_head.fusion_l.weight", "inc.ytmt_head.fusion_l.bias", "inc.ytmt_head.fusion_r.weight", "inc.ytmt_head.fusion_r.bias", "down1.model.ytmt_head.fusion_l.weight", "down1.model.ytmt_head.fusion_l.bias", "down1.model.ytmt_head.fusion_r.weight", "down1.model.ytmt_head.fusion_r.bias", "down2.model.ytmt_head.fusion_l.weight", "down2.model.ytmt_head.fusion_l.bias", "down2.model.ytmt_head.fusion_r.weight", "down2.model.ytmt_head.fusion_r.bias", "down3.model.ytmt_head.fusion_l.weight", "down3.model.ytmt_head.fusion_l.bias", "down3.model.ytmt_head.fusion_r.weight", "down3.model.ytmt_head.fusion_r.bias", "down4.model.ytmt_head.fusion_l.weight", "down4.model.ytmt_head.fusion_l.bias", "down4.model.ytmt_head.fusion_r.weight", "down4.model.ytmt_head.fusion_r.bias", "up1.model.ytmt_head.fusion_l.weight", "up1.model.ytmt_head.fusion_l.bias", "up1.model.ytmt_head.fusion_r.weight", "up1.model.ytmt_head.fusion_r.bias", "up2.model.ytmt_head.fusion_l.weight", "up2.model.ytmt_head.fusion_l.bias", "up2.model.ytmt_head.fusion_r.weight", "up2.model.ytmt_head.fusion_r.bias", "up3.model.ytmt_head.fusion_l.weight", "up3.model.ytmt_head.fusion_l.bias", "up3.model.ytmt_head.fusion_r.weight", "up3.model.ytmt_head.fusion_r.bias", "up4.model.ytmt_head.fusion_l.weight", "up4.model.ytmt_head.fusion_l.bias", "up4.model.ytmt_head.fusion_r.weight", "up4.model.ytmt_head.fusion_r.bias".

    opened by zdyshine 1
Owner
Qiming Hu