ICCV 2021 - A New Journey from SDRTV to HDRTV

HDRTVNet [Paper Link]

A New Journey from SDRTV to HDRTV

By Xiangyu Chen*, Zhengwen Zhang*, Jimmy S. Ren, Lynhoo Tian, Yu Qiao and Chao Dong

(* indicates equal contribution)

This paper was accepted to ICCV 2021.

Overview

Simplified SDRTV/HDRTV formation pipeline:

Overview of the method:

Getting Started

  1. Dataset
  2. Configuration
  3. How to test
  4. How to train
  5. Metrics
  6. Visualization

Dataset

We construct a dataset using 4K-resolution videos under the HDR10 standard (10-bit, Rec. 2020, PQ) and their SDR counterparts from YouTube. The dataset consists of a training set with 1235 image pairs and a test set with 117 image pairs. Please refer to the paper for details on how the dataset was processed. The dataset can be downloaded from Baidu Netdisk (access code: 6qvu) or OneDrive (access code: HDRTVNet).

We also provide the original YouTube links of these videos, which can be found in this file. Note that we cannot provide direct download links, since we do not hold the copyright to distribute the videos. Please use this dataset for academic purposes only.
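For a quick sanity check on a downloaded pair, the snippet below reads one SDR/HDR image pair and prints its bit depth. The file names and folder layout are hypothetical, and we assume the HDR ground truth is stored as a 16-bit image holding the 10-bit PQ code values.

import cv2

sdr = cv2.imread('training_set/train_sdr/0001.png', cv2.IMREAD_UNCHANGED)
hdr = cv2.imread('training_set/train_hdr/0001.png', cv2.IMREAD_UNCHANGED)
print(sdr.dtype, sdr.shape)  # expected: uint8 (8-bit SDR)
print(hdr.dtype, hdr.shape)  # expected: uint16 (PQ code values), same resolution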

Configuration

Please refer to the requirements. MATLAB is also used to process the data, but it is not strictly necessary and can be replaced by OpenCV.

How to test

We provide pretrained models for testing, which can be downloaded from Baidu Netdisk (access code: 2me9) or OneDrive (access code: HDRTVNet). Since our method is a cascade of three steps, the results also need to be generated step by step.

  • Before testing, you can optionally generate the downsampled inputs for the condition network in advance. Make sure input_folder and save_LR_folder in ./scripts/generate_mod_LR_bic.m are correct, then run the file in MATLAB. This produces bicubic-downsampled versions of the input SDR images, which are fed to the condition network. This step is not required, but it reproduces the reported performance more precisely. An OpenCV-based alternative is sketched after this list.
  • For the first part of AGCM, make sure the paths of dataroot_LQ, dataroot_cond, dataroot_GT and pretrain_model_G in ./codes/options/test/test_AGCM.yml are correct, then run
cd codes
python test.py -opt options/test/test_AGCM.yml
  • Note that if the first step is not performed, the dataroot_cond line should be commented out. The test results will be saved to ./results/Adaptive_Global_Color_Mapping.
  • For the second part of LE, make sure dataroot_LQ is modified into the path of results obtained by AGCM, then run
python test.py -opt options/test/test_LE.yml
  • Note that the results generated by LE achieve the best quantitative performance. The HG part is included for the completeness of the solution and to further improve visual quality. To test the last part, HG, make sure dataroot_LQ is modified to the path of the results obtained by LE, then run
python test.py -opt options/test/test_HG.yml
  • Note that the results of each step are 16-bit images that can be converted into HDR10 video.
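If MATLAB is unavailable, a minimal OpenCV-based sketch of the bicubic downsampling is given below. Note that OpenCV's INTER_CUBIC does not apply antialiasing when downsampling, so the outputs differ slightly from MATLAB's imresize (hence the "more precisely" note above). The folder names and the 4x factor are assumptions for illustration, not the script's actual defaults.

import os
import glob
import cv2

input_folder = './test_sdr'        # assumed path to input SDR images
save_lr_folder = './test_sdr_lr'   # assumed output path
scale = 4                          # assumed downsampling factor

os.makedirs(save_lr_folder, exist_ok=True)
for path in sorted(glob.glob(os.path.join(input_folder, '*.png'))):
    img = cv2.imread(path, cv2.IMREAD_UNCHANGED)
    h, w = img.shape[:2]
    lr = cv2.resize(img, (w // scale, h // scale), interpolation=cv2.INTER_CUBIC)
    cv2.imwrite(os.path.join(save_lr_folder, os.path.basename(path)), lr)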

How to train

  • Prepare the data. Generate sub-images with a specific patch size using ./scripts/extract_subimgs_single.py, and generate the downsampled inputs for the condition network (using ./scripts/generate_mod_LR_bic.m or any other method). A rough patch-extraction sketch is given after this list.
  • For AGCM, make sure that the paths and settings in ./options/train/train_AGCM.yml are correct, then run
cd codes
python train.py -opt options/train/train_AGCM.yml
  • For LE, the inputs are generated by the trained AGCM model. The original data should first be run through the AGCM step (refer to the testing instructions above) and then processed by extracting sub-images. After that, modify the corresponding settings in ./options/train/train_LE.yml and run
python train.py -opt options/train/train_LE.yml
  • For HG, the inputs are obtained from the previous part, LE, so the training data need to be processed with operations similar to those of the previous two parts. Once the data is prepared, we recommend pretraining the generator first by running
python train.py -opt options/train/train_HG_Generator.yml
  • After that, choose a pretrained model, modify the path of the pretrained model in ./options/train/train_HG_GAN.yml, and run
python train.py -opt options/train/train_HG_GAN.yml
  • All models and training states are stored in ./experiments.
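For reference, here is a rough sketch of the sub-image extraction step. The patch size, stride, and paths are illustrative assumptions rather than the defaults of ./scripts/extract_subimgs_single.py.

import os
import glob
import cv2

src_folder = './train_full'     # assumed path to full-resolution images
dst_folder = './train_sub'      # assumed output path for sub-images
patch_size, stride = 480, 240   # assumed patch size and stride

os.makedirs(dst_folder, exist_ok=True)
for path in sorted(glob.glob(os.path.join(src_folder, '*.png'))):
    img = cv2.imread(path, cv2.IMREAD_UNCHANGED)  # IMREAD_UNCHANGED keeps 16-bit depth
    h, w = img.shape[:2]
    name = os.path.splitext(os.path.basename(path))[0]
    idx = 0
    for y in range(0, h - patch_size + 1, stride):
        for x in range(0, w - patch_size + 1, stride):
            idx += 1
            patch = img[y:y + patch_size, x:x + patch_size]
            cv2.imwrite(os.path.join(dst_folder, '%s_s%03d.png' % (name, idx)), patch)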

Metrics

Five metrics are used to evaluate the quantitative performance of different methods: PSNR, SSIM, SR_SIM, Delta E ITP (ITU-R BT.2124) and HDR-VDP3. Since the latter three metrics are not very common in recent papers, we provide reference code in ./metrics for convenience.
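As one example, PSNR on the 16-bit outputs must use a peak value of 65535 rather than the 255 used for 8-bit images. A minimal sketch, with hypothetical file names, follows.

import cv2
import numpy as np

def psnr_16bit(pred_path, gt_path):
    # Read both images unchanged so the 16-bit depth is preserved.
    pred = cv2.imread(pred_path, cv2.IMREAD_UNCHANGED).astype(np.float64)
    gt = cv2.imread(gt_path, cv2.IMREAD_UNCHANGED).astype(np.float64)
    mse = np.mean((pred - gt) ** 2)
    if mse == 0:
        return float('inf')
    return 10 * np.log10(65535.0 ** 2 / mse)

print(psnr_16bit('result.png', 'gt.png'))  # hypothetical file names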

Visualization

Since HDR10 is an HDR standard that uses the PQ transfer function for video, the correct way to visualize the results is to encode the image results into a video and display it on an HDR monitor or TV that supports HDR. The HDR images in our dataset are generated by directly extracting frames from the original HDR10 videos, so these images, which consist of PQ code values, look relatively dark compared to their true appearance. We provide reference commands for extracting frames and synthesizing videos in ./scripts. Please use MediaInfo to check the format and encoding information of synthesized videos before visualization. If circumstances permit, we strongly recommend viewing the HDR results and the original HDR sources this way on an HDR display.

If an HDR display is not available, some media players with HDR rendering, such as PotPlayer, can play the HDR video and show a relatively realistic look. Note that this is only an approximate alternative, and it still cannot fully reproduce the appearance of HDR content on an HDR monitor.
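To see why the PQ frames look dark in an ordinary viewer: the stored code values are PQ (SMPTE ST 2084)-encoded rather than gamma-encoded. Below is a minimal sketch of the PQ EOTF, which maps normalized code values to absolute luminance; the constants are the standard ST 2084 values.

import numpy as np

# SMPTE ST 2084 (PQ) constants
M1 = 2610.0 / 16384.0
M2 = 2523.0 / 4096.0 * 128.0
C1 = 3424.0 / 4096.0
C2 = 2413.0 / 4096.0 * 32.0
C3 = 2392.0 / 4096.0 * 32.0

def pq_eotf(e):
    # Map normalized PQ code values e in [0, 1] to luminance in cd/m^2 (nits).
    e = np.asarray(e, dtype=np.float64)
    p = np.power(e, 1.0 / M2)
    return 10000.0 * np.power(np.maximum(p - C1, 0.0) / (C2 - C3 * p), 1.0 / M1)

# A PQ code value of ~0.58 already corresponds to the ~203-nit reference white,
# which is why PQ frames appear dark when interpreted as gamma-encoded pixels.
print(pq_eotf(0.58))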

Citation

If our work is helpful to you, please cite our paper:

@inproceedings{chen2021new,
  title={A New Journey from SDRTV to HDRTV}, 
  author={Chen, Xiangyu and Zhang, Zhengwen and Ren, Jimmy S. and Tian, Lynhoo and Qiao, Yu and Dong, Chao},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
  year={2021}
}