Reference code for the paper "Cross-Camera Convolutional Color Constancy" (ICCV 2021)

Overview

Cross-Camera Convolutional Color Constancy, ICCV 2021 (Oral)

Mahmoud Afifi¹·², Jonathan T. Barron², Chloe LeGendre², Yun-Ta Tsai², and Francois Bleibel²

¹York University   ²Google Research

Paper | Poster | PPT | Video

[C5 teaser figure]

Reference code for the paper Cross-Camera Convolutional Color Constancy. Mahmoud Afifi, Jonathan T. Barron, Chloe LeGendre, Yun-Ta Tsai, and Francois Bleibel. In ICCV, 2021. If you use this code, please cite our paper:

@InProceedings{C5,
  title={Cross-Camera Convolutional Color Constancy},
  author={Afifi, Mahmoud and Barron, Jonathan T and LeGendre, Chloe and Tsai, Yun-Ta and Bleibel, Francois},
  booktitle={The IEEE International Conference on Computer Vision (ICCV)},
  year={2021}
}

[C5 figure]

Code

Prerequisites

  • PyTorch
  • opencv-python
  • tqdm

Training

To train C5, the training/validation data should be organized as follows:

- train_folder/
       | image1_sensorname_camera1.png
       | image1_sensorname_camera1_metadata.json
       | image2_sensorname_camera1.png
       | image2_sensorname_camera1_metadata.json
       ...
       | image1_sensorname_camera2.png
       | image1_sensorname_camera2_metadata.json
       ...

In src/ops.py, the function add_camera_name(dataset_dir) can be used to rename image filenames and the corresponding ground-truth JSON files. Each JSON file should include a key named either illuminant_color_raw or gt_ill that holds the ground-truth illuminant color of the corresponding image.
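As a rough illustration, the ground-truth illuminant could be read from a metadata file like this (the file name below is hypothetical; the two key names are those described above):

import json

# Hypothetical metadata file following the naming scheme above.
with open('image1_sensorname_camera1_metadata.json') as f:
    metadata = json.load(f)

# The ground-truth illuminant may be stored under either key.
ill = metadata.get('illuminant_color_raw', metadata.get('gt_ill'))
print('ground-truth illuminant (RGB):', ill)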

The training code is given in train.py. The following arguments set the model configuration and the training data:

  • --data-num: the number of images used for each inference (additional images + input query image); referred to as m in the main paper.
  • --input-size: number of histogram bins.
  • --learn-G: to use a G multiplier as explained in the paper.
  • --training-dir-in: training image directory.
  • --validation-dir-in: validation image directory; when this variable is None (default), the validation set will be taken from the training data based on the --validation-ratio.
  • --validation-ratio: when --validation-dir-in is None, this argument determines the validation set ratio of the image set in --training-dir-in directory.
  • --augmentation-dir: directory (or directories) of augmentation data (optional).
  • --model-name: name of the trained model.

The following parameters control the training settings and hyperparameters; an example training command is shown after the list:

  • --epochs: number of epochs.
  • --batch-size: batch size.
  • --load-hist: to load histograms if pre-computed (recommended).
  • --optimizer: optimization algorithm for stochastic gradient descent; options are Adam or SGD.
  • --learning-rate: learning rate.
  • --l2reg: L2 regularization factor.
  • --load: to load a C5 model from a .pth file; default is False.
  • --model-location: when --load is True, this variable should point to the full path of the .pth model file.
  • --validation-frequency: validation frequency (in epochs).
  • --cross-validation: to use three-fold cross-validation. When this variable is True, --validation-dir-in and --validation-ratio are ignored and 3-fold cross-validation is applied to the data in --training-dir-in.
  • --gpu: GPU device ID.
  • --smoothness-factor-*: smoothness loss factor of the following model components: F (conv filter), B (bias), G (multiplier layer). For example, --smoothness-factor-F can be used to set the smoothness loss for the conv filter.
  • --increasing-batch-size: to increase the batch size during training.
  • --grad-clip-value: gradient clipping value; if it's set to 0 (default), no clipping is applied.
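For example, a model matching the pre-trained C5_m_7_h_64 configuration used in the testing section could be trained with a command like the following (the directory path and the epoch/batch-size/learning-rate values are illustrative assumptions, not values prescribed by the paper):

python train.py --training-dir-in ./train_folder --data-num 7 --input-size 64 --epochs 60 --batch-size 16 --learning-rate 5e-4 --model-name C5_m_7_h_64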

Testing

To test a pre-trained C5 model, the testing data should be organized as follows:

- test_folder/
       | image1_sensorname_camera1.png
       | image1_sensorname_camera1_metadata.json
       | image2_sensorname_camera1.png
       | image2_sensorname_camera1_metadata.json
       ...
       | image1_sensorname_camera2.png
       | image1_sensorname_camera2_metadata.json
       ...

The testing code is given in test.py. The following arguments set the model configuration and the testing data:

  • --model-name: name of the trained model.
  • --data-num: the number of images used for each inference (additional images + input query image); referred to as m in the main paper.
  • --input-size: number of histogram bins.
  • --g-multiplier: to use a G multiplier as explained in the paper.
  • --testing-dir-in: testing image directory.
  • --batch-size: batch size.
  • --load-hist: to load histograms if pre-computed (recommended).
  • --multiple_test: to apply multiple tests (ten, as mentioned in the paper) and save their results.
  • --white-balance: to save white-balanced testing images.
  • --cross-validation: to use three-fold cross-validation. When set to True, three pre-trained models are expected, each saved with the fold number as a postfix. The testing image filenames should be listed in .npy files located in the folds directory, named after the dataset, which should match the folder name given in --testing-dir-in; a sketch of generating such a file follows this list.
  • --gpu: GPU device ID.
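As a rough sketch, a fold file could be generated as follows (the fold file name and array layout here are assumptions, not taken from the repository):

import numpy as np

# Hypothetical fold file for a dataset named 'camera1'; the exact file
# naming and array layout expected by test.py may differ.
fold_files = ['image1_sensorname_camera1.png', 'image2_sensorname_camera1.png']
np.save('folds/camera1_fold_1.npy', fold_files)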

In the images directory, there are a few examples captured by a mobile Sony IMX135 sensor from the INTEL-TAU dataset. To white balance these raw images using a C5 model trained on DSLR cameras from the NUS and Gehler-Shi datasets, as shown in the figure below, use the following command:

python test.py --testing-dir-in ./images --white-balance True --model-name C5_m_7_h_64

[Example white-balanced results]

To test with the gain multiplier, use the following command:

python test.py --testing-dir-in ./images --white-balance True --g-multiplier True --model-name C5_m_7_h_64_w_G

Note that, at test time, C5 does not require any metadata. The testing code uses the JSON files only to load the ground-truth illuminants for comparison with the estimated values.
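For reference, once an illuminant is estimated, white balancing amounts to dividing each channel of the raw image by the corresponding illuminant component. A generic numpy sketch (not the repository's exact implementation), assuming an HxWx3 float image in [0, 1]:

import numpy as np

def white_balance(raw_img, illuminant):
    # Normalize the illuminant so the green channel gain is 1,
    # then divide each RGB channel by its illuminant component.
    ill = np.asarray(illuminant, dtype=np.float32)
    ill = ill / ill[1]
    return np.clip(raw_img / ill, 0.0, 1.0)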

Data augmentation

The raw-to-raw augmentation functions are provided in src/aug_ops.py. Call the set_sampling_params function to set the sampling parameters (e.g., excluding certain cameras/datasets from the source set, determining the number of augmented images, etc.). Then call the map_raw_images function to generate a new augmentation set with those parameters; a usage sketch follows the argument list below. The function map_raw_images takes four arguments:

  • xyz_img_dir: directory of XYZ images; you can download the CIE XYZ images from here. All images were transformed to the CIE XYZ space after applying the black-level normalization and masking out the calibration object (i.e., the color rendition chart or SpyderCUBE).
  • target_cameras: a list of one or more of the following camera models: Canon EOS 550D, Canon EOS 5D, Canon EOS-1DS, Canon EOS-1Ds Mark III, Fujifilm X-M1, Nikon D40, Nikon D5200, Olympus E-PL6, Panasonic DMC-GX1, Samsung NX2000, Sony SLT-A57, or All.
  • output_dir: output directory to save the augmented images and their metadata files.
  • params: sampling parameters set by the set_sampling_params function.
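A minimal usage sketch (the directory paths and camera list are placeholders; the function names and the four arguments of map_raw_images come from the description above, while the defaults of set_sampling_params are assumed):

from src.aug_ops import set_sampling_params, map_raw_images

# Use the default sampling parameters; in practice, pass arguments to
# exclude cameras/datasets or set the number of augmented images.
params = set_sampling_params()

map_raw_images(xyz_img_dir='./xyz_images',
               target_cameras=['Canon EOS 550D', 'Nikon D5200'],
               output_dir='./augmented_set',
               params=params)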