BabelCalib: A Universal Approach to Calibrating Central Cameras

Paper Datasets Conference Poster Youtube

This repository contains the MATLAB implementation of the BabelCalib calibration framework.

Method overview and results. (left) The BabelCalib pipeline: the camera model proposal step ensures a good initialization. (right) An example result showing the residuals of reprojected corners for test images.


Projection of the calibration target from the estimated calibration. Detected corners are shown as red crosses; the target projected using the initial calibration is shown as blue squares, and using the final calibration as cyan circles.

Description

BabelCalib is a calibration framework that can estimate camera models for all types of central projection cameras. Calibration is robust and fully automatic. BabelCalib provides models for pinhole cameras with additive distortion as well as omni-directional cameras and catadioptric rigs. The supported camera models are listed under the solvers directory. BabelCalib supports calibration targets made of a collection of calibration boards, i.e., multiple planar targets. The method is agnostic to the pattern type on the calibration boards. It is robust to inaccurately localized corners, outlying detections and occluded targets.

Table of Contents

  • Installation
  • Calibration
  • Evaluation
  • Type Definitions
  • Examples and wrappers
  • Citation
  • License

Installation

Clone the repository together with its submodules; the required Visual Geometry Toolkit library is included as a submodule:

git clone --recurse-submodules https://github.com/ylochman/babelcalib

If you already cloned the project without submodules, you can run

git submodule update --init --recursive 
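
Before calling the functions below, the toolbox and its submodule need to be on the MATLAB path. A generic way to do this, assuming the repository was cloned into ./babelcalib (this is a general MATLAB step, not a project-specific script):

% add BabelCalib and the bundled Visual Geometry Toolkit to the MATLAB path
addpath(genpath('babelcalib'));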

Calibration

Calibration is performed by the function calibrate.m. The user provides the 2D<->3D correspondences between the corner detections in the captured images and the calibration board fiducials, together with the coordinates of the fiducials and the absolute poses of the calibration boards. Any calibration board of the target may be partially or fully occluded in a calibration image. The estimated camera model is returned along with diagnostics about the calibration.

function [model, res, corners, boards] = calibrate(corners, boards, imgsize, varargin)

Parameters:

  • corners : type corners
  • boards : type boards
  • imgsize : 1x2 array specifying the height and width of the images; all images in a capture are assumed to have the same dimensions.
  • varargin : optional arguments

Returns:

  • model : type model
  • res : type res
  • corners : type corners
  • boards : type boards
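
A minimal calibration call might then look like the sketch below; corners and boards are assumed to be already populated as described under Type Definitions, and the image size is a placeholder:

% calibrate with the default camera model ('kb')
imgsize = [1080 1920];                         % [height width] of the captured images
[model, res] = calibrate(corners, boards, imgsize);
fprintf('RMS reprojection error: %.3f px\n', res.rms);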

Evaluation

BabelCalib adopts the train-test set methodology for fitting and evaluation. The training set contains the images used for calibration, and the test set contains held-out images for evaluation. Evaluating a model on the test-set images demonstrates how well the calibration generalizes to unseen imagery. During testing, the intrinsics are kept fixed and only the camera poses are regressed. The RMS reprojection error is used to assess calibration quality. The poses are estimated by get_poses.m:

function [model, res, corners, boards] = get_poses(intrinsics, corners, boards, imgsize, varargin)

Parameters:

  • intrinsics : type model
  • corners : type corners
  • boards : type boards
  • imgsize : 1x2 array specifying the height and width of the images; all images are assumed to have the same dimensions
  • varargin : optional arguments

Returns:

  • model : type model
  • res : type res
  • corners : type corners
  • boards : type boards
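
For example, evaluation on held-out images might look like the following sketch, where test_corners and test_boards are hypothetical structures built from the test images and model comes from a previous call to calibrate:

% keep the intrinsics fixed and regress only the poses of the test images
[test_model, test_res] = get_poses(model, test_corners, test_boards, imgsize);
fprintf('test RMS reprojection error: %.3f px\n', test_res.rms);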

Type Definitions

corners : 1xN struct array

Contains the set of 2D<->3D correspondences of the calibration board fiducials to the detected corners in each image. Here, we let N be the number of images; Kn be the number of detected corners in the n-th image, where (n=1,...,N); and B be the number of planar calibration boards.

  • x : 2xKn array - 2D coordinates of the detected corners
  • cspond : 2xKn array - correspondences, where each column is a correspondence: the first row contains indices into the detected corners and the second row contains indices into the calibration board fiducials

boards : 1xB struct array

Contains the set of absolute poses for each of the B calibration boards of the target, where (b=1,...,B) indexes the calibration boards. Also specifies the coordinates of the fiducials on each of the calibration boards.

  • Rt : 3x4 array - absolute pose of the board, encoded as a 3x4 pose matrix
  • X : 2xKb array - 2D coordinates of the fiducials on board b, specified in the 2D coordinate system attached to that board (Kb is the number of fiducials on board b)
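
For concreteness, a sketch of how these structures might be populated for a single image observing a single board; the field names follow the definitions above, and all coordinate values are placeholders:

% one calibration board with a 2x2 grid of fiducials, 10 mm spacing
boards(1).Rt = [eye(3) zeros(3,1)];            % absolute board pose (3x4)
boards(1).X  = 10 * [0 1 0 1; 0 0 1 1];        % 2xKb fiducial coordinates

% detections in image 1: two corners matched to fiducials 1 and 4
corners(1).x      = [12.3 250.1; 34.7 198.6];  % 2xKn detected corner coordinates
corners(1).cspond = [1 2; 1 4];                % row 1: index into x, row 2: index of the board fiducial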

model : struct

Contains the intrinsics and extrinsics of the regressed camera model. The number of parameters of the back-projection or projection model, denoted C, depends on the chosen camera model and model complexity.

  • proj_model : str - name of the target projection model
  • proj_params : 1xC array - parameters of the projection/back-projection function
  • K : 3x3 array - camera calibration matrix (relating to A in the paper: K = inv(A))
  • Rt : 3x4xN array - camera poses stacked along the array depth
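
For instance, the pose of an individual image can be sliced out of the stacked Rt array; a small sketch following the documented 3x4xN layout:

% pose of the n-th calibration image
n = 1;                           % placeholder image index
R = model.Rt(1:3, 1:3, n);       % 3x3 rotation
t = model.Rt(1:3, 4, n);         % 3x1 translation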

res : struct

Contains the information about the residuals, loss and initialization (minimal solution). Here, we let K be the total number of corners in all the images.

  • loss : double - loss value
  • ir : double - inlier ratio
  • reprojerrs : 1xK array - reprojection errors
  • rms : double - root mean square reprojection error
  • wrms : double - root mean square weighted reprojection error (Huber weights)
  • info : type info - additional information about the residuals and the initialization

info : struct

Contains additional information about the residuals, loss and initialization (minimal solution).

  • dx : 2xK array - reprojection difference vectors: dx = x - x_hat
  • w : 1xK array - Huber weights on the norms of dx
  • residual : 2xK array - residuals: residual = w .* dx
  • cs : 1xK array (boolean) - consensus set indicators (1 if inlier, 0 otherwise)
  • min_model : type model - model corresponding to the minimal solution
  • min_res : type res - residual info corresponding to the minimal solution
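
A brief sketch of how these diagnostics might be inspected after calibration; the variable names follow the earlier examples:

% summarize calibration quality and inspect the consensus set
fprintf('inlier ratio: %.2f, RMS: %.3f px, weighted RMS: %.3f px\n', res.ir, res.rms, res.wrms);
inlier_errs = res.reprojerrs(res.info.cs > 0);   % reprojection errors of the inliers
fprintf('max inlier reprojection error: %.3f px\n', max(inlier_errs));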

cfg

cfg contains the optional configurations. Default values for the optional parameters are loaded from parse_cfg.m; they can be overridden through the varargin parameter, and values passed via varargin take precedence. The varargin format is 'param_1', value_1, 'param_2', value_2, .... The parameter descriptions are grouped by the component of BabelCalib they configure; a usage sketch follows the lists below.

Solver configurations:

  • final_model - the selected camera model (default: 'kb')
  • final_complexity - the degree of the polynomial if the final model is polynomial; ignored otherwise (default: 4)

Sampler configurations:

  • min_trial_count - minimum number of iterations (default: 20)
  • max_trial_count - maximum number of iterations (default: 50)
  • max_num_retries - maximum number of sampling tries in the case of a solver failure (default: 50)
  • confidence - confidence level used by the sampler's stopping criterion (default: 0.995)
  • sample_size - the number of 3D<->2D correspondences that are sampled for each RANSAC iteration (default: 14)

RANSAC configurations:

  • display - toggles verbose output of intermediate steps (default: true)
  • display_freq - frequency of output during the robust sampling iterations (default: 1)
  • irT - minimum inlier ratio to perform refinement (default: 0)

Refinement configurations:

  • reprojT - reprojection error threshold (default: 1.5)
  • max_iter - maximum number of refinement iterations (default: 50)
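
For example, options are passed as name-value pairs appended to the call. A sketch assuming the default 'kb' model and the parameter names listed above:

[model, res] = calibrate(corners, boards, imgsize, ...
    'final_model', 'kb', ...        % Kannala-Brandt (the default)
    'final_complexity', 3, ...      % degree of the polynomial model
    'reprojT', 1.0, ...             % tighter reprojection threshold (px)
    'display', false);              % suppress verbose output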

Examples and wrappers

2D<->3D correspondences

BabelCalib provides a convenience wrapper, calib_run_opt1.m, which runs calibration (calibrate.m) on a training set and evaluates the result (get_poses.m) on a test set.

Deltille

The Deltille detector is a robust deltille and checkerboard detector. It comes with a detector library, example detector code, and MATLAB bindings. BabelCalib provides functions for calibration and evaluation using the Deltille software's outputs. Calibration from Deltille detections requires a format conversion, which is performed by import_ODT.m. A complete example of using calibrate and get_poses with import_ODT is provided in calib_run_opt2.m.

Citation

If you find this work useful in your research, please consider citing:

@InProceedings{Lochman-ICCV21,
    title     = {BabelCalib: A Universal Approach to Calibrating Central Cameras},
    author    = {Lochman, Yaroslava and Liepieshov, Kostiantyn and Chen, Jianhui and Perdoch, Michal and Zach, Christopher and Pritts, James},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    year      = {2021},
}

License

The software is licensed under the MIT license. Please see LICENSE for details.
