
Differentiable Factor Graph Optimization for Learning Smoothers


Figure: the overall training pipeline proposed by our IROS paper. Five stages, arranged left to right: (1) system models, (2) factor graphs for state estimation, (3) MAP inference, (4) state estimates, and (5) errors with respect to ground truth. Gradients are backpropagated from right to left, from the final error stage back to the parameters of the system models.

Overview

Code release for our IROS 2021 conference paper:

Brent Yi1, Michelle A. Lee1, Alina Kloss2, Roberto Martín-Martín1, and Jeannette Bohg1. Differentiable Factor Graph Optimization for Learning Smoothers. Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), October 2021.

1Stanford University, {brentyi,michellelee,robertom,bohg}@cs.stanford.edu
2Max Planck Institute for Intelligent Systems, [email protected]


This repository contains models, training scripts, and experimental results. It can be used to reproduce our results or as a reference for implementation details.

Significant chunks of the code written for this paper have been factored out of this repository and released as standalone libraries, which may be useful for building on our work. You can find each of them linked here:

  • jaxfg is our core factor graph optimization library.
  • jaxlie is our Lie theory library for working with rigid body transformations.
  • jax_dataclasses is our library for building JAX pytrees as dataclasses. It's similar to flax.struct, but has workflow improvements for static analysis and nested structures. (A minimal usage sketch follows this list.)
  • jax-ekf contains our EKF implementation.
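
As a rough sketch of how these libraries compose — the dataclass and field names here are ours for illustration, not from the paper's code — jax_dataclasses registers a dataclass as a JAX pytree, and jaxlie types like SE2 can be stored directly as fields:

    import jax
    import jax.numpy as jnp
    import jax_dataclasses as jdc
    import jaxlie

    @jdc.pytree_dataclass
    class NoisyOdometry:
        # Both fields become pytree leaves, so instances pass through
        # jax.jit / jax.grad / jax.vmap like any other pytree.
        transform: jaxlie.SE2
        noise_scale: jnp.ndarray

    odom = NoisyOdometry(
        transform=jaxlie.SE2.from_xy_theta(1.0, 0.0, 0.5),
        noise_scale=jnp.ones(3),
    )

    # jaxlie overloads `@` for applying a transform to a point.
    point = odom.transform @ jnp.array([0.0, 1.0])

    # Differentiate with respect to dataclass fields like any pytree:
    grads = jax.grad(lambda o: jnp.sum(o.noise_scale**2))(odom)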

Status

Included in this repo for the disk task:

  • Smoother training & results
    • Training: python train_disk_fg.py --help
    • Evaluation: python cross_validate.py --experiment-paths ./experiments/disk/fg/**/
  • Filter baseline training & results
    • Training: python train_disk_ekf.py --help
    • Evaluation: python cross_validate.py --experiment-paths ./experiments/disk/ekf/**/
  • LSTM baseline training & results
    • Training: python train_disk_lstm.py --help
    • Evaluation: python cross_validate.py --experiment-paths ./experiments/disk/lstm/**/

And, for the visual odometry task:

  • Smoother training & results (including ablations)
    • Training: python train_kitti_fg.py --help
    • Evaluation: python cross_validate.py --experiment-paths ./experiments/kitti/fg/**/
  • EKF baseline training & results
    • Training: python train_kitti_ekf.py --help
    • Evaluation: python cross_validate.py --experiment-paths ./experiments/kitti/ekf/**/
  • LSTM baseline training & results
    • Training: python train_kitti_lstm.py --help
    • Evaluation: python cross_validate.py --experiment-paths ./experiments/kitti/lstm/**/

Note that **/ indicates a recursive glob in zsh. In bash 4+, this can be emulated by enabling the globstar option (shopt -s globstar).

We've done our best to make our research code easy to parse, but it's still being iterated on! If you have questions, suggestions, or any general comments, please reach out or file an issue.

Setup

We use Python 3.8 and miniconda for development.

  1. Any calls to CHOLMOD (via scikit-sparse, sometimes used for evaluation but never for training) will require SuiteSparse:

    # Mac
    brew install suite-sparse
    
    # Debian
    sudo apt-get install -y libsuitesparse-dev
  2. Dependencies can be installed via pip:

    pip install -r requirements.txt

    In addition to JAX and the first-party dependencies listed above, note that this also includes various other helpers:

    • datargs (currently forked) is super useful for building type-safe argument parsers.
    • torch's Dataset and DataLoader interfaces are used for training.
    • fannypack contains some utilities for downloading datasets, working with PDB, and polling repository commit hashes.

The requirements.txt provided will install the CPU version of JAX by default. For CUDA support, please see instructions from the JAX team.

Datasets

Datasets are synced from Google Drive and loaded via h5py automatically as needed. If you're interested in downloading them manually, see lib/kitti/data_loading.py and lib/disk/data_loading.py.
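
For a quick, illustrative look at a downloaded file — the filename and keys below are hypothetical; the real loading logic lives in the modules above:

    import h5py

    # Hypothetical filename for illustration; see lib/disk/data_loading.py
    # for the actual download paths and dataset keys.
    with h5py.File("disk_dataset.hdf5", "r") as f:
        print(list(f.keys()))  # inspect the available fields
        # field = f["some_key"][:]  # read a field into memory as needed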

Training

The naming convention for training scripts is as follows: train_{task}_{model_type}.py.

All of the training scripts provide a command-line interface for configuring experiment details and hyperparameters. The --help flag will summarize these settings and their default values. For example, to run the training script for factor graphs on the disk task, try:

> python train_disk_fg.py --help

Factor graph training script for disk task.

optional arguments:
  -h, --help            show this help message and exit
  --experiment-identifier EXPERIMENT_IDENTIFIER
                        (default: disk/fg/default_experiment/fold_{dataset_fold})
  --random-seed RANDOM_SEED
                        (default: 94305)
  --dataset-fold {0,1,2,3,4,5,6,7,8,9}
                        (default: 0)
  --batch-size BATCH_SIZE
                        (default: 32)
  --train-sequence-length TRAIN_SEQUENCE_LENGTH
                        (default: 20)
  --num-epochs NUM_EPOCHS
                        (default: 30)
  --learning-rate LEARNING_RATE
                        (default: 0.0001)
  --warmup-steps WARMUP_STEPS
                        (default: 50)
  --max-gradient-norm MAX_GRADIENT_NORM
                        (default: 10.0)
  --noise-model {CONSTANT,HETEROSCEDASTIC}
                        (default: CONSTANT)
  --loss {JOINT_NLL,SURROGATE_LOSS}
                        (default: SURROGATE_LOSS)
  --pretrained-virtual-sensor-identifier PRETRAINED_VIRTUAL_SENSOR_IDENTIFIER
                        (default: disk/pretrain_virtual_sensor/fold_{dataset_fold})

When run, training scripts serialize experiment configurations to an experiment_config.yaml file. You can find hyperparameters in the experiments/ directory for all results presented in our paper.
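
These files are standard YAML, so they can also be inspected programmatically. A minimal sketch, assuming PyYAML is installed, the default --experiment-identifier from above with fold 0, and that the serialized field names mirror the CLI flags:

    import yaml

    # Path assumes the default experiment identifier with dataset fold 0.
    path = "experiments/disk/fg/default_experiment/fold_0/experiment_config.yaml"
    with open(path) as f:
        config = yaml.safe_load(f)

    # Field names assumed to mirror the CLI flags; adjust as needed.
    print(config.get("batch_size"), config.get("learning_rate"))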

Evaluation

All evaluation metrics are recorded at train time. The cross_validate.py script can be used to compute metrics across folds:

# Summarize all experiments with means and standard errors of recorded metrics.
python cross_validate.py

# Include statistics for every fold -- this is much more data!
python cross_validate.py --disaggregate

# We can also glob for a partial set of experiments; for example, all of the
# disk experiments.
# Note that the ** wildcard may fail in bash; see above for a fix.
python cross_validate.py --experiment-paths ./experiments/disk/**/

Acknowledgements

We'd like to thank Rika Antonova, Kevin Zakka, Nick Heppert, Angelina Wang, and Philipp Wu for discussions and feedback on both our paper and codebase. Our software design also benefits from ideas from several open-source projects, including Sophus, GTSAM, Ceres Solver, minisam, and SwiftFusion.

This work is partially supported by the Toyota Research Institute (TRI) and Google. This article solely reflects the opinions and conclusions of its authors and not TRI, Google, or any entity associated with TRI or Google.
