A flexible submap-based framework towards spatio-temporally consistent volumetric mapping and scene understanding.

Panoptic Mapping

This package contains panoptic_mapping, a general framework for semantic volumetric mapping. We provide, among others, a submap-based approach that leverages panoptic scene understanding for adaptive, spatio-temporally consistent volumetric mapping, as well as regular, monolithic semantic mapping.

Multi-resolution 3D Reconstruction, active and inactive panoptic submaps for temporal consistency, online change detection, and more.

Table of Contents

Paper

Video

Setup

Datasets

Examples

Contributing

Paper

If you find this package useful for your research, please consider citing our paper:

  • Lukas Schmid, Jeffrey Delmerico, Johannes Schönberger, Juan Nieto, Marc Pollefeys, Roland Siegwart, and Cesar Cadena. "Panoptic Multi-TSDFs: a Flexible Representation for Online Multi-resolution Volumetric Mapping and Long-term Dynamic Scene Consistency." arXiv preprint arXiv:2109.10165 (2021). [ArXiv]
    @ARTICLE{schmid2021panoptic,
      title={Panoptic Multi-TSDFs: a Flexible Representation for Online Multi-resolution Volumetric Mapping and Long-term Dynamic Scene Consistency},
      author={Schmid, Lukas and Delmerico, Jeffrey and Sch{\"o}nberger, Johannes and Nieto, Juan and Pollefeys, Marc and Siegwart, Roland and Cadena, Cesar},
      journal={arXiv preprint arXiv:2109.10165},
      year={2021}
    }

Video

A short video overview explaining the approach will be released upon publication.

Setup

Installation instructions for Linux. The repository was developed on Ubuntu 18.04 with ROS Melodic and also tested on Ubuntu 20.04 with ROS Noetic.

Prerequisites

  1. If you have not already done so, install ROS (Desktop-Full is recommended). A minimal install sketch is provided after this list.

  2. If you have not already done so, create a catkin workspace with catkin tools:

    # Create a new workspace
    sudo apt-get install python-catkin-tools
    mkdir -p ~/catkin_ws/src
    cd ~/catkin_ws
    catkin init
    catkin config --extend /opt/ros/$ROS_DISTRO
    catkin config --cmake-args -DCMAKE_BUILD_TYPE=RelWithDebInfo
    catkin config --merge-devel
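
If ROS itself is still missing (step 1), the following is a minimal sketch for Ubuntu 20.04 / ROS Noetic; it assumes the ROS apt repository and keys are already configured as described in the official ROS installation guide:

    # Install ROS Desktop-Full (use ros-melodic-desktop-full on Ubuntu 18.04).
    sudo apt-get update
    sudo apt-get install ros-noetic-desktop-full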

Installation

  1. Install system dependencies:

    sudo apt-get install python-wstool python-catkin-tools
  2. Move to your catkin workspace:

    cd ~/catkin_ws/src
  3. Download the repository using SSH:

    git clone [email protected]:ethz-asl/panoptic_mapping.git
  4. Download and install package dependencies using rosinstall:

    • If you created a new workspace:
    wstool init . ./panoptic_mapping/panoptic_mapping.rosinstall
    wstool update
    • If you use an existing workspace (note that some dependencies require specific branches, which will be checked out):
    wstool merge -t . ./panoptic_mapping/panoptic_mapping.rosinstall
    wstool update
  5. Compile and source:

    catkin build panoptic_mapping_utils
    source ../devel/setup.bash

Datasets

The datasets described in the paper and used for the demo can be downloaded from the ASL Datasets website.

A utility script is provided to download the data directly:

roscd panoptic_mapping_utils
export FLAT_DATA_DIR="/home/$USER/Documents"  # Or whichever path you prefer.
chmod +x scripts/download_flat_dataset.sh
./scripts/download_flat_dataset.sh

Additional data to run the mapper on the 3RScan dataset will follow.

Examples

Running the Panoptic Mapper

This example explains how to run the Panoptic Multi-TSDF mapper on the flat dataset.

  1. First, download the flat dataset:

    roscd panoptic_mapping_utils
    export FLAT_DATA_DIR="/home/$USER/Documents"  # Or whichever path you prefer.
    chmod +x scripts/download_flat_dataset.sh
    ./scripts/download_flat_dataset.sh
    
  2. Set the data base_path in launch/run.launch (L10) and the file_name in config/mapper/flat_groundtruth.yaml (L15) to the downloaded path.

  3. Run the mapper:

    roslaunch panoptic_mapping_ros run.launch
    
  4. You should now see the map being incrementally built.

  5. After the map has finished building, you can save it:

    rosservice call /panoptic_mapper/save_map "file_path: '/path/to/run1.panmap'" 
    
  6. Terminate the mapper by pressing Ctrl+C. You can continue the experiment on run2 of the flat dataset by changing the base_path ending in launch/run.launch (L10) to run2, and setting load_map and load_path in launch/run.launch (L26-27) to true and /path/to/run1.panmap, respectively. Optionally, you can also set the color_mode in config/mapper/flat_groundtruth.yaml (L118) to 'change' to better highlight the change detection at work. A hedged command-line alternative to editing run.launch is sketched after this list.

    roslaunch panoptic_mapping_ros run.launch
    
  7. You should now see the map being updated based on the first run.
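
As a convenience for step 6, the relaunch can also be done with roslaunch argument overrides. This sketch assumes that base_path, load_map, and load_path are declared as <arg>s in launch/run.launch (an assumption based on the line references above, not verified against the file); the file_name and color_mode entries in config/mapper/flat_groundtruth.yaml still have to be edited by hand, and the dataset path below is a placeholder:

    # Hypothetical relaunch for run2, loading the map saved from run1.
    # Only valid if these names are exposed as <arg>s in run.launch;
    # otherwise edit the file directly as described in step 6.
    roslaunch panoptic_mapping_ros run.launch \
      base_path:=/path/to/flat_dataset/run2 \
      load_map:=true \
      load_path:=/path/to/run1.panmap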

Monolithic Semantic Mapping

This example will follow shortly.

Running the RIO Dataset

This example will follow shortly.

Contributing

panoptic_mapping is an open-source project; any contributions are welcome!

For issues, bugs, or suggestions, please open a GitHub Issue.

To add to this repository:

  • Please employ the feature-branch workflow.
  • Set up our auto-formatter for a coherent style (we follow the Google style guide):
    # Download the linter
    cd <linter_dest>
    git clone [email protected]:ethz-asl/linter.git
    cd linter
    echo ". $(realpath setup_linter.sh)" >> ~/.bashrc
    bash
    roscd panoptic_mapping/..
    init_linter_git_hooks
    # You're all set to go!
    
  • Please open a Pull Request for your changes.
  • Thank you for contributing!