Semantic Meshes

A framework for annotating 3D meshes using the predictions of a 2D semantic segmentation model.

License: MIT

Paper

If you find this framework useful in your research, please consider citing our paper (arXiv:2111.11103):

@misc{fervers2021improving,
      title={Improving Semantic Image Segmentation via Label Fusion in Semantically Textured Meshes},
      author={Florian Fervers and Timo Breuer and Gregor Stachowiak and Sebastian Bullinger and Christoph Bodensteiner and Michael Arens},
      year={2021},
      eprint={2111.11103},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}

Workflow

  1. Reconstruct a mesh of your scene from a set of images (e.g. using Colmap).
  2. Send all undistorted images through your segmentation model (e.g. from tfcv or image-segmentation-keras) to produce 2D semantic annotation images; a minimal sketch of this step is shown below the list.
  3. Project all 2D annotations into the 3D mesh and fuse conflicting predictions.
  4. Render the annotated mesh from original camera poses to produce new 2D consistent annotation images, or save it as a colorized ply file.
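
A minimal sketch of step 2, with a placeholder predictor standing in for your segmentation model (the function, the image path, and the saved .npy layout are illustrative assumptions, not part of this repository). It produces the per-pixel class probability maps that step 3 fuses into the mesh:

import glob
import imageio
import numpy as np

def predictor(image):
    # Placeholder: replace with a real segmentation model (e.g. from tfcv or
    # image-segmentation-keras) that returns per-pixel softmax probabilities.
    classes = 19  # e.g. Cityscapes
    return np.full(image.shape[:2] + (classes,), 1.0 / classes)

for image_file in sorted(glob.glob("colmap/dense/images/*.png")):
    image = imageio.imread(image_file)
    probabilities = predictor(image)  # shape: (height, width, classes)
    np.save(image_file + ".probabilities.npy", probabilities)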

Example output for a traffic scene with annotations produced by a model that was trained on Cityscapes:

(Figure: two rendered views of the annotated mesh)

Usage

We provide a Python interface that enables easy integration with numpy and machine learning frameworks like TensorFlow. A full example script is provided in colorize_cityscapes_mesh.py that annotates a mesh using a segmentation model pretrained on Cityscapes. The model is downloaded automatically and the prediction is performed on the fly.

import semantic_meshes
import imageio
import numpy as np

...

# Load a mesh from ply file
mesh = semantic_meshes.data.Ply(args.input_ply)
# Instantiate a triangle renderer for the mesh
renderer = semantic_meshes.render.triangles(mesh)
# Load colmap workspace for camera poses
colmap_workspace = semantic_meshes.data.Colmap(args.colmap)
# Instantiate an aggregator for aggregating the 2D input annotations per 3D primitive
aggregator = semantic_meshes.fusion.MeshAggregator(primitives=renderer.getPrimitivesNum(), classes=19)

...

# Process all input images
for image_file in image_files:
    # Load image from file
    image = imageio.imread(image_file)
    ...
    # Predict class probability distributions for all pixels in the input image
    prediction = predictor(image)
    ...
    # Render the mesh from the pose of the given image
    # This returns an image that contains the index of the projected mesh primitive per pixel
    primitive_indices, _ = renderer.render(colmap_workspace.getCamera(image_file))
    ...
    # Aggregate the class probability distributions of all pixels per primitive
    aggregator.add(primitive_indices, prediction)

# After all images have been processed, the mesh contains a consistent semantic representation of the environment
aggregator.get() # Returns an array that contains the class probability distribution for each primitive

...
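# Hedged sketch (not documented API): derive primitive_colors from the aggregated
# class probabilities, assuming aggregator.get() returns a numpy array of shape
# (num_primitives, classes) as described above and mesh.save accepts one RGB
# triple per primitive.
probabilities = aggregator.get()
labels = np.argmax(probabilities, axis=-1)  # most likely class per primitive
# Placeholder palette with one RGB color per class; for Cityscapes one would
# use the official class colors instead of random ones.
palette = np.random.default_rng(0).integers(0, 256, size=(probabilities.shape[1], 3), dtype=np.uint8)
primitive_colors = palette[labels]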

# Save colorized mesh to ply
mesh.save(args.output_ply, primitive_colors)
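
Workflow step 4 can also produce new, view-consistent 2D annotation images instead of (or in addition to) the colorized ply. This is not shown in colorize_cityscapes_mesh.py; the following is a hedged sketch built only from the calls used above (labels is the per-primitive argmax from the sketch above; how pixels that are not covered by any mesh primitive are marked in primitive_indices is not documented here and would need masking):

# Render consistent 2D annotation images from the original camera poses.
for image_file in image_files:
    primitive_indices, _ = renderer.render(colmap_workspace.getCamera(image_file))
    # Look up the fused per-primitive class label for every pixel.
    pixel_labels = labels[np.asarray(primitive_indices, dtype=np.int64)]
    imageio.imwrite(image_file + ".annotation.png", pixel_labels.astype(np.uint8))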

Docker

If you want to skip the installation and jump right in, we provide a Dockerfile that can be used without any further setup. Otherwise, see Installation.

  1. Install docker and gpu support
  2. Build the docker image: docker build -t semantic-meshes https://github.com/fferflo/semantic-meshes.git#master
    • If your system is using a proxy, add: --build-arg HTTP_PROXY=... --build-arg HTTPS_PROXY=...
  3. Open a command prompt in the docker image and mount a folder from your host system (HOST_PATH) that contains your colmap workspace into the docker image (DOCKER_PATH): docker run -v /HOST_PATH:/DOCKER_PATH --gpus all -it semantic-meshes bash
  4. Run the provided example script inside the docker image to annotate the mesh with Cityscapes annotations: colorize_cityscapes_mesh.py --colmap /DOCKER_PATH/colmap/dense/sparse --input_ply /DOCKER_PATH/colmap/dense/meshed-delaunay.ply --images /DOCKER_PATH/colmap/dense/images --output_ply /DOCKER_PATH/colorized_mesh.ply

Running the repository inside a docker image is significantly slower than running it on the host system (roughly 12 sec/image inside Docker vs. 2 sec/image on the host, measured on an RTX 6000).

Installation

Dependencies

  • CUDA: https://developer.nvidia.com/cuda-downloads
  • OpenMP: On Ubuntu: sudo apt install libomp-dev
  • Python 3
  • Boost: Requires the Python and NumPy components of the Boost library, which have to be compiled for the Python version that you are using. If you're lucky, your OS ships compatible Boost and Python 3 versions. Otherwise, compile Boost from source and make sure to include the --with-python=python3 switch.

Build

The repository contains CMake code that builds the project and provides a Python package in the build folder that can be installed using pip.

CMake downloads, builds and installs all other dependencies automatically. If you don't want to clutter your global system directories, add -DCMAKE_INSTALL_PREFIX=... to install to a local directory.

The framework has to be compiled for a specific number of classes (e.g. 19 for Cityscapes, or 2 for a binary segmentation). Add a semicolon-separated list via -DCLASSES_NUMS="2;19;..." for all numbers of classes that you want to use. A longer list significantly increases the compilation time.

An example build:

git clone https://github.com/fferflo/semantic-meshes
cd semantic-meshes
mkdir build
mkdir install
cd build
cmake -DCMAKE_INSTALL_PREFIX=../install -DCLASSES_NUMS=19 ..
make -j8
make install # Installs to the local install directory
pip install ./python
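
After installation, a quick sanity check can be run in Python. This is only a suggestion and relies solely on the submodule names that appear in the usage example above:

# Verify that the package imports and exposes the submodules used above.
import semantic_meshes
print(semantic_meshes.data, semantic_meshes.render, semantic_meshes.fusion)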

Build with incompatible Boost or Python versions

Alternatively, in case the Boost or Python versions shipped by your OS do not match the requirements of semantic-meshes, we provide an installation script that fetches and locally installs compatible versions of these dependencies: install.sh. Since the script builds Python from source, make sure to first install all optional Python dependencies that you require (see e.g. https://github.com/python/cpython/blob/main/.github/workflows/posix-deps-apt.sh).
