
HDMapNet_devkit

Devkit for HDMapNet.

HDMapNet: A Local Semantic Map Learning and Evaluation Framework

Qi Li, Yue Wang, Yilun Wang, Hang Zhao

[Paper] [Project Page] [5-min video]

Abstract: Estimating local semantics from sensory inputs is a central component of high-definition map construction in autonomous driving. However, traditional pipelines require a vast amount of human effort and resources to annotate and maintain the semantics in the map, which limits their scalability. In this paper, we introduce the problem of local semantic map learning, which dynamically constructs vectorized semantics from onboard sensor observations. We also introduce a local semantic map learning method, dubbed HDMapNet. HDMapNet encodes image features from surrounding cameras and/or point clouds from LiDAR, and predicts vectorized map elements in the bird's-eye view. We benchmark HDMapNet on the nuScenes dataset and show that it outperforms the baseline methods in all settings. Of note, our fusion-based HDMapNet outperforms existing methods by more than 50% on all metrics. In addition, we develop semantic-level and instance-level metrics to evaluate map learning performance. Finally, we show that our method can predict a locally consistent map. By introducing the method and metrics, we invite the community to study this novel map learning problem. Code and an evaluation kit will be released to facilitate future development.

Questions/Requests: Please file an issue or email me at [email protected].

Preparation

  1. Download the nuScenes dataset and place it under the dataset/ folder (see the layout sketch after this list).

  2. Install dependencies by running

pip install -r requirement.txt
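For reference, a standard nuScenes download unpacked under dataset/ typically looks like the sketch below; the metadata folder name (here v1.0-trainval) depends on which split you download, and the samples/GT folder is produced later by the visualization script:

dataset/
    nuScenes/
        maps/
        samples/
        sweeps/
        v1.0-trainval/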

Vectorization

Run python vis_label.py for a demo of the vectorized labels. The visualizations are saved to dataset/nuScenes/samples/GT.

Evaluation

Run python evaluate.py --result_path [submission file] for evaluation. The script accepts either vectorized or rasterized maps as input. For a vectorized map, we first rasterize the vectors onto a map and then evaluate. For a rasterized map, make sure the line width is 1.
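As a rough illustration of the rasterization step (not the devkit's actual code), the sketch below draws one vectorized line onto a blank BEV canvas with a 1-pixel width using OpenCV; the canvas size and the assumption that points are already in pixel coordinates are placeholders:

import numpy as np
import cv2  # opencv-python

def rasterize_line(pts, canvas_hw=(200, 400)):
    """Draw one ordered polyline with line width 1 on a blank BEV canvas.

    pts: (N, 2) points, assumed already scaled to pixel coordinates.
    The (200, 400) canvas size is a placeholder, not the devkit's setting.
    """
    canvas = np.zeros(canvas_hw, dtype=np.uint8)
    pts = np.round(np.asarray(pts)).astype(np.int32).reshape(-1, 1, 2)
    # thickness=1 matches the "line width is 1" requirement above.
    cv2.polylines(canvas, [pts], isClosed=False, color=1, thickness=1)
    return canvas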

Below is the format for a vectorized submission:

vectorized_submission {
    "meta": {
        "use_camera":   <bool>  -- Whether this submission uses camera data as an input.
        "use_lidar":    <bool>  -- Whether this submission uses lidar data as an input.
        "use_radar":    <bool>  -- Whether this submission uses radar data as an input.
        "use_external": <bool>  -- Whether this submission uses external data as an input.
        "vector":        true   -- Whether this submission uses vector format.
    },
    "results": {
        sample_token <str>: List[vectorized_line] -- Maps each sample_token to a list of vectorized lines.
    }
}

vectorized_line {
    "pts":              List[<float, 2>] -- Ordered points to define the vectorized line.
    "pts_num":          <int>,           -- Number of points in this line.
    "type":             <0, 1, 2>        -- Type of the line: 0: ped; 1: divider; 2: boundary
    "confidence_level": <float>          -- Confidence level for prediction (used by Average Precision)
}
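To make the vectorized format concrete, here is a minimal sketch that assembles a one-line submission and writes it to JSON; the sample token and coordinates are dummy placeholders, not values from the dataset:

import json

submission = {
    "meta": {
        "use_camera": True,        # this demo assumes a camera-only model
        "use_lidar": False,
        "use_radar": False,
        "use_external": False,
        "vector": True,
    },
    "results": {
        # Placeholder key; real keys are nuScenes sample_token strings.
        "dummy_sample_token": [
            {
                "pts": [[10.0, 2.5], [12.0, 2.6], [14.0, 2.7]],  # ordered (x, y) points
                "pts_num": 3,
                "type": 1,                 # 0: ped; 1: divider; 2: boundary
                "confidence_level": 0.9,   # used when computing Average Precision
            }
        ]
    },
}

with open("submission.json", "w") as f:
    json.dump(submission, f)

The resulting file can then be passed to evaluate.py via --result_path.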

For a rasterized submission, the format is:

rasterized_submission {
    "meta": {
        "use_camera":   <bool>  -- Whether this submission uses camera data as an input.
        "use_lidar":    <bool>  -- Whether this submission uses lidar data as an input.
        "use_radar":    <bool>  -- Whether this submission uses radar data as an input.
        "use_external": <bool>  -- Whether this submission uses external data as an input.
        "vector":       false   -- Whether this submission uses vector format.
    },
    "results": {
        sample_token <str>: {   -- Maps each sample_token to its rasterized prediction.
            "map": [<array, (C, H, W)>],      -- Raster map of prediction (C=0: ped; 1: divider; 2: boundary). The value indicates the line idx (starting from 1).
            "confidence_level": Array[float], -- confidence_level[i] is the confidence level for the i^th line (starting from 1).
        }
    }
}
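Similarly, a minimal sketch of a rasterized submission, assuming a (C, H, W) map with C=3 classes; the 200x400 canvas size is a placeholder, and pixel values are instance indices starting from 1:

import numpy as np

C, H, W = 3, 200, 400                    # placeholder BEV size, not the devkit's setting
raster = np.zeros((C, H, W), dtype=np.uint8)
raster[1, 100, 50:150] = 1               # one divider instance, drawn with line width 1

rasterized_submission = {
    "meta": {
        "use_camera": True,
        "use_lidar": False,
        "use_radar": False,
        "use_external": False,
        "vector": False,
    },
    "results": {
        "dummy_sample_token": {          # placeholder token
            "map": raster.tolist(),      # pixel values are line indices (starting from 1)
            "confidence_level": [0.9],   # one confidence per line instance
        }
    },
}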

Run python export_to_json.py to generate a demo vectorized submission, or python export_to_json.py --raster for a rasterized one.

Citation

If you find this useful in your research, please consider citing:

@misc{li2021hdmapnet,
      title={HDMapNet: A Local Semantic Map Learning and Evaluation Framework}, 
      author={Qi Li and Yue Wang and Yilun Wang and Hang Zhao},
      year={2021},
      eprint={2107.06307},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
Owner

Tsinghua MARS Lab (MARS Lab at IIIS, Tsinghua University)