(pytorch) Gen-LaneNet: a generalized and scalable approach for 3D lane detection

Introduction

This is a pytorch implementation of Gen-LaneNet, which predicts 3D lanes from a single image. Specifically, Gen-LaneNet is a unified network solution that solves image encoding, spatial transform of features and 3D lane prediction simultaneously. The method refers to the ECCV 2020 paper:

'Gen-LaneNet: a generalized and scalable approach for 3D lane detection', Y. Guo, et al., ECCV 2020. [eccv][arxiv]

Key features:

  • A geometry-guided lane anchor representation generalizable to novel scenes.

  • A scalable two-stage framework that decouples the learning of image segmentation subnetwork and geometry encoding subnetwork.

  • A synthetic dataset for 3D lane detection [repo] [data].

Another baseline

This repo also includes an unofficial implementation of '3D-LaneNet' in pytorch for comparison. The method refers to

"3d-lanenet: end-to-end 3d multiple lane detection", N. Garnet, etal., ICCV 2019. [paper]

Requirements

If you have Anaconda installed, you can set up the environment directly from the provided environment file.

conda env update --file environment.yaml

The most important packages include (a quick import check follows the list):

  • opencv-python 4.1.0.25
  • pytorch 1.4.0
  • torchvision 0.5.0
  • tensorboard 1.15.0
  • tensorboardx 1.7
  • py3-ortools 5.1.4041
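
To confirm that the environment is active and the key packages are importable, a minimal check (not part of the repo) is:

import cv2
import torch
import torchvision

print('opencv:', cv2.__version__)
print('pytorch:', torch.__version__, '| cuda available:', torch.cuda.is_available())
print('torchvision:', torchvision.__version__)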

Data preparation

The 3D lane detection method is trained and tested on the 3D lane synthetic dataset. Running the demo code on a single image should work directly. However, repeating the training, testing and evaluation requires preparing the dataset:

If you prefer to build your own data splits from the dataset, please follow the steps described in the 3D lane synthetic dataset repository. All the necessary code is already included here.

Run the Demo

python main_demo_GenLaneNet_ext.py

Specifically, this code predicts 3D lanes from an image given a known camera height and pitch angle. Pretrained models for the segmentation subnetwork and the 3D geometry subnetwork are loaded, together with the anchor normalization parameters computed on the training set. The demo code produces lane predictions from a single image, visualized in the following figure.

The lane results are visualized in three coordinate frames: the image plane, the virtual top-view, and the ego-vehicle coordinate frame. The lane-lines are shown in the top row and the center-lines in the bottom row.
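
For intuition about how the known camera height and pitch tie these frames together, the following numpy sketch builds the homography from the flat-ground plane (ego frame: x right, y forward, z up) to the image. It is an illustration only, not the repo's demo code, and the intrinsic matrix below is an assumed placeholder rather than the dataset's calibration.

import numpy as np

def ground_to_image_homography(K, cam_height, cam_pitch):
    # Maps flat-ground points (x, y, z=0) in the ego frame (x right, y forward,
    # z up) to homogeneous pixel coordinates, for a forward-looking camera
    # mounted cam_height meters above the ground and pitched down by cam_pitch (rad).
    c, s = np.cos(cam_pitch), np.sin(cam_pitch)
    E = np.array([[1.0, 0.0, 0.0],
                  [0.0,  -s, cam_height * c],
                  [0.0,   c, cam_height * s]])
    return K @ E

# assumed placeholder intrinsics, not the dataset's calibration
K = np.array([[2015.0,    0.0, 960.0],
              [   0.0, 2015.0, 540.0],
              [   0.0,    0.0,   1.0]])
H = ground_to_image_homography(K, cam_height=1.55, cam_pitch=np.deg2rad(3.0))
p = H @ np.array([0.0, 40.0, 1.0])   # a ground point 40 m straight ahead
u, v = p[0] / p[2], p[1] / p[2]      # its pixel location in the image
print(round(u, 1), round(v, 1))

The same geometry underlies the virtual top-view in which the geometry-guided lane anchors are defined.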

How to train the model

Step 1: Train the segmentation subnetwork

Training Gen-LaneNet requires first training the segmentation subnetwork, ERFNet.

  • The training of the ERFNet is based on a pytorch implementation [repo] modified to train the model on the 3D lane synthetic dataset.

  • The trained model should be saved as 'pretrained/erfnet_model_sim3d.tar'; a pre-trained model is already included (a generic save/load sketch follows the list).
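
If you retrain ERFNet yourself, the exact checkpoint contents depend on how the training script saves them; the following is only a generic sketch of saving and restoring a state-dict checkpoint, with a tiny placeholder module and a hypothetical demo path so the snippet runs on its own:

import torch
import torch.nn as nn

model = nn.Conv2d(3, 2, kernel_size=1)   # tiny placeholder standing in for ERFNet

# hypothetical demo path; the repo expects the real model at
# 'pretrained/erfnet_model_sim3d.tar', whose exact contents depend on the trainer
demo_path = 'erfnet_checkpoint_demo.tar'
torch.save({'state_dict': model.state_dict()}, demo_path)   # assumption: dict wrapping the state_dict

checkpoint = torch.load(demo_path, map_location='cpu')
model.load_state_dict(checkpoint['state_dict'])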

Step 2: Train the 3D-geometry subnetwork

python main_train_GenLaneNet_ext.py
  • Set 'args.dataset_name' to the data split to train on.
  • Set 'args.dataset_dir' to the folder containing the raw dataset.
  • The trained model will be saved in the directory corresponding to the chosen data split and model name, e.g. 'data_splits/illus_chg/Gen_LaneNet_ext/model*'.
  • The anchor offset std will be recorded for the chosen data split at the same time, e.g. 'data_splits/illus_chg/geo_anchor_std.json' (a toy sketch of this normalization follows the list).
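
For a rough sense of what the recorded std is for: the geometry subnetwork regresses lane offsets that are normalized by per-anchor standard deviations computed on the training split, and the normalization is undone on the predictions. The toy numpy sketch below uses made-up numbers and is not the repo's implementation:

import numpy as np

# made-up lateral offsets of lane points from their anchors,
# shape (num_training_lanes, num_y_steps)
rng = np.random.default_rng(0)
raw_x_offsets = rng.normal(scale=0.8, size=(1000, 10))

x_off_std = raw_x_offsets.std(axis=0)    # recorded once per split (cf. geo_anchor_std.json)
normalized = raw_x_offsets / x_off_std   # targets the geometry subnetwork regresses
recovered = normalized * x_off_std       # de-normalization applied at test time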

The training progress can be monitored by tensorboard as follows.

cd data_splits/Gen_LaneNet_ext
tensorboard --logdir ./

Batch testing

python main_test_GenLaneNet_ext.py
  • Set 'args.dataset_name' to the data split to test on.
  • Set 'args.dataset_dir' to the folder containing the raw dataset.

The batch testing code not only produces the prediction results, e.g. 'data_splits/illus_chg/Gen_LaneNet_ext/test_pred_file.json', but also performs a full-range precision-recall evaluation to produce AP and the maximum F-score.
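
To take a quick look at the produced predictions, the file can be inspected as below; this assumes it follows the dataset's JSON-lines convention (one record per image), and the available keys are whatever the test script writes:

import json

pred_file = 'data_splits/illus_chg/Gen_LaneNet_ext/test_pred_file.json'
with open(pred_file) as f:
    record = json.loads(f.readline())   # assumption: one JSON record per line
print('keys in one prediction record:', sorted(record.keys()))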

Other methods

In './experiments', we include the training code for other variants of Gen-LaneNet, for the baseline method 3D-LaneNet, and for its extended version integrated with the new anchor proposed in Gen-LaneNet. Interested users are welcome to repeat the full set of ablation studies reported in the Gen-LaneNet paper. For example, to train 3D-LaneNet:

cd experiments
python main_train_3DLaneNet.py

Evaluation

Stand-alone evaluation can also be performed.

cd tools
python eval_3D_lane.py

Basically, you need to set 'method_name' and 'data_split' properly to compare the predicted lanes against the ground-truth lanes. For evaluation details, refer to the 3D lane synthetic dataset repository or the Gen-LaneNet paper. Overall, the evaluation metrics include (a generic sketch of AP and the F-score follows the list):

  • Average Precision (AP)
  • max F-score
  • x-error in close range (0-40 m)
  • x-error in far range (40-100 m)
  • z-error in close range (0-40 m)
  • z-error in far range (40-100 m)
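
As a generic reminder of how the two summary numbers relate to a precision-recall sweep over the detection confidence threshold (this is not the repo's evaluation code, which also defines the lane matching criterion):

import numpy as np

# made-up precision/recall pairs from a confidence-threshold sweep
recall    = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 0.9])
precision = np.array([1.0, 0.98, 0.95, 0.90, 0.82, 0.70])

f_scores = 2 * precision * recall / np.maximum(precision + recall, 1e-6)
max_f = f_scores.max()             # max F-score over the sweep
ap = np.trapz(precision, recall)   # AP as the area under the P-R curve
print(round(max_f, 3), round(ap, 3))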

We show the evaluation results comparing two methods:

  • "3d-lanenet: end-to-end 3d multiple lane detection", N. Garnet, etal., ICCV 2019
  • "Gen-lanenet: a generalized and scalable approach for 3D lane detection", Y. Guo, etal., Arxiv, 2020 (GenLaneNet_ext in code)

Comparisons are conducted under three distinct splits of the dataset. For simplicity, only lane-line results are reported here. The results produced by this code may differ marginally from those reported in the paper due to different random splits.

  • Standard

    Method       AP    F-Score  x error near (m)  x error far (m)  z error near (m)  z error far (m)
    3D-LaneNet   89.3  86.4     0.068             0.477            0.015             0.202
    Gen-LaneNet  90.1  88.1     0.061             0.496            0.012             0.214

  • Rare Subset

    Method       AP    F-Score  x error near (m)  x error far (m)  z error near (m)  z error far (m)
    3D-LaneNet   74.6  72.0     0.166             0.855            0.039             0.521
    Gen-LaneNet  79.0  78.0     0.139             0.903            0.030             0.539

  • Illumination Change

    Method       AP    F-Score  x error near (m)  x error far (m)  z error near (m)  z error far (m)
    3D-LaneNet   74.9  72.5     0.115             0.601            0.032             0.230
    Gen-LaneNet  87.2  85.3     0.074             0.538            0.015             0.232

Visualization

Visual comparisons against the ground truth can be generated per image by setting 'vis = True' in 'tools/eval_3D_lane.py'. We show two examples for each method under the data split involving illumination change.

  • 3D-LaneNet

  • Gen-LaneNet

Citation

Please cite the paper in your publications if it helps your research:

@inproceedings{guo2020gen,
  title={Gen-LaneNet: A Generalized and Scalable Approach for 3D Lane Detection},
  author={Guo, Yuliang and Chen, Guang and Zhao, Peitao and Zhang, Weide and Miao, Jinghao and Wang, Jingao and Choe, Tae Eun},
  booktitle={Computer Vision - {ECCV} 2020 - 16th European Conference},
  year={2020}
}

Copyright and License

The copyright of this work belongs to Baidu Apollo, which is provided under the Apache-2.0 license.
