PyTorch implementation of 'Gen-LaneNet: a generalized and scalable approach for 3D lane detection'


Introduction

This is a pytorch implementation of Gen-LaneNet, which predicts 3D lanes from a single image. Specifically, Gen-LaneNet is a unified network solution that solves image encoding, spatial transform of features and 3D lane prediction simultaneously. The method refers to the ECCV 2020 paper:

'Gen-LaneNet: a generalized and scalable approach for 3D lane detection', Y. Guo, et al., ECCV 2020. [eccv][arxiv]

Key features:

  • A geometry-guided lane anchor representation generalizable to novel scenes.

  • A scalable two-stage framework that decouples the learning of image segmentation subnetwork and geometry encoding subnetwork.

  • A synthetic dataset for 3D lane detection [repo] [data].

Another baseline

This repo also includes an unofficial implementation of '3D-LaneNet' in pytorch for comparison. The method refers to

"3d-lanenet: end-to-end 3d multiple lane detection", N. Garnet, etal., ICCV 2019. [paper]

Requirements

If you have Anaconda installed, you can create the environment directly from the provided environment file.

conda env update --file environment.yaml

The important packages include:

  • opencv-python 4.1.0.25
  • pytorch 1.4.0
  • torchvision 0.5.0
  • tensorboard 1.15.0
  • tensorboardx 1.7
  • py3-ortools 5.1.4041
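
As an optional sanity check, a snippet like the following confirms that the core packages import correctly and roughly match the pinned versions:

# Optional sanity check: verify that the key packages import and report their versions.
import cv2
import torch
import torchvision

print("opencv-python:", cv2.__version__)          # expected around 4.1.0
print("pytorch:      ", torch.__version__)        # expected around 1.4.0
print("torchvision:  ", torchvision.__version__)  # expected around 0.5.0
print("CUDA available:", torch.cuda.is_available())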

Data preparation

The 3D lane detection method is trained and tested on the 3D lane synthetic dataset. Running the demo code on a single image should work directly. However, repeating the training, testing and evaluation requires preparing the dataset:

If you prefer to build your own data splits from the dataset, please follow the steps described in the 3D lane synthetic dataset repository. All the necessary code is already included in this repo.
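
Before training or testing, a quick check such as the following can confirm that the expected locations exist; the dataset path is a placeholder for your local copy of the 3D lane synthetic dataset, and 'data_splits/illus_chg' is just one example split.

# Optional: confirm the expected locations exist before training or testing.
# 'dataset_dir' is a placeholder; point it at your local copy of the
# 3D lane synthetic dataset. 'data_splits/illus_chg' is one example split.
import os

dataset_dir = "/path/to/3D_lane_synthetic_dataset"   # placeholder path
split_dir = "data_splits/illus_chg"                  # example data split folder

for p in (dataset_dir, split_dir):
    print(p, "->", "found" if os.path.isdir(p) else "MISSING")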

Run the Demo

python main_demo_GenLaneNet_ext.py

Specifically, this code predicts 3D lanes from a single image, given a known camera height and pitch angle. Pretrained models for the segmentation subnetwork and the 3D geometry subnetwork are loaded, together with the anchor normalization parameters computed on the training set. The demo produces lane predictions from a single image, visualized in the figure below.

The lane results are visualized in three coordinate frames: the image plane, the virtual top view, and the ego-vehicle coordinate frame. Lane-lines are shown in the top row and center-lines in the bottom row.
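
For orientation, the two-stage flow of the demo can be sketched as below. SegStub and GeoStub are dummy stand-ins, not the repository's actual classes, so the sketch runs on its own; the real pipeline lives in 'main_demo_GenLaneNet_ext.py', and all shapes and values here are illustrative only.

# Conceptual sketch of the two-stage Gen-LaneNet inference flow used by the demo.
# SegStub / GeoStub are stand-ins for the segmentation and 3D geometry subnetworks.
import torch
import torch.nn as nn

class SegStub(nn.Module):
    """Stand-in for the ERFNet segmentation subnetwork."""
    def forward(self, img):                               # img: (B, 3, H, W)
        return torch.sigmoid(img.mean(1, keepdim=True))   # (B, 1, H, W) lane mask

class GeoStub(nn.Module):
    """Stand-in for the 3D geometry subnetwork; the real network conditions on
    the camera height and pitch to place the geometry-guided anchors."""
    def forward(self, mask, cam_height, cam_pitch):
        b = mask.shape[0]
        return torch.zeros(b, 16, 3)                      # (B, n_anchors, [x_off, z, vis]), illustrative shape

seg_net, geo_net = SegStub(), GeoStub()
image = torch.rand(1, 3, 360, 480)                        # dummy input image
cam_height, cam_pitch = 1.5, 0.0                          # dummy known extrinsics

with torch.no_grad():
    lane_mask = seg_net(image)                            # stage 1: image -> lane segmentation
    anchors = geo_net(lane_mask, cam_height, cam_pitch)   # stage 2: mask -> 3D lane anchors
# In the real demo, the anchors are then de-normalized with the stored anchor std
# and visualized in the image plane, top view, and ego-vehicle frame.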

How to train the model

Step 1: Train the segmentation subnetwork

Training Gen-LaneNet requires first training the segmentation subnetwork, ERFNet.

  • The training of the ERFNet is based on a pytorch implementation [repo] modified to train the model on the 3D lane synthetic dataset.

  • The trained model should be saved as 'pretrained/erfnet_model_sim3d.tar'. A pre-trained model is already included.
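
To quickly verify that the included checkpoint is readable, a minimal sketch like the following can be used; it assumes nothing about the checkpoint's internal structure beyond it being loadable with torch.load.

# Optional: inspect the pretrained segmentation checkpoint shipped with the repo.
import torch

checkpoint = torch.load("pretrained/erfnet_model_sim3d.tar", map_location="cpu")
print(type(checkpoint))
if isinstance(checkpoint, dict):
    print(list(checkpoint.keys()))   # list top-level keys without assuming a schema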

Step 2: Train the 3D-geometry subnetwork

python main_train_GenLaneNet_ext.py
  • Set 'args.dataset_name' to the data split you want to train on.
  • Set 'args.dataset_dir' to the folder holding the raw dataset.
  • The trained model will be saved in the directory corresponding to the chosen data split and model name, e.g. 'data_splits/illus_chg/Gen_LaneNet_ext/model*'.
  • The anchor offset std is recorded for the chosen data split at the same time, e.g. 'data_splits/illus_chg/geo_anchor_std.json' (see the sketch below).
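
As referenced above, here is a minimal sketch for inspecting the recorded anchor std file; it only assumes the file is valid JSON and does not rely on any particular schema.

# Optional: inspect the anchor offset std recorded during training.
import json

with open("data_splits/illus_chg/geo_anchor_std.json") as f:
    anchor_std = json.load(f)
print(anchor_std)   # print the stored normalization statistics as-is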

The training progress can be monitored with tensorboard as follows.

cd data_splits/illus_chg/Gen_LaneNet_ext
tensorboard --logdir ./

Batch testing

python main_test_GenLaneNet_ext.py
  • Set 'args.dataset_name' to the data split you want to test on.
  • Set 'args.dataset_dir' to the folder holding the raw dataset.

The batch testing code not only produces the prediction results, e.g. 'data_splits/illus_chg/Gen_LaneNet_ext/test_pred_file.json', but also performs a full-range precision-recall evaluation to produce AP and the max F-score.

Other methods

In './experiments', we include training code for other variants of the Gen-LaneNet model, for the baseline method 3D-LaneNet, and for an extended 3D-LaneNet integrated with the new anchor proposed in Gen-LaneNet. Interested users are welcome to repeat the full set of ablation studies reported in the Gen-LaneNet paper. For example, to train 3D-LaneNet:

cd experiments
python main_train_3DLaneNet.py

Evaluation

Stand-alone evaluation can also be performed.

cd tools
python eval_3D_lane.py

Basically, you need to set 'method_name' and 'data_split' properly to compare the predicted lanes against the ground-truth lanes. For evaluation details, refer to the 3D lane synthetic dataset repository or the Gen-LaneNet paper. Overall, the evaluation metrics include the following (a small sketch of how AP and max F-score summarize a precision-recall curve appears after the list):

  • Average Precision (AP)
  • max F-score
  • x-error in close range (0-40 m)
  • x-error in far range (40-100 m)
  • z-error in close range (0-40 m)
  • z-error in far range (40-100 m)
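
For reference only (this is not the repository's evaluation code), the sketch below illustrates how AP and max F-score summarize a precision-recall curve obtained by sweeping the lane confidence threshold; the precision/recall values here are made up for illustration.

# Illustration: deriving AP and max F-score from a precision-recall curve.
import numpy as np

# hypothetical precision/recall pairs at decreasing confidence thresholds
precision = np.array([1.00, 0.95, 0.90, 0.84, 0.75])
recall    = np.array([0.10, 0.35, 0.60, 0.78, 0.88])

# trapezoidal area under the PR curve (one common approximation of AP)
ap = float(np.sum(np.diff(recall) * (precision[1:] + precision[:-1]) / 2))
# F-score at each operating point; the best one is the max F-score
f_scores = 2 * precision * recall / (precision + recall + 1e-12)
print(f"AP = {ap:.3f}, max F-score = {f_scores.max():.3f}")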

We show the evaluation results comparing two methods:

  • "3d-lanenet: end-to-end 3d multiple lane detection", N. Garnet, etal., ICCV 2019
  • "Gen-lanenet: a generalized and scalable approach for 3D lane detection", Y. Guo, etal., Arxiv, 2020 (GenLaneNet_ext in code)

Comparisons are conducted on three distinct splits of the dataset. For simplicity, only lane-line results are reported here. The results produced by this code may differ marginally from those reported in the paper due to different random splits.

  • Standard

Method | AP | F-Score | x error near (m) | x error far (m) | z error near (m) | z error far (m)
3D-LaneNet | 89.3 | 86.4 | 0.068 | 0.477 | 0.015 | 0.202
Gen-LaneNet | 90.1 | 88.1 | 0.061 | 0.496 | 0.012 | 0.214

  • Rare Subset

Method | AP | F-Score | x error near (m) | x error far (m) | z error near (m) | z error far (m)
3D-LaneNet | 74.6 | 72.0 | 0.166 | 0.855 | 0.039 | 0.521
Gen-LaneNet | 79.0 | 78.0 | 0.139 | 0.903 | 0.030 | 0.539

  • Illumination Change

Method | AP | F-Score | x error near (m) | x error far (m) | z error near (m) | z error far (m)
3D-LaneNet | 74.9 | 72.5 | 0.115 | 0.601 | 0.032 | 0.230
Gen-LaneNet | 87.2 | 85.3 | 0.074 | 0.538 | 0.015 | 0.232

Visualization

Visual comparisons against the ground truth can be generated per image by setting 'vis = True' in 'tools/eval_3D_lane.py'. We show two examples for each method under the data split involving illumination change.

  • 3D-LaneNet

  • Gen-LaneNet

Citation

Please cite the paper in your publications if it helps your research:

@inproceedings{guo2020gen,
  title={Gen-LaneNet: A Generalized and Scalable Approach for 3D Lane Detection},
  author={Guo, Yuliang and Chen, Guang and Zhao, Peitao and Zhang, Weide and Miao, Jinghao and Wang, Jingao and Choe, Tae Eun},
  booktitle={Computer Vision - {ECCV} 2020 - 16th European Conference},
  year={2020}
}

Copyright and License

The copyright of this work belongs to Baidu Apollo, which is provided under the Apache-2.0 license.
