Trajectory Extraction of Road Users via Traffic Camera

Overview

Traffic Monitoring

Citation

The associated paper for this project will be published here as soon as possible. When using this software, please cite the following:

@software{Strosahl_TrafficMonitoring,
author = {Strosahl, Julian},
license = {Apache-2.0},
title = {{TrafficMonitoring}},
url = {https://github.com/EFS-OpenSource/TrafficMonitoring},
version = {0.9.0}
}

Trajectory Extraction from Traffic Camera

This project was developed by Julian Strosahl at Elektronische Fahrwerksysteme GmbH within the scope of the research project SAVeNoW (project website: SAVe:).

This repository contains the code for my master's thesis project on trajectory extraction from a traffic camera at an existing traffic intersection in Ingolstadt.

The project is split into several parts. The first is a toolkit for capturing the live RTSP video stream from the camera; see here.

The main part of the project is in this folder, which contains Python scripts for training, evaluating and running a neural network, a tracking algorithm, and the extraction of trajectories to a CSV file.

The training results (logs and metrics) are provided here

Example videos are provided here. You need to use Git LFS to access the videos.

Installation

  1. Install Miniconda
  2. Create the Conda environment from the existing environment file:

conda env create --file environment.yml --name <your-env-name>

This will create a conda environment with the name you chose, containing all necessary Python dependencies and OpenCV.

detectron2 is also required. Install it as shown below for CUDA 11.0; for other CUDA versions, have a look at the installation instructions of detectron2.

python -m pip install detectron2 -f \
  https://dl.fbaipublicfiles.com/detectron2/wheels/cu110/torch1.7/index.html
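
The detectron2 wheel has to match your local PyTorch and CUDA versions. If you are unsure which versions are installed in your environment, a quick check (assuming PyTorch is already installed) is:

import torch

# The wheel URL above encodes these versions, e.g. cu110 and torch1.7.
print(torch.version.cuda)   # e.g. '11.0'
print(torch.__version__)    # e.g. '1.7.1'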
  3. Provide the network weights for the Mask R-CNN:
  • Use Git LFS to download the model_weights into the right folder.
  • If you don't want to use Git LFS, you can download the weights manually and store them in the model_weights folder. Two different versions of the weights are available: the default model 4 cats is trained to segment 4 categories (truck, car, bicycle and person), while model 16 cats is trained on 16 categories but gives poor results in some of them.

Getting Started Video

If you don't have a video yet, you can capture one first; see Quick Start Capture Video from Stream.

To extract trajectories, cd traffic_monitoring and run the script on a specific video. If you don't have one, just use the provided demo video:

python run_on_video.py --video ./videos/2021-01-13_16-32-09.mp4

The annotated video with segmentations will be stored in videos_output and the trajectory file in trajectory_output. Both result folders are created by the script.

The trajectory file has the following structure:

frame_id  category  track_id  x          y           x_opt      y_opt
11        car       1         678142.80  5405298.02  678142.28  5405298.20
11        car       3         678174.98  5405294.48  678176.03  5405295.02
...       ...       ...       ...        ...         ...        ...
19        car       15        678142.75  5405308.82  678142.33  5405308.84

x and y are derived from the detection via the middle point of the bounding box (baseline, naive approach); x_opt and y_opt are calculated from the segmentation and an estimate of each vehicle's ground plate (our approach).
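
As an illustration only (this snippet is not part of the repository), the trajectory file can be loaded and inspected with pandas. The file name, separator and header row are assumptions based on the table above; adjust them to the actual output:

import pandas as pd
import matplotlib.pyplot as plt

# File name and separator are assumptions; the writer may use commas instead of whitespace.
df = pd.read_csv("trajectory_output/2021-01-13_16-32-09.csv", sep=r"\s+")

# Plot the segmentation-based trajectory of a single track in map coordinates.
track = df[df["track_id"] == 1]
plt.plot(track["x_opt"], track["y_opt"], marker="o")
plt.xlabel("x_opt [m]")
plt.ylabel("y_opt [m]")
plt.show()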

Georeferencing

The provided software is optimized for one specific research intersection. You can adapt it to another intersection by providing an intersection-specific set of reference points via the points file in config.
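
Conceptually, the georeferencing maps image pixels to map coordinates via correspondence points. The following sketch only illustrates that idea with hypothetical points and plain OpenCV (cv2.findHomography); the real points file in config has its own format and values:

import numpy as np
import cv2

# Hypothetical correspondence points: image pixels -> georeferenced map coordinates.
pixel_pts = np.array([[102, 540], [1720, 610], [960, 220], [400, 880]], dtype=np.float32)
map_pts = np.array([[678120.5, 5405290.1], [678180.2, 5405295.7],
                    [678150.9, 5405330.4], [678135.0, 5405270.8]], dtype=np.float32)

H, _ = cv2.findHomography(pixel_pts, map_pts)

# Project an arbitrary pixel position into map coordinates.
px = np.array([[[800.0, 450.0]]], dtype=np.float32)
print(cv2.perspectiveTransform(px, H))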

Quality of Trajectories

14 reference measurements with a measurement vehicle equipped with a dGPS sensor, driven across the intersection, show a deviation of only 0.52 m (mean absolute error, MAE) and 0.69 m (root mean square error, RMSE).
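
MAE and RMSE are computed over the per-point deviations between the dGPS ground truth and the extracted trajectories; a minimal sketch with made-up deviation values (not the actual measurement data):

import numpy as np

# Made-up Euclidean deviations in meters, for illustration only.
deviations = np.array([0.41, 0.55, 0.62, 0.38, 0.70])

mae = np.mean(np.abs(deviations))         # mean absolute error
rmse = np.sqrt(np.mean(deviations ** 2))  # root mean square error
print(f"MAE: {mae:.2f} m, RMSE: {rmse:.2f} m")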

The following images show the georeferenced map of the intersection with the measurement ground truth (green), the middle point of the bounding box (blue) and the estimation via the bottom plate, which is the concept of our work (red).

[Images: right_intersection, right_intersection, left_intersection]

The evaluation can be done with the script evaluation_measurement.py. The trajectory files for the measurement drives are prepared in the data/measurement folder. Just run

python evaluation_measurement.py 

to get the error plots and the georeferenced images.

Own Training

The segmentation works with detectron2 and a custom-trained model. If you want to use your own dataset to improve segmentation or detection, you can retrain the model with

python train.py

The dataset that was created as part of this work is not yet publicly available. You just need to provide training, validation and test data in data. The dataset needs to be in COCO format. For labeling you can use CVAT, which provides pre-labeling and interpolation.
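
With detectron2, a COCO-format dataset is usually registered before training. The dataset names and paths below are placeholders, not necessarily what train.py does internally:

from detectron2.data.datasets import register_coco_instances

# Placeholder names and paths; adjust to your data layout.
register_coco_instances("traffic_train", {}, "data/train/annotations.json", "data/train/images")
register_coco_instances("traffic_val", {}, "data/val/annotations.json", "data/val/images")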

The data is read by ReadCOCODataset. Line 323 contains a mapping configuration that can be adjusted to remap the labeled categories to your own categories.
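
The exact structure of that mapping is not reproduced here; conceptually it is a dictionary from the labeled categories to the categories the model is trained on, for example (hypothetical values):

# Hypothetical remapping: labeled category -> training category.
# The real mapping in ReadCOCODataset (around line 323) may differ.
CATEGORY_MAPPING = {
    "car": "car",
    "van": "car",
    "truck": "truck",
    "bus": "truck",
    "bicycle": "bicycle",
    "pedestrian": "person",
}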

If you want to have a look at my training experience, explore the Training Results.

Quality of Tracking

If you only want to evaluate the tracking algorithms (SORT vs. Deep SORT), the script evaluation_tracking.py evaluates the tracking alone using py-motmetrics. You need the labeled dataset for this.
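
py-motmetrics accumulates frame-by-frame matches between ground-truth and tracker IDs and then computes metrics such as MOTA and MOTP. A minimal usage sketch with dummy data (independent of evaluation_tracking.py):

import numpy as np
import motmetrics as mm

acc = mm.MOTAccumulator(auto_id=True)

# One dummy frame: two ground-truth objects, two hypotheses, squared-distance matrix between them.
gt_positions = np.array([[0.0, 0.0], [5.0, 5.0]])
hyp_positions = np.array([[0.1, 0.1], [5.2, 4.9]])
dists = mm.distances.norm2squared_matrix(gt_positions, hyp_positions, max_d2=4.0)
acc.update([1, 2], [1, 2], dists)

mh = mm.metrics.create()
summary = mh.compute(acc, metrics=["num_frames", "mota", "motp"], name="demo")
print(mm.io.render_summary(summary))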

Acknowledgment

This work is supported by the German Federal Ministry of Transport and Digital Infrastructure (BMVI) within the Automated and Connected Driving funding program under Grant No. 01MM20012F (SAVeNoW).

License

TrafficMonitoring is distributed under the Apache License 2.0. See LICENSE for more information.
