Trajectory Extraction of Road Users via Traffic Camera

Overview

Traffic Monitoring

Citation

The associated paper for this project will be published here as soon as possible. When using this software, please cite the following:

@software{Strosahl_TrafficMonitoring,
author = {Strosahl, Julian},
license = {Apache-2.0},
title = {{TrafficMonitoring}},
url = {https://github.com/EFS-OpenSource/TrafficMonitoring},
version = {0.9.0}
}

Trajectory Extraction from Traffic Camera

This project was developed by Julian Strosahl at Elektronische Fahrwerksysteme GmbH within the scope of the research project SAVeNoW (project website: SAVe:).

This repository includes the code for my Master's thesis project on trajectory extraction from a traffic camera at an existing traffic intersection in Ingolstadt.

The project is separated into different parts. The first is a toolkit for capturing the live RTSP video stream from the camera; see here.
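
As a rough illustration of what such a capture can look like (this is not the actual toolkit code), a stream can be recorded with OpenCV; the RTSP URL and output path below are placeholders:

import cv2

def record_stream(rtsp_url, output_path, fps=25.0):
    # open the RTSP stream and copy frames into a local video file
    cap = cv2.VideoCapture(rtsp_url)
    width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    writer = cv2.VideoWriter(output_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (width, height))
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        writer.write(frame)
    cap.release()
    writer.release()

# record_stream("rtsp://<camera-address>/stream", "capture.mp4")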

The main part of the project is in this folder, which contains Python scripts for training, evaluating and running a neural network, a tracking algorithm, and extracting the trajectories to a CSV file.

The training results (logs and metrics) are provided here

Example videos are provided here. You need Git LFS to access the videos.

Installation

  1. Install Miniconda
  2. Create the Conda environment from the existing file:
conda env create --file environment.yml --name <your-env-name>

This will create a conda environment with your chosen name that contains all necessary Python dependencies and OpenCV.

detectron2 is also required. You have to install it with the following command for CUDA 11.0. For other CUDA versions, have a look at the installation instructions of detectron2.

python -m pip install detectron2 -f \
  https://dl.fbaipublicfiles.com/detectron2/wheels/cu110/torch1.7/index.html
  3. Provide the network weights for the Mask R-CNN:
  • Use Git LFS to download the model_weights into the right folder.
  • If you don't want to use Git LFS, you can download the weights manually and store them in the model_weights folder. Two different versions of weights are available: the default model ("4 cats") is trained to segment 4 different categories (truck, car, bicycle and person), and the other model ("16 cats") is trained on 16 categories but performs poorly in some of them. A minimal sketch for loading the weights follows this list.
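
The sketch below shows how such weights can be loaded with the detectron2 config API. The base config file, weight file name and score threshold are assumptions; the scripts in traffic_monitoring define the actual setup.

from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.engine import DefaultPredictor

cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml"))
cfg.MODEL.WEIGHTS = "model_weights/model_final.pth"  # hypothetical file name
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 4  # "4 cats" model: truck, car, bicycle, person
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.5  # assumed threshold
predictor = DefaultPredictor(cfg)
# outputs = predictor(frame)  # frame: BGR image as numpy array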

Getting Started Video

If you don't have a video, just capture one here: Quick Start Capture Video from Stream.

To extract trajectories, cd traffic_monitoring and run the script on a specific video. If you don't have one, just use the provided demo video:

python run_on_video.py --video ./videos/2021-01-13_16-32-09.mp4

The annotated video with segmentations will be stored in videos_output and the trajectory file in trajectory_output. Both result folders are created by the script.

The trajectory file has the following structure:

frame_id  category  track_id  x          y           x_opt      y_opt
11        car       1         678142.80  5405298.02  678142.28  5405298.20
11        car       3         678174.98  5405294.48  678176.03  5405295.02
...       ...       ...       ...        ...         ...        ...
19        car       15        678142.75  5405308.82  678142.33  5405308.84

x and y are derived from the detection and the middle point of the bounding box (baseline, naive approach); x_opt and y_opt are calculated from the segmentation and an estimation of the ground plate of each vehicle (our approach).
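
For downstream analysis, a trajectory file can be loaded, for example, with pandas. The whitespace separator and file name below are assumptions; check a generated file in trajectory_output for the actual format.

import pandas as pd

df = pd.read_csv("trajectory_output/2021-01-13_16-32-09.csv", sep=r"\s+")  # hypothetical file name
track = df[df["track_id"] == 1].sort_values("frame_id")

# per-frame distance between the naive (x, y) and optimized (x_opt, y_opt) positions in meters
offset = ((track["x"] - track["x_opt"]) ** 2 + (track["y"] - track["y_opt"]) ** 2) ** 0.5
print(offset.describe())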

Georeferencing

The provided software is optimized for one specific research intersection. You can provide an intersection-specific dataset for use in this software by changing the points file in config.
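
Georeferencing of this kind is typically done with a homography estimated from corresponding image and world points. The following OpenCV sketch only illustrates the idea; it does not reflect the actual points file format of this project, and the coordinate values are made up.

import numpy as np
import cv2

# matching image coordinates (pixels) and world coordinates (UTM, meters); illustrative values
image_pts = np.array([[100, 200], [1820, 210], [1800, 1000], [120, 990]], dtype=np.float32)
world_pts = np.array([[678140.0, 5405290.0], [678180.0, 5405291.0],
                      [678179.0, 5405310.0], [678141.0, 5405309.0]], dtype=np.float32)

H, _ = cv2.findHomography(image_pts, world_pts)

def pixel_to_utm(u, v):
    # project a pixel coordinate into the world plane
    p = H @ np.array([u, v, 1.0])
    return p[0] / p[2], p[1] / p[2]

print(pixel_to_utm(960, 600))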

Quality of Trajectories

14 reference measurements across the intersection with a measurement vehicle equipped with a dGPS sensor show a deviation of only 0.52 meters (mean absolute error, MAE) and 0.69 meters (root-mean-square error, RMSE).
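
These two error measures are computed from the point-wise distances between the extracted positions and the dGPS reference, roughly as in the sketch below; evaluation_measurement.py is the authoritative implementation.

import numpy as np

def trajectory_errors(estimated, reference):
    # estimated, reference: arrays of shape (N, 2) with matched x/y positions in meters
    dists = np.linalg.norm(np.asarray(estimated) - np.asarray(reference), axis=1)
    mae = dists.mean()
    rmse = np.sqrt((dists ** 2).mean())
    return mae, rmse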

The following images show the georeferenced map of the intersection with the measurement ground truth (green), the middle point of the bounding box (blue) and the estimation via the bottom plate (the concept of our work) (red):

[Images: right intersection (two views) and left intersection]

The evaluation can be done with the script evaluation_measurement.py. The trajectory files for the measurement drives are prepared in the data/measurement folder. Just run

python evaluation_measurement.py 

to get the error plots and the georeferenced images.

Own Training

The segmentation works with detectron2 and a custom-trained model. If you want to use your own dataset to improve segmentation or detection, you can retrain it with

python train.py

The dataset that was created as part of this work is not yet publicly available. You just need to provide training, validation and test data in data. The dataset needs to be in COCO format. For labeling you can use CVAT, which provides pre-labeling and interpolation.
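
With detectron2, a COCO-format dataset is usually registered before training, for example as sketched below. The dataset names and paths are assumptions; train.py defines how this project actually loads its data.

from detectron2.data.datasets import register_coco_instances

# hypothetical paths; adjust them to your data folder layout
register_coco_instances("traffic_train", {}, "data/train/annotations.json", "data/train/images")
register_coco_instances("traffic_val", {}, "data/val/annotations.json", "data/val/images")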

The data is read by ReadCOCODataset. Line 323 contains a mapping configuration that can be adjusted to remap the labeled categories to your own categories.
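
Such a remapping is essentially a dictionary from the labeled categories to the training categories. The mapping below is only an illustration; the actual configuration is the one around line 323 of ReadCOCODataset.

# illustrative remapping of fine-grained labels to the four training categories
CATEGORY_MAPPING = {
    "car": "car",
    "van": "car",
    "truck": "truck",
    "bus": "truck",
    "bicycle": "bicycle",
    "motorcycle": "bicycle",
    "pedestrian": "person",
}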

If you want to have a look at my training experience, explore the Training Results.

Quality of Tracking

If you only want to evaluate the tracking algorithms (SORT vs. Deep SORT), the script evaluation_tracking.py evaluates just the tracking with py-motmetrics. You need the labeled dataset for this.
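
A minimal py-motmetrics evaluation looks roughly like the sketch below; the frame data here is made up, and evaluation_tracking.py is the authoritative version.

import motmetrics as mm

acc = mm.MOTAccumulator(auto_id=True)

# one illustrative frame: ground-truth ids/boxes vs. tracker hypothesis ids/boxes ([x, y, w, h])
gt_ids, gt_boxes = [1, 2], [[10, 10, 50, 30], [200, 80, 60, 40]]
hyp_ids, hyp_boxes = [1, 2], [[12, 11, 50, 30], [205, 82, 60, 40]]

dists = mm.distances.iou_matrix(gt_boxes, hyp_boxes, max_iou=0.5)
acc.update(gt_ids, hyp_ids, dists)

mh = mm.metrics.create()
summary = mh.compute(acc, metrics=["mota", "motp", "num_switches"], name="tracking")
print(mm.io.render_summary(summary))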

Acknowledgment

This work is supported by the German Federal Ministry of Transport and Digital Infrastructure (BMVI) within the Automated and Connected Driving funding program under Grant No. 01MM20012F (SAVeNoW).

License

TrafficMonitoring is distributed under the Apache License 2.0. See LICENSE for more information.
