GSDT

Joint Object Detection and Multi-Object Tracking with Graph Neural Networks

This is the official PyTorch implementation of our paper: "Joint Object Detection and Multi-Object Tracking with Graph Neural Networks". Our project website and video demos are here. If you find our work useful, we'd appreciate you citing our paper as follows:

@article{Wang2020_GSDT,
  author  = {Wang, Yongxin and Kitani, Kris and Weng, Xinshuo},
  journal = {arXiv:2006.13164},
  title   = {{Joint Object Detection and Multi-Object Tracking with Graph Neural Networks}},
  year    = {2020}
}

Introduction

Object detection and data association are critical components in multi-object tracking (MOT) systems. Although the two components depend on each other, prior work often designs the detection and data association modules separately and trains them with different objectives. As a result, the gradients cannot be back-propagated through the entire MOT system, which leads to sub-optimal performance. To address this issue, recent work simultaneously optimizes detection and data association modules under a joint MOT framework, which has shown improved performance in both modules. In this work, we propose a new instance of the joint MOT approach based on Graph Neural Networks (GNNs). The key idea is that GNNs can model relations between variable-sized objects in both the spatial and temporal domains, which is essential for learning discriminative features for detection and data association. Through extensive experiments on the MOT15/16/17/20 datasets, we demonstrate the effectiveness of our GNN-based joint MOT approach and show state-of-the-art performance for both detection and MOT tasks.

Usage

Dependencies

We recommend using Anaconda for managing dependencies and environments. You may follow the commands below to set up your environment.

conda create -n dev python=3.6
conda activate dev
pip install -r requirements.txt

We use the PyTorch Geometric package for the implementation of our Graph Neural Network-based architecture.

bash install_pyg.sh   # we used CUDA_version=cu101 
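
After the script finishes, you can quickly check that PyTorch Geometric imports correctly against a CUDA-enabled PyTorch build. This check is not part of the original instructions, just a minimal sanity test:

python -c "import torch, torch_geometric; print(torch.__version__, torch_geometric.__version__, torch.cuda.is_available())"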

Build Deformable Convolutional Networks V2 (DCNv2)

cd ./src/lib/models/networks/DCNv2
bash make.sh
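
To confirm the extension compiled successfully, a minimal import check can be run from inside the DCNv2 directory; this assumes the build exposes a dcn_v2 module with a DCN layer, as in the upstream DCNv2 code this directory is based on:

python -c "from dcn_v2 import DCN; print('DCNv2 import OK')"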

To automatically generate the output tracking results as videos, please install ffmpeg:

conda install ffmpeg=4.2.2

Data preparation

We follow the same dataset setup as in JDE. Please refer to their DATA ZOO for data download and preparation.

To prepare the 2DMOT15 and MOT20 data, you can download them directly from the MOT Challenge website and format each directory as follows (a shell sketch for arranging the files appears after the diagram):

MOT15
   |——————images
   |        └——————train
   |        └——————test
   └——————labels_with_ids
            └——————train(empty)
MOT20
   |——————images
   |        └——————train
   |        └——————test
   └——————labels_with_ids
            └——————train(empty)
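
A rough shell sketch for arranging the downloaded archives into this layout; the ./downloads path and the unpacked archive names are assumptions, so adjust them to wherever you extracted the MOT Challenge zips:

# assumed: the official archives were unpacked under ./downloads
mkdir -p MOT15/images MOT15/labels_with_ids/train
mv downloads/2DMOT2015/train MOT15/images/train
mv downloads/2DMOT2015/test  MOT15/images/test
mkdir -p MOT20/images MOT20/labels_with_ids/train
mv downloads/MOT20/train MOT20/images/train
mv downloads/MOT20/test  MOT20/images/test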

Then set seq_root and label_root in src/gen_labels_15.py and src/gen_labels_20.py to point to your images/train and labels_with_ids/train directories, respectively, and run:

cd src
python gen_labels_15.py
python gen_labels_20.py

This will generate the desired label format for 2DMOT15 and MOT20. The seqinfo.ini files are required for 2DMOT15 and can be found here: [Google], [Baidu] (code: 8o0w).
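
For reference, each seqinfo.ini is a small per-sequence description in the standard MOT Challenge format, placed under images/train/<sequence>/. The sketch below shows the expected fields with illustrative values; the sequence name, length, and resolution are placeholders and are not guaranteed to match the files in the linked archive:

cat > MOT15/images/train/ADL-Rundle-6/seqinfo.ini <<'EOF'
[Sequence]
name=ADL-Rundle-6
imDir=img1
frameRate=30
seqLength=525
imWidth=1920
imHeight=1080
imExt=.jpg
EOF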

Inference

Download and save the pretrained weights for each dataset by following the links below:

Dataset    Model
2DMOT15    model_mot15.pth
MOT17      model_mot17.pth
MOT20      model_mot20.pth

Run one of the following commands to reproduce our paper's tracking performance on the MOT Challenge:

cd ./experiments
bash track_gnn_mot_AGNNConv_RoIAlign_mot15.sh
bash track_gnn_mot_AGNNConv_RoIAlign_mot17.sh
bash track_gnn_mot_AGNNConv_RoIAlign_mot20.sh

Note that we currently submit our MOT17 results as our MOT16 results, i.e., our MOT16 and MOT17 results and models are identical.

Training

We are currently cleaning up the training code and will release it as soon as we can. Stay tuned!

Performance on MOT Challenge

You can refer to the MOTChallenge website for the performance of our method. For your convenience, we summarize the results below:

Dataset    MOTA    IDF1    MT       ML       IDS
2DMOT15    60.7    64.6    47.0%    10.5%    477
MOT16      66.7    69.2    38.6%    19.0%    959
MOT17      66.2    68.7    40.8%    18.3%    3318
MOT20      67.1    67.5    53.1%    13.2%    3133

Acknowledgement

A large part of the code is borrowed from FairMOT. We appreciate their great work!
