[ICCV2021] Learning to Track Objects from Unlabeled Videos

Overview

Unsupervised Single Object Tracking (USOT)

🌿 Learning to Track Objects from Unlabeled Videos

Jilai Zheng, Chao Ma, Houwen Peng and Xiaokang Yang

2021 IEEE/CVF International Conference on Computer Vision (ICCV)

Introduction

This repository implements the unsupervised deep tracker USOT, which learns to track objects from unlabeled videos.

The main ideas of USOT are as follows.

  • Coarsely discovering moving objects from videos, with pseudo boxes precise enough for bbox regression.
  • Training a naive Siamese tracker from single-frame pairs, then gradually extending it to longer temporal spans.
  • Following a cycle memory training paradigm, which enables the unsupervised tracker to update online (see the sketch after this list).
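
To make the cycle memory idea concrete, below is a minimal runnable toy sketch of forward-backward (cycle-consistent) training. It is not the authors' implementation: all module names, tensor shapes and the offset-based propagation are illustrative assumptions, and the real tracker operates on deep correlation features rather than flat vectors.

# Toy sketch of cycle memory training (illustrative only, not the USOT code).
# A box is propagated forward through a few memory frames and then backward
# to the template frame; the round-trip disagreement with the original
# pseudo box serves as the unsupervised loss.
import torch
import torch.nn as nn

class ToySiamese(nn.Module):
    """Stand-in for a Siamese tracker head: maps (template, search) features to a box offset."""
    def __init__(self, dim=16):
        super().__init__()
        self.head = nn.Linear(dim * 2, 4)  # predicts (dx, dy, dw, dh)

    def forward(self, z, x):
        return self.head(torch.cat([z, x], dim=-1))

def cycle_memory_loss(tracker, template_feat, memory_feats, pseudo_box):
    """One unsupervised objective over a short temporal span."""
    box = pseudo_box
    # Forward pass: propagate the box through the memory frames.
    for feat in memory_feats:
        box = box + tracker(template_feat, feat)
    # Backward pass: track back through the same frames to the template.
    for feat in reversed(memory_feats):
        box = box + tracker(template_feat, feat)
    # Cycle-consistency: the recovered box should match the pseudo box.
    return ((box - pseudo_box) ** 2).mean()

if __name__ == "__main__":
    torch.manual_seed(0)
    tracker = ToySiamese()
    optimizer = torch.optim.SGD(tracker.parameters(), lr=1e-3)
    template = torch.randn(1, 16)
    memory = [torch.randn(1, 16) for _ in range(3)]
    pseudo_box = torch.tensor([[0.5, 0.5, 0.2, 0.3]])
    loss = cycle_memory_loss(tracker, template, memory, pseudo_box)
    loss.backward()
    optimizer.step()
    print(f"cycle-consistency loss: {loss.item():.4f}")

The toy only conveys the consistency signal that lets tracking supervision extend beyond single-frame pairs; the actual memory mechanism and loss terms are described in the paper.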

Results

Results of USOT and USOT* on recent tracking benchmarks.

Model   VOT2016 EAO   VOT2018 EAO   VOT2020 EAO   LaSOT AUC (%)   TrackingNet AUC (%)   OTB100 AUC (%)
USOT    0.351         0.290         0.222         33.7            59.9                  58.9
USOT*   0.402         0.344         0.219         35.8            61.5                  57.4

Raw result files can be found in the result folder on Google Drive.

Tutorial

Environments

The environments we use are as follows.

  • Preprocessing: PyTorch 1.1.0 + CUDA-9.0 / 10.0 (following ARFlow)
  • Train / Test / Eval: PyTorch 1.7.1 + CUDA-10.0 / 10.2 / 11.1

If you run into problems with preprocessing, you can skip it entirely by downloading the off-the-shelf preprocessed materials.

Preparations

Assume the project root path is $USOT_PATH. You can build an environment for development with the provided script, where $CONDA_PATH denotes your anaconda path.

cd $USOT_PATH
bash ./preprocessing/install_model.sh $CONDA_PATH USOT
source activate USOT && export PYTHONPATH=$(pwd)

You can revise the CUDA toolkit version for PyTorch in install_model.sh (10.0 by default).

Test and Eval

First, we provide both models used in our paper (USOT.pth and USOT_star.pth). You can download them from the snapshot folder on Google Drive and place them in $USOT_PATH/var/snapshot.

Next, link the benchmark dataset you want to test (e.g. VOT2018) into $USOT_PATH/datasets_test as follows. The ground-truth json files for some benchmarks (e.g. VOT2018.json) can be downloaded from the test folder on Google Drive and should also be placed in $USOT_PATH/datasets_test.

cd $USOT_PATH && mkdir datasets_test
ln -s $your_benchmark_path ./datasets_test/VOT2018

After that, you can test the tracker on these benchmarks (e.g. VOT2018) as follows. The raw results will be placed in $USOT_PATH/var/result/VOT2018/USOT.

cd $USOT_PATH
python -u ./scripts/test_usot.py --dataset VOT2018 --resume ./var/snapshot/USOT_star.pth

The inference results can be evaluated with pysot-toolkit. Install it before evaluation.

cd $USOT_PATH/lib/eval_toolkit/pysot/utils
python setup.py build_ext --inplace

Then the evaluation can be conducted as follows.

cd $USOT_PATH
python ./lib/eval_toolkit/bin/eval.py --dataset_dir datasets_test \
        --dataset VOT2018 --tracker_result_dir var/result/VOT2018 --trackers USOT

Train

First, download the pretrained backbones from the pretrain folder on Google Drive into $USOT_PATH/pretrain. Note that USOT* and USOT are trained from imagenet_pretrain.model and moco_v2_800.model, respectively.

Second, preprocess the raw datasets with the DP + Flow paradigm. Refer to $USOT_PATH/preprocessing/datasets_train for details.

We provide two shortcuts for skipping this preprocessing procedure.

  • You can directly download the generated pseudo box files (e.g. got10k_flow.json) from the train/box_sample_result folder on Google Drive and place them in the corresponding dataset preprocessing path (e.g. $USOT_PATH/preprocessing/datasets_train/got10k), in order to skip the box generation procedure (a sanity-check snippet follows this list).
  • You can directly download the whole cropped training dataset (e.g. got10k_flow.tar) from the dataset folder on Google Drive (Coming soon) (e.g. train/GOT-10k), which lets you skip all preprocessing procedures.
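
If you want to sanity-check a downloaded pseudo box file before training, a snippet along the following lines may help. The schema assumed here (one top-level entry per video) is only an illustration; inspect the real got10k_flow.json for its actual structure.

# Hypothetical sanity check for a downloaded pseudo box file.
# The assumed schema (one top-level entry per video) is an illustration only.
import json

with open("preprocessing/datasets_train/got10k/got10k_flow.json") as f:
    data = json.load(f)

print(f"loaded {len(data)} top-level entries")
key = next(iter(data))
print(f"first entry '{key}' contains {len(data[key])} items")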

Third, revise the training config file $USOT_PATH/experiments/train/USOT.yaml. The most important options are listed as follows.

  • GPUS: the GPUs used for training, e.g. '0,1,2,3'
  • TRAIN/PRETRAIN: the pretrained backbone, e.g. 'imagenet_pretrain.model'
  • DATASET: the folder for your cropped training instances and their pseudo annotation files, e.g. PATH: '/data/got10k_flow/crop511/', ANNOTATION: '/data/got10k_flow/train.json'
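
As a quick sanity check before launching training, you can load the config and print these fields. This is a minimal sketch assuming PyYAML and the key nesting implied by the option names above (e.g. TRAIN/PRETRAIN); the actual layout of USOT.yaml may differ.

# Minimal config check (assumes PyYAML; key nesting follows the option
# names above and may differ from the actual USOT.yaml layout).
import yaml

with open("experiments/train/USOT.yaml") as f:
    cfg = yaml.safe_load(f)

print("GPUs:      ", cfg["GPUS"])
print("Pretrain:  ", cfg["TRAIN"]["PRETRAIN"])
print("Data path: ", cfg["DATASET"]["PATH"])
print("Annotation:", cfg["DATASET"]["ANNOTATION"])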

Finally, you can start the training phase with the following script. The training checkpoints will also be placed automatically in $USOT_PATH/var/snapshot.

cd $USOT_PATH
python -u ./scripts/train_usot.py --cfg experiments/train/USOT.yaml --gpus 0,1,2,3 --workers 32

We also provide a one-key script for training, testing and evaluation.

cd $USOT_PATH
python ./scripts/onekey_usot.py --cfg experiments/train/USOT.yaml

Citation

If any part of our paper or code is helpful to your work, please consider citing:

@inproceedings{zheng-iccv2021-usot,
   title={Learning to Track Objects from Unlabeled Videos},
   author={Jilai Zheng and Chao Ma and Houwen Peng and Xiaokang Yang},
   booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
   year={2021}
}

Reference

We referred to several repositories when implementing our unsupervised tracker, including ARFlow (for preprocessing) and pysot-toolkit (for evaluation). Thanks for their great work.

Contact

Feel free to contact me if you have any questions.
