Intermediate Domain Module (IDM)

Python >= 3.7 | PyTorch >= 1.1

This repository is the official implementation of IDM: An Intermediate Domain Module for Domain Adaptive Person Re-ID, accepted to ICCV 2021 as an oral paper.

IDM achieves state-of-the-art performance on unsupervised domain adaptation (UDA) for person re-ID.

Requirements

Installation

git clone https://github.com/SikaStar/IDM.git
cd IDM/idm/evaluation_metrics/rank_cylib && make all
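
Note that the make all step compiles the Cython ranking code, which assumes Cython and NumPy are available in your environment; if they are missing, installing them first should be enough (a sketch, not an official requirements list):

pip install cython numpy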

Prepare Datasets

cd examples && mkdir data

Download the person re-ID datasets Market-1501, DukeMTMC-reID, MSMT17, PersonX, and UnrealPerson, then unzip them so the directory tree looks like:

IDM/examples/data
├── dukemtmc
│   └── DukeMTMC-reID
├── market1501
│   └── Market-1501-v15.09.15
├── msmt17
│   └── MSMT17_V1
├── personx
│   └── PersonX
└── unreal
    ├── list_unreal_train.txt
    └── unreal_vX.Y
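
For example, assuming the dataset archives have already been downloaded into examples/data, the layout above can be produced along these lines (the archive file names here are assumptions and may differ for your downloads):

cd examples/data
mkdir -p dukemtmc market1501 msmt17 personx unreal
unzip Market-1501-v15.09.15.zip -d market1501
unzip DukeMTMC-reID.zip -d dukemtmc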

Prepare ImageNet Pre-trained Models for IBN-Net

When training with the IBN-ResNet backbone, you need to download the ImageNet-pretrained model from this link and save it under logs/pretrained/.

mkdir logs && cd logs
mkdir pretrained

The file tree should be

IDM/logs
└── pretrained
    └── resnet50_ibn_a.pth.tar

The ImageNet-pretrained weights for ResNet-50 will be downloaded automatically by the Python script.
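
For reference, torchvision fetches and caches these ResNet-50 weights automatically on first use; a minimal illustration (not part of the repo's scripts):

import torchvision.models as models

# The first call downloads the ImageNet weights and caches them locally
# (typically under ~/.cache/torch); later calls reuse the cache.
resnet50 = models.resnet50(pretrained=True)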

Training

We use four 2080 Ti GPUs for training. Note that:

  • The source and target domains are trained jointly.
  • For the baseline methods, use -a resnet50 for a ResNet-50 backbone and -a resnet_ibn50a for an IBN-ResNet backbone.
  • For IDM, use -a resnet50_idm to insert IDM into a ResNet-50 backbone and -a resnet_ibn50a_idm to insert it into an IBN-ResNet backbone.
  • For the strong baseline, use --use-xbm to enable XBM (a cross-batch variant of the memory bank idea); see the sketch after this list.
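
For intuition, XBM keeps a first-in-first-out memory of embeddings and labels from past mini-batches, so a pair-based loss can mine many more positives and negatives than a single batch provides. Below is a minimal, self-contained sketch of that idea in PyTorch; it is illustrative only, not the repository's actual implementation:

import torch

class XBMSketch:
    """FIFO queue of past embeddings and labels (cross-batch memory)."""

    def __init__(self, size, dim):
        self.feats = torch.zeros(size, dim)
        self.labels = torch.zeros(size, dtype=torch.long)
        self.size = size
        self.ptr = 0
        self.filled = False

    @torch.no_grad()
    def enqueue(self, feats, labels):
        # Overwrite the oldest entries with the current batch (assumes batch <= size).
        n = feats.size(0)
        idx = torch.arange(self.ptr, self.ptr + n) % self.size
        self.feats[idx] = feats.detach().cpu()
        self.labels[idx] = labels.cpu()
        self.ptr = (self.ptr + n) % self.size
        self.filled = self.filled or self.ptr < n  # becomes True once the queue wraps

    def get(self):
        # Return everything stored so far.
        if self.filled:
            return self.feats, self.labels
        return self.feats[:self.ptr], self.labels[:self.ptr]

During training, each batch's detached features would be enqueued, and the pair-based loss computed against everything returned by get().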

Baseline Methods

To train the baseline methods in the paper, run commands like:

# Naive Baseline
CUDA_VISIBLE_DEVICES=0,1,2,3 sh scripts/run_naive_baseline.sh ${source} ${target} ${arch}

# Strong Baseline
CUDA_VISIBLE_DEVICES=0,1,2,3 sh scripts/run_strong_baseline.sh ${source} ${target} ${arch}

Some examples:

### market1501 -> dukemtmc ###

# ResNet-50
CUDA_VISIBLE_DEVICES=0,1,2,3 sh scripts/run_strong_baseline.sh market1501 dukemtmc resnet50 

# IBN-ResNet-50
CUDA_VISIBLE_DEVICES=0,1,2,3 sh scripts/run_strong_baseline.sh market1501 dukemtmc resnet_ibn50a

Training with IDM

To train the models with our IDM, run commands like:

# Naive Baseline + IDM
CUDA_VISIBLE_DEVICES=0,1,2,3 \
sh scripts/run_idm.sh ${source} ${target} ${arch} ${stage} ${mu1} ${mu2} ${mu3}

# Strong Baseline + IDM
CUDA_VISIBLE_DEVICES=0,1,2,3 \
sh scripts/run_idm_xbm.sh ${source} ${target} ${arch} ${stage} ${mu1} ${mu2} ${mu3}

  • Defaults: --stage 0 --mu1 0.7 --mu2 0.1 --mu3 1.0, where ${stage} selects the backbone stage after which IDM is inserted and ${mu1}, ${mu2}, ${mu3} weight the IDM-related losses.

Some examples:

### market1501 -> dukemtmc ###

# ResNet-50 + IDM
CUDA_VISIBLE_DEVICES=0,1,2,3 \
sh scripts/run_idm_xbm.sh market1501 dukemtmc resnet50_idm 0 0.7 0.1 1.0 

# IBN-ResNet-50 + IDM
CUDA_VISIBLE_DEVICES=0,1,2,3 \
sh scripts/run_idm_xbm.sh market1501 dukemtmc resnet_ibn50a_idm 0 0.7 0.1 1.0
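
Conceptually, IDM synthesizes intermediate-domain features on the fly by mixing the source and target hidden representations at a given backbone stage with learned ratios. A simplified PyTorch sketch of that idea follows; the repository's actual module differs in detail (e.g. in how the ratios are predicted):

import torch
import torch.nn as nn

class IDMSketch(nn.Module):
    """Mix source/target stage features with learned ratios (illustrative only)."""

    def __init__(self, channels):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        # A small head predicts two mixing weights that sum to one.
        self.head = nn.Sequential(
            nn.Linear(2 * channels, channels),
            nn.ReLU(inplace=True),
            nn.Linear(channels, 2),
            nn.Softmax(dim=1),
        )

    def forward(self, feat_source, feat_target):
        s = self.pool(feat_source).flatten(1)          # (B, C)
        t = self.pool(feat_target).flatten(1)          # (B, C)
        lam = self.head(torch.cat([s, t], dim=1))      # (B, 2), rows sum to 1
        lam_s = lam[:, 0].view(-1, 1, 1, 1)
        lam_t = lam[:, 1].view(-1, 1, 1, 1)
        # Intermediate-domain features: a convex combination of the two domains.
        feat_inter = lam_s * feat_source + lam_t * feat_target
        return feat_inter, lam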

Evaluation

We use one 2080 Ti GPU for testing. Note that:

  • Use --dsbn for domain-adaptive baseline models and --dsbn-idm for IDM models; add --test-source if you want to test on the source domain.
  • Use -a resnet50 for a ResNet-50 backbone and -a resnet_ibn50a for an IBN-ResNet backbone.
  • Use -a resnet50_idm for ResNet-50 + IDM and -a resnet_ibn50a_idm for IBN-ResNet + IDM.

To evaluate the baseline model on the target-domain dataset, run:

CUDA_VISIBLE_DEVICES=0 \
python3 examples/test.py --dsbn -d ${dataset} -a ${arch} --resume ${resume} 

To evaluate the baseline model on the source-domain dataset, run:

CUDA_VISIBLE_DEVICES=0 \
python3 examples/test.py --dsbn --test-source -d ${dataset} -a ${arch} --resume ${resume} 

To evaluate the IDM model on the target-domain dataset, run:

CUDA_VISIBLE_DEVICES=0 \
python3 examples/test.py --dsbn-idm -d ${dataset} -a ${arch} --resume ${resume} --stage ${stage} 

To evaluate the IDM model on the source-domain dataset, run:

CUDA_VISIBLE_DEVICES=0 \
python3 examples/test.py --dsbn-idm --test-source -d ${dataset} -a ${arch} --resume ${resume} --stage ${stage} 

Some examples:

### market1501 -> dukemtmc ###

# evaluate the target domain "dukemtmc" on the strong baseline model
CUDA_VISIBLE_DEVICES=0 \
python3 examples/test.py --dsbn  -d dukemtmc -a resnet50 \
--resume logs/resnet50_strong_baseline/market1501-TO-dukemtmc/model_best.pth.tar 

# evaluate the source domain "market1501" on the strong baseline model
CUDA_VISIBLE_DEVICES=0 \
python3 examples/test.py --dsbn --test-source  -d market1501 -a resnet50 \
--resume logs/resnet50_strong_baseline/market1501-TO-dukemtmc/model_best.pth.tar 

# evaluate the target domain "dukemtmc" on the IDM model (after stage-0)
CUDA_VISIBLE_DEVICES=0 \
python3 examples/test.py --dsbn-idm -d dukemtmc -a resnet50_idm \
--resume logs/resnet50_idm_xbm/market1501-TO-dukemtmc/model_best.pth.tar --stage 0

# evaluate the source domain "market1501" on the IDM model (after stage-0)
CUDA_VISIBLE_DEVICES=0 \
python3 examples/test.py --dsbn-idm --test-source -d market1501 -a resnet50_idm \
--resume logs/resnet50_idm_xbm/market1501-TO-dukemtmc/model_best.pth.tar --stage 0

Acknowledgement

Our code is based on MMT and SpCL. Thanks to Yixiao Ge for the wonderful works.

Citation

If you find our work useful for your research, please kindly cite our paper:

@inproceedings{dai2021idm,
  title={IDM: An Intermediate Domain Module for Domain Adaptive Person Re-ID},
  author={Dai, Yongxing and Liu, Jun and Sun, Yifan and Tong, Zekun and Zhang, Chi and Duan, Ling-Yu},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
  year={2021}
}

If you have any questions, please leave an issue or contact me: [email protected]
