Interpretable and Generalizable Person Re-Identification with Query-Adaptive Convolution and Temporal Lifting

Overview

QAConv

This is the official PyTorch code for our paper [1]. A Chinese blog post is available: 再见,迁移学习?可解释和泛化的行人再辨识 ("Goodbye, Transfer Learning? Interpretable and Generalizable Person Re-Identification").

Updates

  • 9/19/2021: Include TransMatcher, a Transformer-based deep image matching method built on QAConv 2.0.
  • 9/16/2021: QAConv 2.1: simplify graph sampling, implement QAConv with an Einstein summation (see the sketch after this list), use the batch-hard triplet loss, design an adaptive epoch and learning rate scheduling method, and apply automatic mixed precision training.
  • 4/1/2021: QAConv 2.0 [2]: include a new sampler called Graph Sampler (GS), and remove the class memory. This version is much more efficient in learning. See the updated results.
  • 3/31/2021: QAConv 1.2: include some popular data augmentation methods, and change the ranking.py implementation to the original open-reid version, so that it is more consistent with most other implementations (e.g. open-reid, torch-reid, fast-reid).
  • 2/7/2021: QAConv 1.1: an important update that includes a pre-training function for better initialization, so the results are now more stable.
  • 11/26/2020: Include the IBN-Net as backbone, and the RandPerson dataset.
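
The core of the QAConv matching step in 2.1 is an Einstein summation that correlates every local feature of a query image with every local feature of a gallery image. Below is a minimal sketch of this idea, with assumed tensor shapes and without the learned matcher on top; it is an illustration, not the repository's exact code.

    import torch
    import torch.nn.functional as F

    def qaconv_similarity(q_feat, g_feat):
        # q_feat, g_feat: feature maps, assumed shape (B, C, H, W).
        B, C, H, W = q_feat.shape
        q = F.normalize(q_feat, dim=1)
        g = F.normalize(g_feat, dim=1)
        # All pairwise local correlations between query and gallery locations,
        # shape (B_query, B_gallery, H*W, H*W), in a single einsum.
        scores = torch.einsum('qchw,gcij->qghwij', q, g).reshape(B, B, H * W, H * W)
        # Best local match in each direction, averaged over locations.
        q2g = scores.max(dim=3).values.mean(dim=2)  # query -> gallery
        g2q = scores.max(dim=2).values.mean(dim=2)  # gallery -> query
        return (q2g + g2q) / 2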

Requirements

  • PyTorch (>1.0)
  • sklearn
  • scipy

Usage

Download some public datasets (e.g. Market-1501, CUHK03-NP, MSMT17) on your own, extract them into a folder, and then run the following commands.

Training and testing

python main.py --dataset market --testset cuhk03_np_detected[,msmt] [--data-dir ./data] [--exp-dir ./Exp]

For more options, run "python main.py --help". For example, to use ResNet-152 as the backbone, specify "-a resnet152". To train on the whole dataset (as done in our paper for MSMT17), specify "--combine_all".
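
For instance, assuming the dataset and option names shown above, the following trains on all of MSMT17 with a ResNet-152 backbone and tests on Market-1501 and CUHK03-NP:

python main.py --dataset msmt --combine_all -a resnet152 --testset market,cuhk03_np_detected --data-dir ./data --exp-dir ./Exp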

With the GS sampler and pairwise matching loss, run the following:

python main_gs.py --dataset market --testset cuhk03_np_detected[,msmt] [--data-dir ./data] [--exp-dir ./Exp]

Test only

python main.py --dataset market --testset duke[,market,msmt] [--data-dir ./data] [--exp-dir ./Exp] --evaluate

Performance

Performance (%) under direct cross-dataset evaluation, without transfer learning or domain adaptation (QAConv 1.0 rows are included for comparison):

Training Data | Version | Training Hours | CUHK03-NP Rank-1 | CUHK03-NP mAP | Market-1501 Rank-1 | Market-1501 mAP | MSMT17 Rank-1 | MSMT17 mAP
Market | QAConv 1.0 | 1.33 | 9.9 | 8.6 | - | - | 22.6 | 7.0
Market | QAConv 2.1 | 0.25 | 19.1 | 18.1 | - | - | 45.9 | 17.2
MSMT | QAConv 2.1 | 0.73 | 20.9 | 20.6 | 79.1 | 49.5 | - | -
MSMT (all) | QAConv 1.0 | 26.90 | 25.3 | 22.6 | 72.6 | 43.1 | - | -
MSMT (all) | QAConv 2.1 | 3.42 | 27.6 | 28.0 | 82.4 | 56.9 | - | -
RandPerson | QAConv 2.1 | 2.33 | 17.9 | 16.1 | 75.9 | 46.3 | 44.1 | 15.2

Contacts

Shengcai Liao
Inception Institute of Artificial Intelligence (IIAI)
[email protected]

Citation

[1] Shengcai Liao and Ling Shao, "Interpretable and Generalizable Person Re-Identification with Query-Adaptive Convolution and Temporal Lifting." In the 16th European Conference on Computer Vision (ECCV), 23-28 August, 2020.

[2] Shengcai Liao and Ling Shao, "Graph Sampling Based Deep Metric Learning for Generalizable Person Re-Identification." arXiv preprint arXiv:2104.01546, 2021.

@inproceedings{Liao-ECCV2020-QAConv,  
  title={{Interpretable and Generalizable Person Re-Identification with Query-Adaptive Convolution and Temporal Lifting}},  
  author={Shengcai Liao and Ling Shao},  
  booktitle={European Conference on Computer Vision (ECCV)},  
  year={2020}  
}

@article{Liao-arXiv2021-GS,
  author    = {Shengcai Liao and Ling Shao},
  title     = {{Graph Sampling Based Deep Metric Learning for Generalizable Person Re-Identification}},
  journal   = {CoRR},
  volume    = {abs/2104.01546},
  year      = {2021},
  url       = {http://arxiv.org/abs/2104.01546},
  archivePrefix = {arXiv},
  eprint    = {2104.01546}
}
Comments
  • Out of memory: --test_fea_batch, --test_gal_batch, and --test_prob_batch all set to 128

    Run: python main.py --dataset market --testset msmt --data-dir ./reid/datasets/ --exp-dir ./Exp

    fpaths: ./reid/datasets/market/bounding_box_train/1500_c6s3_086567_01.jpg
    fpaths: ./reid/datasets/market/bounding_box_test/1501_c6s4_001902_01.jpg
    fpaths: ./reid/datasets/market/query/1501_c6s4_001877_00.jpg
    Market dataset loaded
    subset | # ids | # images
    train | 751 | 12935
    query | 750 | 3367
    gallery | 751 | 15912

    • Finished epoch 1 at lr=[0.0005, 0.005, 0.005]. Loss: 14.812. Acc: 54.97%. Training time: 174 seconds.
    • Finished epoch 2 at lr=[0.0005, 0.005, 0.005]. Loss: 13.333. Acc: 61.35%. Training time: 344 seconds.
    • Finished epoch 3 at lr=[0.0005, 0.005, 0.005]. Loss: 11.447. Acc: 68.55%. Training time: 514 seconds.
    • Finished epoch 4 at lr=[0.0005, 0.005, 0.005]. Loss: 10.338. Acc: 72.09%. Training time: 684 seconds.
    • Finished epoch 5 at lr=[0.0005, 0.005, 0.005]. Loss: 9.319. Acc: 75.31%. Training time: 855 seconds.

    Decay the learning rate by a factor of 0.1. Final epochs: 7.

    • Finished epoch 6 at lr=[5e-05, 0.0005, 0.0005]. Loss: 8.566. Acc: 77.75%. Training time: 1025 seconds.
    • Finished epoch 7 at lr=[5e-05, 0.0005, 0.0005]. Loss: 7.732. Acc: 80.22%. Training time: 1195 seconds.

    The learning converges at epoch 7.

    Evaluate the learned model: test_names: ['msmt']
    MSMT dataset loaded
    subset | # ids | # images
    train | 1041 | 32621
    query | 3060 | 11659
    gallery | 3060 | 82161
    torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.
    Time: 2690.337 seconds. / 1284. similarity 1 / 1284.
    Killed

    With --test_fea_batch, --test_gal_batch, and --test_prob_batch all set to 128, the run ends with "Time: xx seconds, /xx similarity xx/xx. Killed." With the three parameters set to 64, the error is the same: "Time: 2690.337 seconds. / 1284. similarity 1 / 1284. Killed."

    opened by huangpan2507 16
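
    The failure above occurs while computing the query-gallery similarity on MSMT17's large gallery, which is why the test batch options bound peak memory. A minimal sketch of the underlying idea, with illustrative names rather than the repository's actual API:

        import torch

        def chunked_scores(match_fn, query_feat, gallery_feat, gal_batch=128):
            # Score the queries against the gallery in slices of gal_batch images,
            # so peak memory scales with the slice size instead of the gallery size.
            parts = []
            with torch.no_grad():
                for i in range(0, gallery_feat.size(0), gal_batch):
                    parts.append(match_fn(query_feat, gallery_feat[i:i + gal_batch]))
            return torch.cat(parts, dim=1)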
  • Unstable results

    Hi, thanks for sharing your code. However, I ran your code twice and got quite different results, perhaps due to the random seed. Did you set a fixed random seed when you trained the model?

    good first issue 
    opened by HeliosZhao 10
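
    For reference, a common PyTorch recipe for fixing randomness across runs (a general sketch, not necessarily what this repository does; some CUDA kernels remain nondeterministic unless deterministic algorithms are enabled as well):

        import random
        import numpy as np
        import torch

        def set_seed(seed=0):
            # Seed the common sources of randomness in a PyTorch pipeline.
            random.seed(seed)
            np.random.seed(seed)
            torch.manual_seed(seed)
            torch.cuda.manual_seed_all(seed)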
  • Training is very slow!

    Hi, I am training on 300k images with two 2080 Ti GPUs, batch size 64, and fp16 to speed up training, but one epoch took more than half an hour to reach iteration 511. Is this normal? Epoch: [1][511/4714] Time 2.620 (2.646) Data 0.001 (0.002) Loss 456.984 (520.544) Prec 0.00% (0.00%)

    opened by zengwb-lx 6
  • Unable to use ClassMemoryLoss to train the model

    In the QAConv code, I tried to switch the criterion to ClassMemoryLoss, but the accuracy stays near zero. Is ClassMemoryLoss still usable? Are ClassMemoryLoss and the focal loss in the paper the same? The code is shown below.

    criterion = ClassMemoryLoss(matcher, num_classes, num_features, hei, wid).cuda()

    opened by ArminLee 4
  • Question about backbone

    Hi Mr. Liao, I much appreciate your novel idea and your code. I notice that you chose ResNet as the backbone. ResNet-152 shows great results in the paper and in my own experiments, but it takes quite some time to train, even when only layer 3 of the model is used. Have you tried a lightweight backbone such as MobileNet? Is there a specific reason for choosing ResNet as the feature extractor? Thanks in advance.

    opened by jingyut 4
  • A question about graph sampling

    Hello Prof. Liao, I would like to understand why graph sampling improves domain-generalizable re-ID so much. Previous domain-generalizable re-ID methods usually rely on domain-invariant learning, style normalization, and the like, but graph sampling seems to take a different route, improving domain generalization by strengthening hard mining. I do not quite understand this point and look forward to your reply. Thanks!

    opened by Terminator8758 3
  • Graph Sampler

    Thank you for your work! I have two questions about graph sampling:

    1. Intuitively, it should also work on the standard re-ID task.
    2. The whole process seems to be: before each training epoch, the proposed sampler randomly selects one image per class, then computes a distance matrix over those images. The distance matrix represents distances between classes, so we can mine the hardest samples in the entire dataset rather than within a batch. But I did not get where the "graph" enters this process. Looking forward to your help.

    opened by liyuke65535 3
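
    A minimal sketch of the sampling process as read in this comment (illustrative names only; the repository's Graph Sampler may differ in detail). Classes are nodes, pairwise distances between class representatives are edge weights, and each mini-batch is drawn from a class together with its nearest neighbor classes in that graph:

        import torch

        def graph_sampling_batches(class_feats, k=3):
            # class_feats: one representative feature per class, shape (num_classes, D).
            dist = torch.cdist(class_feats, class_feats)  # class-level "graph" edge weights
            dist.fill_diagonal_(float('inf'))
            # For each anchor class, take its k nearest (hardest) neighbor classes;
            # a mini-batch is then sampled from these classes together.
            neighbors = dist.topk(k, largest=False).indices
            return [torch.cat([torch.tensor([c]), neighbors[c]])
                    for c in range(len(class_feats))]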
  • About s=1

    Hello Prof. Liao, I have a question about the choice of s. Your paper mentions choosing s=1 for efficiency. My understanding is this: without the class memory, using pairwise matching, one QAConv pass has time complexity O(B^2 * (HW)^2 * s), so judging by the complexity alone, a slightly larger or smaller s should hardly matter. But when s=1 the computation reduces to a plain matrix multiplication, and since matrix multiplication is heavily optimized, the actual runtime shrinks dramatically; hence s=1. Is my understanding correct? I would appreciate your guidance!

    opened by pSGAme 2
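
    To illustrate the point made in this question: with s=1 each local correlation is a dot product over channels, so the full (HW x HW) score map between two images is one matrix multiplication (a toy sketch with assumed sizes):

        import torch

        C, HW = 256, 24 * 8     # assumed channel count and number of feature-map locations
        q = torch.randn(HW, C)  # query locations x channels
        g = torch.randn(HW, C)  # gallery locations x channels
        scores = q @ g.t()      # (HW, HW) local similarity map in a single matmul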
  • Questions about the Graph Sampling work

    Hello Prof. Liao, I have read your recent Graph Sampling paper, and there are two points I am unclear about and would like to ask you:

    1. When building the graph at each epoch, does randomly sampling a single image per class introduce a large bias?
    2. When K=2 is used to handle overly small gradients, can it happen that hard cases are never sampled at all? (I ran into this before on an industrial dataset far larger than the academic ones: the hard cases were never sampled.)

    opened by zhustrong 2
  • Issues about evaluators.py

    I use Market as the training dataset and Duke as the test dataset. When I use --do_tlift, it reports that the tensor sizes do not match.

    In evaluators.py, at line 212, the original dist size is 222817661 on the Market dataset, while the size of dist_rerank is 2228253, because num_gal is not the same. The value of num_gal is the length of the gallery images as defined at line 189; however, it is redefined at line 204 as the size of the gallery features.

    opened by ArminLee 2
  • self.model.eval()

    Recently I have been reading your QAConv code, and I have a question. In the train() method of trainer.py, class BaseTrainer(object) contains the following:

    for i, inputs in enumerate(data_loader):
        self.model.eval()
        self.criterion.train()

    Why don't you set the model to train mode with self.model.train() instead of using model.eval()? In the whole project I also found no other place that calls model.train().

    opened by xiaopanchen 2
  • Can't find qaconv_loss

    Hello,

    First of all, thanks so much for your good work!

    Here is a question: in test_matching.py you import from reid.loss.qaconv_loss import QAConvLoss, but it seems that qaconv_loss no longer exists, so I changed to other loss functions. Will this influence the performance?

    Thanks!

    opened by xyimaging 5
Releases (v2.1)
  • v2.1 (Sep 16, 2021)

    • Simplified graph sampling
    • Einstein summation for QAConv
    • Batch-hard triplet loss
    • Adaptive epoch and learning rate scheduling
    • Automatic mixed precision training
  • v2.0 (Apr 1, 2021)

    • Include a new sampler called Graph Sampler (GS).
    • Remove the class memory based loss. Instead, a pairwise matching loss is implemented.
    • This version is much more efficient in learning.
  • v1.2 (Mar 31, 2021)

    Include some popular data augmentation methods, and change the ranking.py implementation to the original open-reid version, so that it is more consistent with most other implementations (e.g. open-reid, torch-reid, fast-reid).

  • v1.1 (Mar 30, 2021)

    • Include the IBN-Net as backbone, and the RandPerson dataset.
    • Include a pre-training function for a better initialization, so that the results are now more stable.
  • v1.0-eccv (Aug 12, 2020)

Owner
Shengcai Liao
Lead Scientist, Ph.D., Inception Institute of Artificial Intelligence