Official PyTorch implementation for the paper "Efficient Two-Stage Detection of Human–Object Interactions with a Novel Unary–Pairwise Transformer"

Overview

UPT: Unary–Pairwise Transformers


This repository contains the official PyTorch implementation for the paper

Frederic Z. Zhang, Dylan Campbell and Stephen Gould. Efficient Two-Stage Detection of Human–Object Interactions with a Novel Unary–Pairwise Transformer. arXiv preprint arXiv:2112.01838.

[project page] [preprint]

Abstract

...
...
However, the success of such one-stage HOI detectors can largely be attributed to the representation power of transformers. We discovered that when equipped with the same transformer, their two-stage counterparts can be more performant and memory-efficient, while taking a fraction of the time to train. In this work, we propose the Unary–Pairwise Transformer, a two-stage detector that exploits unary and pairwise representations for HOIs. We observe that the unary and pairwise parts of our transformer network specialise, with the former preferentially increasing the scores of positive examples and the latter decreasing the scores of negative examples. We evaluate our method on the HICO-DET and V-COCO datasets, and significantly outperform state-of-the-art approaches. At inference time, our model with ResNet50 approaches real-time performance on a single GPU.

Demonstration on data in the wild

Model Zoo

We provide weights for UPT models pre-trained on HICO-DET and V-COCO for potential downstream applications. In addition, we also provide weights for fine-tuned DETR models to facilitate reproducibility. To attempt fine-tuning the DETR model yourself, refer to this repository.

| Model | Dataset | Default Settings (Full/Rare/Non-rare mAP) | Inference | UPT Weights | DETR Weights |
|---|---|---|---|---|---|
| UPT-R50 | HICO-DET | (31.66, 25.94, 33.36) | 0.042s | weights | weights |
| UPT-R101 | HICO-DET | (32.31, 28.55, 33.44) | 0.061s | weights | weights |
| UPT-R101-DC5 | HICO-DET | (32.62, 28.62, 33.81) | 0.124s | weights | weights |

| Model | Dataset | Scenario 1 | Scenario 2 | Inference | UPT Weights | DETR Weights |
|---|---|---|---|---|---|---|
| UPT-R50 | V-COCO | 59.0 | 64.5 | 0.043s | weights | weights |
| UPT-R101 | V-COCO | 60.7 | 66.2 | 0.064s | weights | weights |
| UPT-R101-DC5 | V-COCO | 61.3 | 67.1 | 0.131s | weights | weights |

The inference speed was benchmarked on a GeForce RTX 3090. Note that weights of the UPT model include those of the detector (DETR). You do not need to download the DETR weights, unless you want to train the UPT model from scratch. Training UPT-R50 with 8 GeForce GTX TITAN X GPUs takes around 5 hours on HICO-DET and 40 minutes on V-COCO, almost a tenth of the time compared to other one-stage models such as QPIC.

Contact

For general inquiries regarding the paper and code, please post them in Discussions. For bug reports and feature requests, please post them in Issues. You can also contact me at [email protected].

Prerequisites

  1. Install the lightweight deep learning library Pocket. The recommended PyTorch version is 1.9.0. A quick environment sanity check is sketched right after this list.
  2. Download the repository and the submodules.
    git clone https://github.com/fredzzhang/upt.git
    git submodule init
    git submodule update
  3. Prepare the HICO-DET dataset.
    1. If you have not downloaded the dataset before, run the following script.
      cd /path/to/upt/hicodet
      bash download.sh
    2. If you have previously downloaded the dataset, simply create a soft link.
      cd /path/to/upt/hicodet
      ln -s /path/to/hicodet_20160224_det ./hico_20160224_det
  4. Prepare the V-COCO dataset (contained in MS COCO).
    1. If you have not downloaded the dataset before, run the following script.
      cd /path/to/upt/vcoco
      bash download.sh
    2. If you have previously downloaded the dataset, simply create a soft link.
      cd /path/to/upt/vcoco
      ln -s /path/to/coco ./mscoco2014
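
As a quick sanity check for step 1, the following minimal Python snippet (assuming Pocket and a CUDA-enabled PyTorch build are installed) confirms the environment:

import torch
import pocket  # the Pocket library installed in step 1

print(torch.__version__)           # 1.9.0 is the recommended version
print(torch.cuda.is_available())   # training and the benchmarked inference times assume a CUDA GPU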

License

UPT is released under the BSD-3-Clause License.

Inference

We have implemented inference utilities with different visualisation options. Provided you have downloaded the model weights to checkpoints/, run the following command to visualise detected instances together with the attention maps from the cooperative and competitive layers. Use the flag --index to select images, and --box-score-thresh to modify the filtering threshold on object boxes.

python inference.py --resume checkpoints/upt-r50-hicodet.pt --index 8789

Here is the sample output. Note that we manually selected some informative attention maps to display. The predicted scores for each action will be printed by the script as well.

To select the V-COCO dataset and V-COCO models, use the flag --dataset vcoco, and then load the corresponding weights. To visualise interactive human-object pairs for a particular action class, use the flag --action to specify the action index. Here is a lookup table for the action indices.
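
For example, the following command visualises V-COCO detections, assuming the corresponding weights have been downloaded to checkpoints/upt-r50-vcoco.pt (the image and action indices below are purely illustrative):

python inference.py --dataset vcoco --resume checkpoints/upt-r50-vcoco.pt --index 10 --action 1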

Additionally, to cater for different needs, we implemented an option to run inference on custom images, using the flag --image-path. The following is an example for the interaction holding an umbrella.

python inference.py --resume checkpoints/upt-r50-hicodet.pt --image-path ./assets/umbrella.jpeg --action 36

Training and Testing

Refer to launch_template.sh for training and testing commands with different options. To train the UPT model from scratch, you need to download the weights for the corresponding DETR model, and place them under /path/to/upt/checkpoints/. Adjust --world-size based on the number of GPUs available.
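
For reference, a single-GPU HICO-DET training command of the following form appears in the issues below; increase --world-size to match the number of GPUs you have:

python main.py --world-size 1 --pretrained checkpoints/detr-r50-hicodet.pth --output-dir checkpoints/upt-r50-hicodet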

To test the UPT model on HICO-DET, you can either use the Python utilities we implemented or the Matlab utilities provided by Chao et al. For V-COCO, we did not implement evaluation utilities, and instead use the utilities provided by Gupta et al. Refer to these instructions for more details.
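
For V-COCO, the typical workflow is to first cache the detections with the --cache flag (the command below also appears verbatim in the issues section) and then evaluate the generated cache.pkl with the official vsrl_eval utilities:

python main.py --cache --dataset vcoco --data-root vcoco/ --partitions trainval test --output-dir vcoco-r50 --resume checkpoints/upt-r50-vcoco.pt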

Citation

If you find our work useful for your research, please consider citing us:

@article{zhang2021upt,
  author    = {Frederic Z. Zhang and Dylan Campbell and Stephen Gould},
  title     = {Efficient Two-Stage Detection of Human-Object Interactions with a Novel Unary-Pairwise Transformer},
  journal   = {arXiv preprint arXiv:2112.01838},
  year      = {2021}
}

@inproceedings{zhang2021scg,
  author    = {Frederic Z. Zhang and Dylan Campbell and Stephen Gould},
  title     = {Spatially Conditioned Graphs for Detecting Human–Object Interactions},
  booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
  month     = {October},
  year      = {2021},
  pages     = {13319-13327}
}
Comments
  • error when test vcoco

    I use python main.py --cache --dataset vcoco --data-root vcoco/ --partitions trainval test --output-dir vcoco-r50 --resume checkpoints/upt-r50-vcoco.pt to generate cache.pkl, but it reports an error during evaluation.

    The eval code is:

    from vsrl_eval import VCOCOeval
    
    vsrl_annot_file = 'data/vcoco/vcoco_val.json'
    coco_file = 'data/instances_vcoco_all_2014.json'
    split_file = 'data/splits/vcoco_val.ids'
    
    vcocoeval = VCOCOeval(vsrl_annot_file, coco_file, split_file)
    
    det_file = '/media/ming-t/Deng/relation_mppe/HOI-UPT/vcoco-r50/cache.pkl'
    vcocoeval._do_eval(det_file, ovr_thresh=0.5)
    

    The error is:

    loading annotations into memory...
    Done (t=0.74s)
    creating index...
    index created!
    loading vcoco annotations...
    Traceback (most recent call last):
      File "test.py", line 14, in <module>
        vcocoeval._do_eval(det_file, ovr_thresh=0.5)
      File "/media/ming-t/Deng/relation_mppe/HOI-UPT/lib/vcoco/vsrl_eval.py", line 194, in _do_eval
        self._do_agent_eval(vcocodb, detections_file, ovr_thresh=ovr_thresh)
      File "/media/ming-t/Deng/relation_mppe/HOI-UPT/lib/vcoco/vsrl_eval.py", line 417, in _do_agent_eval
        assert(np.amax(rec) <= 1)
      File "<__array_function__ internals>", line 180, in amax
      File "/home/ming-t/anaconda3/envs/pocket/lib/python3.8/site-packages/numpy/core/fromnumeric.py", line 2793, in amax
        return _wrapreduction(a, np.maximum, 'max', axis, None, out,
      File "/home/ming-t/anaconda3/envs/pocket/lib/python3.8/site-packages/numpy/core/fromnumeric.py", line 86, in _wrapreduction
        return ufunc.reduce(obj, axis, dtype, out, **passkwargs)
    ValueError: zero-size array to reduction operation maximum which has no identity
    

    How to solve it?

    opened by leijue222 14
  • The HOI loss is NaN for rank 0

    Dear sir, I followed the README to build the UPT network, but when I use the command python main.py --world-size 1 --dataset vcoco --data-root ./v-coco --partitions trainval test --pretrained ../detr-r50-vcoco.pth --output-dir ./upt-r50-vcoco.pt

    I got an error:

    Traceback (most recent call last):
      File "main.py", line 208, in <module>
        mp.spawn(main, nprocs=args.world_size, args=(args,))
      File "/root/miniconda3/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 230, in spawn
        return start_processes(fn, args, nprocs, join, daemon, start_method='spawn')
      File "/root/miniconda3/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 188, in start_processes
        while not context.join():
      File "/root/miniconda3/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 150, in join
        raise ProcessRaisedException(msg, error_index, failed_process.pid)
    torch.multiprocessing.spawn.ProcessRaisedException:

    -- Process 0 terminated with the following error:
    Traceback (most recent call last):
      File "/root/miniconda3/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 59, in _wrap
        fn(i, *args)
      File "/root/autodl-tmp/upload/main.py", line 125, in main
        engine(args.epochs)
      File "/root/pocket/pocket/pocket/core/distributed.py", line 139, in __call__
        self._on_each_iteration()
      File "/root/autodl-tmp/upload/utils.py", line 138, in _on_each_iteration
        raise ValueError(f"The HOI loss is NaN for rank {self._rank}")
    ValueError: The HOI loss is NaN for rank 0

    I tried to train without the pretrained model and got the same error. I tried to print the loss but it showed an empty tensor. As a beginner, I have no idea what happened. Any help would be appreciated. I look forward to your reply. Thank you very much.

    Inactive 
    opened by OBVIOUSDAWN 11
  • Generate the results on the friends.gif

    Hello! Thank you for this amazing work! I am curious to know how you got the inference results showing the names of the objects and the activities on the demo_friends.gif. Can you please tell how you achieved that? Thanks in advance.

    opened by Andre1998Shuvam 8
  • Predicted object class instead of just number?

    Hi,

    thanks for the amazing work! I would like to ask where we can see the predicted object class of each bounding box, or the prediction result in triplet form? Thank you!

    enhancement 
    opened by xiaoxiaoczw 7
  • code bug?

    opened by ltttpku 6
  • There is still a problem.

    Traceback (most recent call last):
      File "inference.py", line 225, in <module>
        main(args)
      File "C:\Users\User\anaconda3\envs\colab\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
        return func(*args, **kwargs)
      File "inference.py", line 150, in main
        upt = build_detector(args, conversion)
      File "C:\Users\User\PycharmProjects\hoi\UPT\upt.py", line 276, in build_detector
        detr.backbone[0].num_channels,
      File "C:\Users\User\anaconda3\envs\colab\lib\site-packages\torch\nn\modules\module.py", line 1207, in __getattr__
        raise AttributeError("'{}' object has no attribute '{}'".format(
    AttributeError: 'DETRsegm' object has no attribute 'backbone'


    I changed the torch version and tried it in the Colab environment, but the problem still occurs in the same place.

    If possible, can you tell me all libraries using "pip freeze > requirements.txt"?

    If it is not possible to disclose it externally, I would appreciate it if you could send it to [email protected].

    opened by ghzmwhdk777 4
  • Here is problem

    Traceback (most recent call last):
      File "inference.py", line 225, in <module>
        main(args)
      File "C:\Users\User\anaconda3\envs\colab\lib\site-packages\torch\autograd\grad_mode.py", line 28, in decorate_context
        return func(*args, **kwargs)
      File "inference.py", line 150, in main
        upt = build_detector(args, conversion)
      File "C:\Users\User\PycharmProjects\hoi\UPT\upt.py", line 268, in build_detector
        detr, _, postprocessors = build_model(args)
      File "C:\Users\User\PycharmProjects\hoi\UPT\detr\models\__init__.py", line 6, in build_model
        return build(args)
      File "C:\Users\User\PycharmProjects\hoi\UPT\detr\models\detr.py", line 313, in build
        num_classes = 20 if args.dataset_file != 'coco' else 91
    AttributeError: 'Namespace' object has no attribute 'dataset_file'

    opened by ghzmwhdk777 4
  • How about training a DETR model on the VCOCO dataset

    Thank you for your excellent work. You have provided a tutorial on training DETR models on the HICO-DET dataset; could you tell us how you trained DETR on the V-COCO dataset?

    moved to discussion 
    opened by ddwhzh 4
  • confused about the vcoco dataset

    There are some cool properties of the V-COCO dataset you implemented:
    - "object_to_action" gives the list of actions for each object, i.e. {1: [0, 3, 11, 15], 2: [0, 1, 2, 3, 11], ......}
    - "objects" returns the list of objects, i.e. ['background', 'person', 'bicycle', .......]
    - "actions" returns the list of actions, i.e. ['hold obj', 'sit instr', 'ride instr', .......]

    However, I'm confused about the relationships among them:

    1. Which object does the key 1 of "1: [0, 3, 11, 15]", which is the first item of object_to_action, represent?
    2. Which action does the values [0, 3, 11, 15] of "1: [0, 3, 11, 15]" represent?

    According to the list of actions and objects, actions 0, 3, 11 and 15 represent hold obj, look obj, carry obj and cut obj respectively, while object 1 represents person, which appears to be weird.

    question moved to discussion 
    opened by ltttpku 3
  • train

    python main.py --world-size 1 --pretrained checkpoints/detr-r50-hicodet.pth --output-dir checkpoints/upt-r50-hicodet

    raise ValueError(f"The HOI loss is NaN for rank {self._rank}")
    ValueError: The HOI loss is NaN for rank 0

    opened by wangjunbianqiang 2
  • Multiple loss training code

    Hi, @fredzzhang :

    I want to try training with multiple losses. I found the relevant code and added a loss; it runs without reporting any errors.

    But how do I properly train with multiple losses and set the weight of each loss as a hyperparameter?

    # Inside the model's forward pass (training branch): return both losses.
    if self.training:
        interaction_loss = self.compute_interaction_loss(
            boxes, bh, bo, logits, prior, targets, pairwise_tokens_x_collated)
        interaction_x_loss = self.compute_interaction_x_loss(
            boxes, bh, bo, logits, prior, targets, pairwise_tokens_x_collated)
        loss_dict = dict(
            interaction_loss=interaction_loss,
            interaction_x_loss=interaction_x_loss
        )
        return loss_dict

    # Inside the training engine: sum the returned losses and back-propagate.
    def _on_each_iteration(self):
        loss_dict = self._state.net(
            *self._state.inputs, targets=self._state.targets)
        if loss_dict['interaction_loss'].isnan():
            raise ValueError(f"The HOI loss is NaN for rank {self._rank}")

        self._state.loss = sum(loss for loss in loss_dict.values())
        self._state.optimizer.zero_grad(set_to_none=True)
        self._state.loss.backward()
        if self.max_norm > 0:
            torch.nn.utils.clip_grad_norm_(self._state.net.parameters(), self.max_norm)
        self._state.optimizer.step()

    yaoyaosanqi.

    opened by yaoyaosanqi 2