The official code for the ICCV 2021 oral paper "Rethinking Counting and Localization in Crowds: A Purely Point-Based Framework"

Overview

P2PNet (ICCV2021 Oral Presentation)

This repository contains the official PyTorch implementation of P2PNet, as described in Rethinking Counting and Localization in Crowds: A Purely Point-Based Framework.

A brief introduction to P2PNet can be found at 机器之心 (almosthuman).

The code is tested with PyTorch 1.5.0 and may not run with other versions.

Visualized demos for P2PNet

The network

The overall architecture of P2PNet. Built upon a VGG16 backbone, it first introduces an upsampling path to obtain a fine-grained feature map, then exploits two branches to simultaneously predict a set of point proposals and their confidence scores.
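As a rough illustration of this two-branch design, the sketch below shows what such a prediction head could look like in PyTorch. It is not the repository's actual module; the channel width and the number of anchor points per location are assumed values chosen only for illustration.

import torch
import torch.nn as nn

class PointHead(nn.Module):
    """Illustrative two-branch head: per-location point offsets + confidence.

    A simplified sketch, NOT the repository's implementation. The channel
    width (256) and the number of anchor points per location (4) are
    assumptions for demonstration purposes.
    """
    def __init__(self, in_channels=256, num_anchor_points=4):
        super().__init__()
        self.num_anchor_points = num_anchor_points
        # Regression branch: 2 coordinates (dx, dy) per anchor point
        self.reg_branch = nn.Conv2d(in_channels, num_anchor_points * 2,
                                    kernel_size=3, padding=1)
        # Classification branch: 1 confidence score per anchor point
        self.cls_branch = nn.Conv2d(in_channels, num_anchor_points,
                                    kernel_size=3, padding=1)

    def forward(self, feat):
        # feat: (B, C, H, W) fine-grained feature map from the upsampling path
        b, _, h, w = feat.shape
        offsets = self.reg_branch(feat).view(b, self.num_anchor_points, 2, h, w)
        scores = self.cls_branch(feat).view(b, self.num_anchor_points, h, w).sigmoid()
        return offsets, scores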

Comparison with state-of-the-art methods

P2PNet achieves state-of-the-art performance on several challenging datasets with various crowd densities.

Methods Venue SHTechPartA(MAE/MSE) SHTechPartB(MAE/MSE) UCF_CC_50(MAE/MSE) UCF_QNRF(MAE/MSE)
CAN CVPR'19 62.3/100.0 7.8/12.2 212.2/243.7 107.0/183.0
Bayesian+ ICCV'19 62.8/101.8 7.7/12.7 229.3/308.2 88.7/154.8
S-DCNet ICCV'19 58.3/95.0 6.7/10.7 204.2/301.3 104.4/176.1
SANet+SPANet ICCV'19 59.4/92.5 6.5/9.9 232.6/311.7 -/-
DUBNet AAAI'20 64.6/106.8 7.7/12.5 243.8/329.3 105.6/180.5
SDANet AAAI'20 63.6/101.8 7.8/10.2 227.6/316.4 -/-
ADSCNet CVPR'20 55.4/97.7 6.4/11.3 198.4/267.3 71.3/132.5
ASNet CVPR'20 57.78/90.13 -/- 174.84/251.63 91.59/159.71
AMRNet ECCV'20 61.59/98.36 7.02/11.00 184.0/265.8 86.6/152.2
AMSNet ECCV'20 56.7/93.4 6.7/10.2 208.4/297.3 101.8/163.2
DM-Count NeurIPS'20 59.7/95.7 7.4/11.8 211.0/291.5 85.6/148.3
Ours - 52.74/85.06 6.25/9.9 172.72/256.18 85.32/154.5

Comparison on the NWPU-Crowd dataset ([O]: overall; [L]: averaged over luminance-attribute subsets; [S]: averaged over scale-attribute subsets).

Methods MAE[O] MSE[O] MAE[L] MAE[S]
MCNN 232.5 714.6 220.9 1171.9
SANet 190.6 491.4 153.8 716.3
CSRNet 121.3 387.8 112.0 522.7
PCC-Net 112.3 457.0 111.0 777.6
CANNet 110.0 495.3 102.3 718.3
Bayesian+ 105.4 454.2 115.8 750.5
S-DCNet 90.2 370.5 82.9 567.8
DM-Count 88.4 388.6 88.0 498.0
Ours 77.44 362 83.28 553.92

The overall performance for both counting and localization.

nAP$_{\delta}$ SHTechPartA SHTechPartB UCF_CC_50 UCF_QNRF NWPU_Crowd
$\delta=0.05$ 10.9% 23.8% 5.0% 5.9% 12.9%
$\delta=0.25$ 70.3% 84.2% 54.5% 55.4% 71.3%
$\delta=0.50$ 90.1% 94.1% 88.1% 83.2% 89.1%
$\delta=\{0.05:0.05:0.50\}$ 64.4% 76.3% 54.3% 53.1% 65.0%

Comparison for the localization performance in terms of F1-Measure on NWPU.

Method F1-Measure Precision Recall
FasterRCNN 0.068 0.958 0.035
TinyFaces 0.567 0.529 0.611
RAZ 0.599 0.666 0.543
Crowd-SDNet 0.637 0.651 0.624
PDRNet 0.653 0.675 0.633
TopoCount 0.692 0.683 0.701
D2CNet 0.700 0.741 0.662
Ours 0.712 0.729 0.695

Installation

  • Clone this repo into a directory named P2PNET_ROOT
  • Organize your datasets as required
  • Install Python dependencies. We use Python 3.6.5 and PyTorch 1.5.0:
pip install -r requirements.txt

Organize the counting dataset

We use a list file to collect all the images and their ground-truth annotations in a counting dataset. When your dataset is organized as recommended below, each line of the list file pairs an image with its annotation file:

train/scene01/img01.jpg train/scene01/img01.txt
train/scene01/img02.jpg train/scene01/img02.txt
...
train/scene02/img01.jpg train/scene02/img01.txt

Dataset structures:

DATA_ROOT/
        |->train/
        |    |->scene01/
        |    |->scene02/
        |    |->...
        |->test/
        |    |->scene01/
        |    |->scene02/
        |    |->...
        |->train.list
        |->test.list

DATA_ROOT is your path containing the counting datasets.
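If you need to generate the list files yourself, the following sketch builds train.list and test.list from the directory layout above. It is only an illustration and assumes every image has an annotation txt with the same basename in the same scene folder; adapt it if your data differs.

import os

def make_list_file(data_root, split):
    """Write `<split>.list` with lines of the form:
    <split>/<scene>/<img>.jpg <split>/<scene>/<img>.txt

    Minimal sketch; assumes each image has a same-named .txt annotation.
    """
    lines = []
    split_dir = os.path.join(data_root, split)
    for scene in sorted(os.listdir(split_dir)):
        scene_dir = os.path.join(split_dir, scene)
        if not os.path.isdir(scene_dir):
            continue
        for name in sorted(os.listdir(scene_dir)):
            if name.lower().endswith(('.jpg', '.jpeg', '.png')):
                img_rel = os.path.join(split, scene, name)
                ann_rel = os.path.splitext(img_rel)[0] + '.txt'
                lines.append(f'{img_rel} {ann_rel}')
    with open(os.path.join(data_root, f'{split}.list'), 'w') as f:
        f.write('\n'.join(lines) + '\n')

# Example usage:
# make_list_file('/path/to/DATA_ROOT', 'train')
# make_list_file('/path/to/DATA_ROOT', 'test')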

Annotations format

For each image, the annotations are stored in a single txt file with one annotated point per line. Note that pixel coordinates are 0-indexed. The expected format of each line is:

x1 y1
x2 y2
...
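For reference, a minimal reader for this annotation format could look like the sketch below. The training code ships its own data loader; this is only an illustration of the format.

import numpy as np

def load_points(ann_path):
    """Read a P2PNet-style annotation file: one `x y` pair per line,
    0-indexed pixel coordinates. Returns an (N, 2) float32 array.

    Minimal reader sketch, not the repository's loader.
    """
    points = []
    with open(ann_path) as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            x, y = map(float, line.split()[:2])
            points.append((x, y))
    return np.asarray(points, dtype=np.float32).reshape(-1, 2)

# The crowd count of an image is simply the number of annotated points:
# count = len(load_points('train/scene01/img01.txt'))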

Training

The network can be trained using the train.py script. For training on SHTechPartA, run:

CUDA_VISIBLE_DEVICES=0 python train.py --data_root $DATA_ROOT \
    --dataset_file SHHA \
    --epochs 3500 \
    --lr_drop 3500 \
    --output_dir ./logs \
    --checkpoints_dir ./weights \
    --tensorboard_dir ./logs \
    --lr 0.0001 \
    --lr_backbone 0.00001 \
    --batch_size 8 \
    --eval_freq 1 \
    --gpu_id 0

By default, a periodic evaluation will be conducted on the validation set.

Testing

A model trained on SHTechPartA (with an MAE of 51.96) is available at ./weights. Run the following command to launch a visualization demo:

CUDA_VISIBLE_DEVICES=0 python run_test.py --weight_path ./weights/SHTechA.pth --output_dir ./logs/

Acknowledgements

  • Part of the code is borrowed from the C^3 Framework.
  • We refer to DETR to implement our matching strategy.

Citing P2PNet

If you find P2PNet useful in your project, please consider citing us:

@inproceedings{song2021rethinking,
  title={Rethinking Counting and Localization in Crowds: A Purely Point-Based Framework},
  author={Song, Qingyu and Wang, Changan and Jiang, Zhengkai and Wang, Yabiao and Tai, Ying and Wang, Chengjie and Li, Jilin and Huang, Feiyue and Wu, Yang},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
  year={2021}
}

Related works from Tencent Youtu Lab

  • [AAAI2021] To Choose or to Fuse? Scale Selection for Crowd Counting. (paper link & codes)
  • [ICCV2021] Uniformity in Heterogeneity: Diving Deep into Count Interval Partition for Crowd Counting. (paper link & codes)