High-resolution networks and Segmentation Transformer for Semantic Segmentation

Overview

This repository provides the official implementation of high-resolution networks (HRNet) and the OCR-based Segmentation Transformer for semantic segmentation.

Branches

  • This is the implementation for HRNet + OCR.
  • The PyTorch 1.1 version is available here.
  • The PyTorch 0.4.1 version is available here.

News

  • [2021/05/04] We rephrase the OCR approach as Segmentation Transformer (pdf). An updated implementation will be provided soon.

  • [2021/02/16] Based on the PaddleClas ImageNet pretrained weights, we achieve 83.22% on Cityscapes val, 59.62% on PASCAL-Context val (new SOTA), 45.20% on COCO-Stuff val (new SOTA), 58.21% on LIP val and 47.98% on ADE20K val. Please check out openseg.pytorch for more details.

  • [2020/08/16] MMSegmentation has added support for our HRNet + OCR.

  • [2020/07/20] Researchers from AInnovation achieved Rank #1 on the ADE20K leaderboard by training our HRNet + OCR with a semi-supervised learning scheme. More details are in their Technical Report.

  • [2020/07/09] Our paper is accepted by ECCV 2020: Object-Contextual Representations for Semantic Segmentation. Notably, researchers from NVIDIA set a new state-of-the-art performance of 85.4% on the Cityscapes leaderboard by combining our HRNet + OCR with a new hierarchical multi-scale attention scheme.

  • [2020/03/13] Our paper is accepted by TPAMI: Deep High-Resolution Representation Learning for Visual Recognition.

  • HRNet + OCR + SegFix: Rank #1 (84.5) on the Cityscapes leaderboard. OCR: object-contextual representations (pdf). HRNet + OCR is reproduced here.

  • Thanks to Google and UIUC researchers: a modified HRNet combined with semantic and instance multi-scale context achieves a SOTA panoptic segmentation result on the Mapillary Vistas challenge. See the paper.

  • Small HRNet models for Cityscapes segmentation. Superior to MobileNetV2Plus ....

  • Rank #1 (83.7) on the Cityscapes leaderboard: HRNet combined with an extension of object context.

  • PyTorch-v1.1 and the official Sync-BN are supported. We have reproduced the Cityscapes results on the new codebase. Please check the pytorch-v1.1 branch.

Introduction

This is the official code of high-resolution representations for semantic segmentation. We augment HRNet with a very simple segmentation head, shown in the figure below: we aggregate the output representations at four different resolutions, fuse them with a 1x1 convolution, and feed the fused representation into the classifier. We evaluate our methods on three datasets: Cityscapes, PASCAL-Context and LIP.

[Figure: HRNet with the simple segmentation head]
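As a reading aid, here is a minimal PyTorch sketch of such a head (module names and channel widths are illustrative, not the repository's exact code): the three lower-resolution outputs are bilinearly upsampled to the highest resolution, concatenated, fused by a 1x1 convolution, and classified per pixel.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleSegHead(nn.Module):
    # Channel widths follow HRNetV2-W48 (48, 96, 192, 384); illustrative only.
    def __init__(self, in_channels=(48, 96, 192, 384), num_classes=19):
        super().__init__()
        total = sum(in_channels)
        self.fuse = nn.Sequential(
            nn.Conv2d(total, total, kernel_size=1, bias=False),
            nn.BatchNorm2d(total),
            nn.ReLU(inplace=True),
        )
        self.classifier = nn.Conv2d(total, num_classes, kernel_size=1)

    def forward(self, feats):
        # feats: four feature maps at strides 4, 8, 16 and 32.
        h, w = feats[0].shape[2:]
        ups = [feats[0]] + [
            F.interpolate(f, size=(h, w), mode='bilinear', align_corners=True)
            for f in feats[1:]
        ]
        x = torch.cat(ups, dim=1)              # aggregate the four resolutions
        return self.classifier(self.fuse(x))   # 1x1 fuse, then per-pixel logits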

We further combine HRNet with Object-Contextual Representations (OCR) and achieve higher performance on the three datasets. The code of HRNet + OCR is contained in this branch. The overall OCR framework and the equivalent Transformer pipeline are illustrated below:

[Figure: OCR pipeline]

[Figure: the equivalent Segmentation Transformer pipeline]
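The core OCR computation can be sketched compactly (a simplification of the paper's formulation that omits the 1x1 transform layers; not the repository's exact code): soft object regions pool pixel features into K object region representations, and each pixel then attends over these regions — exactly the cross-attention of a Transformer decoder with pixels as queries and regions as keys/values.

import torch
import torch.nn.functional as F

def ocr_context(pixel_feats, region_logits):
    # pixel_feats:   (B, C, H, W) backbone features
    # region_logits: (B, K, H, W) coarse soft object-region scores
    b, c, h, w = pixel_feats.shape
    feats = pixel_feats.flatten(2)                        # (B, C, HW)
    regions = F.softmax(region_logits.flatten(2), dim=2)  # normalize each region over pixels
    # Object region representations: region-weighted sums of pixel features.
    region_feats = torch.bmm(regions, feats.transpose(1, 2))              # (B, K, C)
    # Pixel-region relation: each pixel attends over the K regions.
    sim = torch.bmm(feats.transpose(1, 2), region_feats.transpose(1, 2))  # (B, HW, K)
    attn = F.softmax(sim / c ** 0.5, dim=2)
    # Object-contextual representation for every pixel.
    context = torch.bmm(attn, region_feats)               # (B, HW, C)
    context = context.transpose(1, 2).reshape(b, c, h, w)
    return torch.cat([context, pixel_feats], dim=1)       # augmented representation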

Segmentation models

The models are initialized with weights pretrained on ImageNet. 'Paddle' means the results are based on PaddleClas pretrained HRNet models. You can download the pretrained models from https://github.com/HRNet/HRNet-Image-Classification. Note that, slightly differently from common practice, we use align_corners=True for upsampling in HRNet.
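For reference, that upsampling convention corresponds to the following call (a toy example):

import torch
import torch.nn.functional as F

x = torch.randn(1, 48, 64, 128)  # toy feature map
# This codebase uses align_corners=True for bilinear upsampling in HRNet,
# unlike implementations that keep PyTorch's default align_corners=False.
y = F.interpolate(x, scale_factor=2, mode='bilinear', align_corners=True)
print(y.shape)  # torch.Size([1, 48, 128, 256])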

  1. Performance on the Cityscapes dataset. The models are trained with an input size of 512x1024 and tested with 1024x2048. If multi-scale testing is used, we adopt scales 0.5, 0.75, 1.0, 1.25, 1.5, 1.75 (a sketch of this test-time aggregation follows the tables below).

| model | Train Set | Test Set | OHEM | Multi-scale | Flip | mIoU | Link |
| :-- | :-- | :-- | :-- | :-- | :-- | :-- | :-- |
| HRNetV2-W48 | Train | Val | No | No | No | 80.9 | Github / BaiduYun (Access Code: pmix) |
| HRNetV2-W48 + OCR | Train | Val | No | No | No | 81.6 | Github / BaiduYun (Access Code: fa6i) |
| HRNetV2-W48 + OCR | Train + Val | Test | No | Yes | Yes | 82.3 | Github / BaiduYun (Access Code: ycrk) |
| HRNetV2-W48 (Paddle) | Train | Val | No | No | No | 81.6 | --- |
| HRNetV2-W48 + OCR (Paddle) | Train | Val | No | No | No | --- | --- |
| HRNetV2-W48 + OCR (Paddle) | Train + Val | Test | No | Yes | Yes | --- | --- |
  2. Performance on the LIP dataset. The models are trained and tested with an input size of 473x473.

| model | OHEM | Multi-scale | Flip | mIoU | Link |
| :-- | :-- | :-- | :-- | :-- | :-- |
| HRNetV2-W48 | No | No | Yes | 55.83 | Github / BaiduYun (Access Code: fahi) |
| HRNetV2-W48 + OCR | No | No | Yes | 56.48 | Github / BaiduYun (Access Code: xex2) |
| HRNetV2-W48 (Paddle) | No | No | Yes | --- | --- |
| HRNetV2-W48 + OCR (Paddle) | No | No | Yes | --- | --- |

Note: currently we can only reproduce the HRNet + OCR results on the LIP dataset with PyTorch 0.4.1.

  3. Performance on the PASCAL-Context dataset. The models are trained and tested with an input size of 520x520. If multi-scale testing is used, we adopt scales 0.5, 0.75, 1.0, 1.25, 1.5, 1.75, 2.0 (the same as EncNet, DANet, etc.).

| model | num classes | OHEM | Multi-scale | Flip | mIoU | Link |
| :-- | :-- | :-- | :-- | :-- | :-- | :-- |
| HRNetV2-W48 | 59 classes | No | Yes | Yes | 54.1 | Github / BaiduYun (Access Code: wz6v) |
| HRNetV2-W48 + OCR | 59 classes | No | Yes | Yes | 56.2 | Github / BaiduYun (Access Code: yyxh) |
| HRNetV2-W48 | 60 classes | No | Yes | Yes | 48.3 | OneDrive / BaiduYun (Access Code: 9uf8) |
| HRNetV2-W48 + OCR | 60 classes | No | Yes | Yes | 50.1 | Github / BaiduYun (Access Code: gtkb) |
| HRNetV2-W48 (Paddle) | 59 classes | No | Yes | Yes | --- | --- |
| HRNetV2-W48 (Paddle) | 60 classes | No | Yes | Yes | --- | --- |
| HRNetV2-W48 + OCR (Paddle) | 59 classes | No | Yes | Yes | --- | --- |
| HRNetV2-W48 + OCR (Paddle) | 60 classes | No | Yes | Yes | --- | --- |
  4. Performance on the COCO-Stuff dataset. The models are trained and tested with an input size of 520x520. If multi-scale testing is used, we adopt scales 0.5, 0.75, 1.0, 1.25, 1.5, 1.75, 2.0 (the same as EncNet, DANet, etc.). OHEM refers to online hard example mining (sketched after the tables below).

| model | OHEM | Multi-scale | Flip | mIoU | Link |
| :-- | :-- | :-- | :-- | :-- | :-- |
| HRNetV2-W48 | Yes | No | No | 36.2 | Github / BaiduYun (Access Code: 92gw) |
| HRNetV2-W48 + OCR | Yes | No | No | 39.7 | Github / BaiduYun (Access Code: sjc4) |
| HRNetV2-W48 | Yes | Yes | Yes | 37.9 | Github / BaiduYun (Access Code: 92gw) |
| HRNetV2-W48 + OCR | Yes | Yes | Yes | 40.6 | Github / BaiduYun (Access Code: sjc4) |
| HRNetV2-W48 (Paddle) | Yes | No | No | --- | --- |
| HRNetV2-W48 + OCR (Paddle) | Yes | No | No | --- | --- |
| HRNetV2-W48 (Paddle) | Yes | Yes | Yes | --- | --- |
| HRNetV2-W48 + OCR (Paddle) | Yes | Yes | Yes | --- | --- |
  5. Performance on the ADE20K dataset. The models are trained and tested with an input size of 520x520. If multi-scale testing is used, we adopt scales 0.5, 0.75, 1.0, 1.25, 1.5, 1.75, 2.0 (the same as EncNet, DANet, etc.).

| model | OHEM | Multi-scale | Flip | mIoU | Link |
| :-- | :-- | :-- | :-- | :-- | :-- |
| HRNetV2-W48 | Yes | No | No | 43.1 | Github / BaiduYun (Access Code: f6xf) |
| HRNetV2-W48 + OCR | Yes | No | No | 44.5 | Github / BaiduYun (Access Code: peg4) |
| HRNetV2-W48 | Yes | Yes | Yes | 44.2 | Github / BaiduYun (Access Code: f6xf) |
| HRNetV2-W48 + OCR | Yes | Yes | Yes | 45.5 | Github / BaiduYun (Access Code: peg4) |
| HRNetV2-W48 (Paddle) | Yes | No | No | --- | --- |
| HRNetV2-W48 + OCR (Paddle) | Yes | No | No | --- | --- |
| HRNetV2-W48 (Paddle) | Yes | Yes | Yes | --- | --- |
| HRNetV2-W48 + OCR (Paddle) | Yes | Yes | Yes | --- | --- |
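Multi-scale and flip testing in the tables above averages class probabilities over rescaled and horizontally mirrored copies of the input. A minimal sketch of the aggregation, assuming a model that returns per-pixel logits (the repository's tools/test.py additionally handles cropping, padding and dataset specifics):

import torch
import torch.nn.functional as F

def multi_scale_flip_predict(model, image, scales=(0.5, 0.75, 1.0, 1.25, 1.5, 1.75)):
    # image: (B, 3, H, W); returns averaged per-pixel class probabilities.
    _, _, h, w = image.shape
    total = 0
    for s in scales:
        x = F.interpolate(image, scale_factor=s, mode='bilinear', align_corners=True)
        for flip in (False, True):
            inp = torch.flip(x, dims=[3]) if flip else x
            logits = model(inp)
            if flip:
                logits = torch.flip(logits, dims=[3])   # undo the mirroring
            logits = F.interpolate(logits, size=(h, w), mode='bilinear',
                                   align_corners=True)
            total = total + F.softmax(logits, dim=1)
    return total / (2 * len(scales))

OHEM (online hard example mining) restricts the pixel-wise cross-entropy loss to hard pixels, i.e. those where the predicted probability of the ground-truth class is low. A minimal sketch with illustrative thresholds (the repository's loss implementation has its own configuration):

import torch
import torch.nn.functional as F

def ohem_cross_entropy(logits, target, ignore_index=255, thresh=0.7, min_kept=100000):
    # logits: (B, C, H, W); target: (B, H, W) with ignore_index for unlabeled pixels.
    pixel_loss = F.cross_entropy(logits, target, ignore_index=ignore_index,
                                 reduction='none').flatten()
    prob = F.softmax(logits, dim=1)
    gt = target.clone()
    gt[gt == ignore_index] = 0
    gt_prob = prob.gather(1, gt.unsqueeze(1)).squeeze(1).flatten()
    valid = target.flatten() != ignore_index
    hard = valid & (gt_prob < thresh)   # pixels with low true-class confidence
    if hard.sum() >= min_kept:
        return pixel_loss[hard].mean()
    # Otherwise fall back to the min_kept hardest valid pixels.
    losses = pixel_loss[valid]
    k = min(min_kept, losses.numel())
    return losses.topk(k).values.mean()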

Quick start

Install

  1. For the LIP dataset, install PyTorch 0.4.1 following the official instructions. For Cityscapes and PASCAL-Context, we use PyTorch 1.1.0.
  2. git clone https://github.com/HRNet/HRNet-Semantic-Segmentation $SEG_ROOT
  3. Install dependencies: pip install -r requirements.txt

If you want to train and evaluate our models on PASCAL-Context, you need to install the Detail API:

pip install git+https://github.com/zhanghang1989/detail-api.git#subdirectory=PythonAPI

Data preparation

You need to download the Cityscapes, LIP and PASCAL-Context datasets (plus COCO-Stuff and ADE20K if you want to run on them).

Your directory tree should look like this:

$SEG_ROOT/data
├── cityscapes
│   ├── gtFine
│   │   ├── test
│   │   ├── train
│   │   └── val
│   └── leftImg8bit
│       ├── test
│       ├── train
│       └── val
├── lip
│   ├── TrainVal_images
│   │   ├── train_images
│   │   └── val_images
│   └── TrainVal_parsing_annotations
│       ├── train_segmentations
│       ├── train_segmentations_reversed
│       └── val_segmentations
├── pascal_ctx
│   ├── common
│   ├── PythonAPI
│   ├── res
│   └── VOCdevkit
│       └── VOC2010
├── cocostuff
│   ├── train
│   │   ├── image
│   │   └── label
│   └── val
│       ├── image
│       └── label
├── ade20k
│   ├── train
│   │   ├── image
│   │   └── label
│   └── val
│       ├── image
│       └── label
├── list
│   ├── cityscapes
│   │   ├── test.lst
│   │   ├── trainval.lst
│   │   └── val.lst
│   ├── lip
│   │   ├── testvalList.txt
│   │   ├── trainList.txt
│   │   └── valList.txt
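The list/ files pair each image with its annotation. As an illustration only — the exact line format here is an assumption, so verify against the list files shipped with the repository — a Cityscapes val.lst could be regenerated like this:

from pathlib import Path

# Hypothetical regeneration of the Cityscapes validation list.
# ASSUMPTION: one "image<TAB>label" pair per line, with paths relative
# to $SEG_ROOT/data; check the provided list files before relying on this.
root = Path('data/cityscapes')
with open('data/list/cityscapes/val.lst', 'w') as out:
    for img in sorted(root.glob('leftImg8bit/val/*/*_leftImg8bit.png')):
        rel = img.relative_to(root).as_posix()
        label = rel.replace('leftImg8bit', 'gtFine')              # swap directory and suffix
        label = label.replace('_gtFine.png', '_gtFine_labelIds.png')
        out.write(f"{rel}\t{label}\n")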

Train and Test

PyTorch Version Differences

Note that the codebase supports both PyTorch 0.4.1 and 1.1.0, and they use different commands for training. In what follows, we use $PY_CMD to denote the startup command.

# For PyTorch 0.4.1
PY_CMD="python"
# For PyTorch 1.1.0
PY_CMD="python -m torch.distributed.launch --nproc_per_node=4"

For example, when training on Cityscapes we use PyTorch 1.1.0, so the command

$PY_CMD tools/train.py --cfg experiments/cityscapes/seg_hrnet_ocr_w48_train_512x1024_sgd_lr1e-2_wd5e-4_bs_12_epoch484.yaml

expands to

python -m torch.distributed.launch --nproc_per_node=4 tools/train.py --cfg experiments/cityscapes/seg_hrnet_ocr_w48_train_512x1024_sgd_lr1e-2_wd5e-4_bs_12_epoch484.yaml

Training

Just specify the configuration file for tools/train.py.

For example, to train HRNet-W48 on Cityscapes with a batch size of 12 on 4 GPUs:

$PY_CMD tools/train.py --cfg experiments/cityscapes/seg_hrnet_w48_train_512x1024_sgd_lr1e-2_wd5e-4_bs_12_epoch484.yaml

Similarly, to train HRNet-W48 + OCR on Cityscapes with a batch size of 12 on 4 GPUs:

$PY_CMD tools/train.py --cfg experiments/cityscapes/seg_hrnet_ocr_w48_train_512x1024_sgd_lr1e-2_wd5e-4_bs_12_epoch484.yaml

Note that we only reproduce HRNet + OCR on the LIP dataset using PyTorch 0.4.1, so we recommend using PyTorch 0.4.1 if you want to train on LIP.

Testing

For example, evaluating HRNet+OCR on the Cityscapes validation set with multi-scale and flip testing:

python tools/test.py --cfg experiments/cityscapes/seg_hrnet_ocr_w48_train_512x1024_sgd_lr1e-2_wd5e-4_bs_12_epoch484.yaml \
                     TEST.MODEL_FILE hrnet_ocr_cs_8162_torch11.pth \
                     TEST.SCALE_LIST 0.5,0.75,1.0,1.25,1.5,1.75 \
                     TEST.FLIP_TEST True

Evaluating HRNet+OCR on the Cityscapes test set with multi-scale and flip testing:

python tools/test.py --cfg experiments/cityscapes/seg_hrnet_ocr_w48_train_512x1024_sgd_lr1e-2_wd5e-4_bs_12_epoch484.yaml \
                     DATASET.TEST_SET list/cityscapes/test.lst \
                     TEST.MODEL_FILE hrnet_ocr_trainval_cs_8227_torch11.pth \
                     TEST.SCALE_LIST 0.5,0.75,1.0,1.25,1.5,1.75 \
                     TEST.FLIP_TEST True

Evaluating HRNet+OCR on the PASCAL-Context validation set with multi-scale and flip testing:

python tools/test.py --cfg experiments/pascal_ctx/seg_hrnet_ocr_w48_cls59_520x520_sgd_lr1e-3_wd1e-4_bs_16_epoch200.yaml \
                     DATASET.TEST_SET testval \
                     TEST.MODEL_FILE hrnet_ocr_pascal_ctx_5618_torch11.pth \
                     TEST.SCALE_LIST 0.5,0.75,1.0,1.25,1.5,1.75,2.0 \
                     TEST.FLIP_TEST True

Evaluating HRNet+OCR on the LIP validation set with flip testing:

python tools/test.py --cfg experiments/lip/seg_hrnet_w48_473x473_sgd_lr7e-3_wd5e-4_bs_40_epoch150.yaml \
                     DATASET.TEST_SET list/lip/testvalList.txt \
                     TEST.MODEL_FILE hrnet_ocr_lip_5648_torch04.pth \
                     TEST.FLIP_TEST True \
                     TEST.NUM_SAMPLES 0

Evaluating HRNet+OCR on the COCO-Stuff validation set with multi-scale and flip testing:

python tools/test.py --cfg experiments/cocostuff/seg_hrnet_ocr_w48_520x520_ohem_sgd_lr1e-3_wd1e-4_bs_16_epoch110.yaml \
                     DATASET.TEST_SET list/cocostuff/testval.lst \
                     TEST.MODEL_FILE hrnet_ocr_cocostuff_3965_torch04.pth \
                     TEST.SCALE_LIST 0.5,0.75,1.0,1.25,1.5,1.75,2.0 \
                     TEST.MULTI_SCALE True TEST.FLIP_TEST True

Evaluating HRNet+OCR on the ADE20K validation set with multi-scale and flip testing:

python tools/test.py --cfg experiments/ade20k/seg_hrnet_ocr_w48_520x520_ohem_sgd_lr2e-2_wd1e-4_bs_16_epoch120.yaml \
                     DATASET.TEST_SET list/ade20k/testval.lst \
                     TEST.MODEL_FILE hrnet_ocr_ade20k_4451_torch04.pth \
                     TEST.SCALE_LIST 0.5,0.75,1.0,1.25,1.5,1.75,2.0 \
                     TEST.MULTI_SCALE True TEST.FLIP_TEST True

Other applications of HRNet

Code for pose estimation is available at https://github.com/leoxiaobin/deep-high-resolution-net.pytorch.

Citation

If you find this work or code helpful in your research, please cite:

@inproceedings{SunXLW19,
  title={Deep High-Resolution Representation Learning for Human Pose Estimation},
  author={Ke Sun and Bin Xiao and Dong Liu and Jingdong Wang},
  booktitle={CVPR},
  year={2019}
}

@article{WangSCJDZLMTWLX19,
  title={Deep High-Resolution Representation Learning for Visual Recognition},
  author={Jingdong Wang and Ke Sun and Tianheng Cheng and 
          Borui Jiang and Chaorui Deng and Yang Zhao and Dong Liu and Yadong Mu and 
          Mingkui Tan and Xinggang Wang and Wenyu Liu and Bin Xiao},
  journal={TPAMI},
  year={2019}
}

@inproceedings{YuanCW19,
  title={Object-Contextual Representations for Semantic Segmentation},
  author={Yuhui Yuan and Xilin Chen and Jingdong Wang},
  booktitle={ECCV},
  year={2020}
}

Reference

[1] Deep High-Resolution Representation Learning for Visual Recognition. Jingdong Wang, Ke Sun, Tianheng Cheng, Borui Jiang, Chaorui Deng, Yang Zhao, Dong Liu, Yadong Mu, Mingkui Tan, Xinggang Wang, Wenyu Liu, Bin Xiao. Accepted by TPAMI. download

[2] Object-Contextual Representations for Semantic Segmentation. Yuhui Yuan, Xilin Chen, Jingdong Wang. download

Acknowledgement

We adopt the sync-bn implemented by InplaceABN for the PyTorch 0.4.1 experiments and the official sync-bn provided by PyTorch for the PyTorch 1.1.0 experiments.

We adopt the data preprocessing for the PASCAL-Context dataset implemented by the PASCAL Detail API.
