One-shot Human Parsing

Overview

This is the official repository for our two papers:

  • Progressive One-shot Human Parsing (AAAI 2021)
  • End-to-end One-shot Human Parsing
Introduction:

In these two papers, we propose a new task named One-shot Human Parsing (OSHP). OSHP requires parsing humans in a query image into an open set of reference classes defined by a single reference example (i.e., a support image) at test time, regardless of whether those classes were annotated during training (base classes) or not (novel classes). The task aims to extend human parsing to a broader range of applications that need to parse flexible fashion/clothing classes not pre-defined in previous large-scale datasets.
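
For intuition, a single test-time episode can be sketched as follows. This is a minimal, hypothetical sketch: the oshp_episode helper, the model's conditioning interface, and the tensor shapes are assumptions for illustration, not this repository's actual API.

import torch

def oshp_episode(model, support_image, support_mask, query_image):
    # Any class present in the support mask (base or novel) becomes a
    # reference class for this episode.
    reference_classes = torch.unique(support_mask)
    # A one-shot parser conditions on the support pair and predicts
    # per-pixel logits over the reference classes for the query image.
    logits = model(query_image, support_image, support_mask)
    prediction = logits.argmax(dim=1)  # per-pixel reference-class indices
    return prediction, reference_classes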

Progressive One-shot Human Parsing (AAAI 2021) adopts a progressive training scheme that is separated into three stages.

End-to-end One-shot Human Parsing (journal version) is a one-stage, end-to-end training method that achieves both higher performance and higher FPS.


Main results:

You can find the well-trained K-way models together with their performance in the following table.

| EOPNet     | ATR-OS, K-way F1 | ATR-OS, K-way F2 | LIP-OS, K-way F1 | LIP-OS, K-way F2 | CIHP-OS, K-way F1 | CIHP-OS, K-way F2 |
| ---------- | ---------------- | ---------------- | ---------------- | ---------------- | ----------------- | ----------------- |
| Novel mIoU | 31.1             | 34.6             | 25.7             | 30.4             | 20.5              | 25.1              |
| Human mIoU | 61.9             | 63.3             | 43.0             | 45.7             | 49.1              | 45.5              |
| Model      | Model            | Coming Soon      | Model            | Model            | Model             | Model             |

The well-trained 1-way models and their performance are listed in the following table.

| EOPNet     | ATR-OS, 1-way F1 | ATR-OS, 1-way F2 | LIP-OS, 1-way F1 | LIP-OS, 1-way F2 | CIHP-OS, 1-way F1 | CIHP-OS, 1-way F2 |
| ---------- | ---------------- | ---------------- | ---------------- | ---------------- | ----------------- | ----------------- |
| Novel mIoU | 53.0             | 41.4             | 42.0             | 46.2             | 25.4              | 36.4              |
| Human mIoU | 68.2             | 69.5             | 57.0             | 58.0             | 53.8              | 55.4              |
| Model      | Coming Soon      | Coming Soon      | Coming Soon      | Coming Soon      | Coming Soon       | Coming Soon       |

Getting started:

Data preparation:

First, please download the ATR, LIP, and CIHP datasets from source. Then, use the following commands to link the data into our project folder. Please also remember to download the ATR flipped labels and the CIHP flipped labels.

# ATR dataset
$ ln -s YOUR_ATR_PATH/JPEGImages/* YOUR_PROJECT_ROOT/ATR_OS/trainval_images
$ ln -s YOUR_ATR_PATH/SegmentationClassAug/* YOUR_PROJECT_ROOT/ATR_OS/trainval_classes
$ ln -s YOUR_ATR_PATH/SegmentationClassAug_rev/* YOUR_PROJECT_ROOT/ATR_OS/Category_rev_ids


# LIP dataset
$ ln -s YOUR_LIP_PATH/TrainVal_images/TrainVal_images/train_images/* YOUR_PROJECT_ROOT/LIP_OS/trainval_images
$ ln -s YOUR_LIP_PATH/TrainVal_images/TrainVal_images/val_images/* YOUR_PROJECT_ROOT/LIP_OS/trainval_images
$ ln -s YOUR_LIP_PATH/TrainVal_parsing_annotations/TrainVal_parsing_annotations/train_segmentations/* YOUR_PROJECT_ROOT/LIP_OS/trainval_classes
$ ln -s YOUR_LIP_PATH/TrainVal_parsing_annotations/TrainVal_parsing_annotations/val_segmentations/* YOUR_PROJECT_ROOT/LIP_OS/trainval_classes
$ ln -s YOUR_LIP_PATH/Train_parsing_reversed_labels/TrainVal_parsing_annotations/* YOUR_PROJECT_ROOT/LIP_OS/Category_rev_ids
$ ln -s YOUR_LIP_PATH/val_segmentations_reversed/* YOUR_PROJECT_ROOT/LIP_OS/Category_rev_ids


# CIHP dataset
$ ln -s YOUR_CIHP_PATH/Training/Images/* YOUR_PROJECT_ROOT/CIHP_OS/trainval_images
$ ln -s YOUR_CIHP_PATH/Validation/Images/* YOUR_PROJECT_ROOT/CIHP_OS/trainval_images
$ ln -s YOUR_CIHP_PATH/Training/Category_ids/* YOUR_PROJECT_ROOT/CIHP_OS/trainval_classes
$ ln -s YOUR_CIHP_PATH/Validation/Category_ids/* YOUR_PROJECT_ROOT/CIHP_OS/trainval_classes
$ ln -s YOUR_CIHP_PATH/Category_rev_ids/* YOUR_PROJECT_ROOT/CIHP_OS/Category_rev_ids

Please also download our generated support .pkl files from source, which record each class's image IDs. You can also generate the support files on your own by controlling dtrain_dtest_split in oshp_loader.py; however, the resulting training and validation lists might differ from those used in our paper.
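
To see what a support file contains before training, you can unpickle it and inspect a few entries. A minimal sketch, assuming the file unpickles to a mapping from class to the image IDs containing that class (adjust if the actual structure differs):

import pickle

# Path follows the folder layout shown below.
with open("data/datasets/ATR_OS/support/meta_train_atr_supports.pkl", "rb") as f:
    supports = pickle.load(f)

# Print the first few classes and how many support images each one has.
for cls, image_ids in list(supports.items())[:5]:
    print(cls, len(image_ids), image_ids[:3])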

Finally, your data folder should look like this:

${PROJECT ROOT}
|-- data
|   |--datasets
|       |-- ATR_OS
|       |   |-- list
|       |   |   |-- meta_train_id.txt
|       |   |   `-- meta_test_id.txt
|       |   |-- support
|       |   |   |-- meta_train_atr_supports.pkl
|       |   |   `-- meta_test_atr_supports.pkl
|       |   |-- trainval_images
|       |   |   |-- 997-1.jpg
|       |   |   |-- 997-2.jpg
|       |   |   `-- ...
|       |   |-- trainval_classes
|       |   |   |-- 997-1.png
|       |   |   |-- 997-2.png
|       |   |   `-- ... 
|       |   `-- Category_rev_ids
|       |       |-- 997-1.png
|       |       |-- 997-2.png
|       |       `-- ... 
|       |-- LIP_OS
|       |   |-- list
|       |   |   |-- meta_train_id.txt
|       |   |   `-- meta_test_id.txt
|       |   |-- support
|       |   |   |-- meta_train_lip_supports.pkl
|       |   |   `-- meta_test_lip_supports.pkl
|       |   |-- trainval_images
|       |   |   |-- ...
|       |   |-- trainval_classes
|       |   |   |-- ... 
|       |   `-- Category_rev_ids
|       |       |-- ... 
|       `-- CIHP_OS
|           |-- list
|           |   |-- meta_train_id.txt
|           |   `-- meta_test_id.txt
|           |-- support
|           |   |-- meta_train_cihp_supports.pkl
|           |   `-- meta_test_cihp_supports.pkl
|           |-- trainval_images
|           |   |-- ...
|           |-- trainval_classes
|           |   |-- ... 
|           `-- Category_rev_ids
|               |-- ... 
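
After linking everything, a quick check can confirm the layout matches the tree above. A minimal sketch, assuming it is run from the project root and that images are .jpg and labels .png as in the ATR example:

from pathlib import Path

root = Path("data/datasets")
for ds in ("ATR_OS", "LIP_OS", "CIHP_OS"):
    # Every dataset should expose these five sub-folders.
    for sub in ("list", "support", "trainval_images",
                "trainval_classes", "Category_rev_ids"):
        assert (root / ds / sub).is_dir(), f"missing {ds}/{sub}"
    # Every image should have a matching class label and flipped label.
    images = {p.stem for p in (root / ds / "trainval_images").glob("*.jpg")}
    classes = {p.stem for p in (root / ds / "trainval_classes").glob("*.png")}
    flipped = {p.stem for p in (root / ds / "Category_rev_ids").glob("*.png")}
    print(ds, len(images), "images;",
          len(images - classes), "missing labels;",
          len(images - flipped), "missing flipped labels")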

Finally, please download the DeepLab V3+ model (pretrained on the COCO dataset) from source and put it into the data folder:

${PROJECT ROOT}
|-- data
|   |--pretrained_model
|       |--deeplab_v3plus_v3.pth
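
You can verify the download by loading the checkpoint and listing a few parameter names. A minimal sketch; whether the weights are stored flat or wrapped under a state_dict key is an assumption to check:

import torch

state = torch.load("data/pretrained_model/deeplab_v3plus_v3.pth", map_location="cpu")
weights = state.get("state_dict", state)  # some checkpoints wrap the weights
print(len(weights), "tensors, e.g.:", list(weights)[:3])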

Installation:

Please make sure your environment has Python >= 3.7.0 and PyTorch >= 1.1.0. PyTorch can be downloaded from source.
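
The snippet below is one way to check both requirements at once (a hedged sketch; the regex-based version parsing is just a convenience for strings like "1.10.2+cu113"):

import re
import sys
import torch

def parse(v):
    # Turn a version string such as "1.10.2+cu113" into a comparable tuple.
    return tuple(int(x) for x in re.findall(r"\d+", v)[:3])

assert sys.version_info >= (3, 7), "Python >= 3.7.0 is required"
assert parse(torch.__version__) >= (1, 1, 0), "PyTorch >= 1.1.0 is required"
print("Python", sys.version.split()[0], "| PyTorch", torch.__version__)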

Then, clone the repository and install the dependencies with the following commands:

git clone https://github.com/Charleshhy/One-shot-Human-Parsing.git
cd One-shot-Human-Parsing
pip install -r requirements.txt

Training:

To train EOPNet in End-to-end One-shot Human Parsing (journal version), run:

# OSHP kway on ATR-OS fold 1
bash scripts/atr_eop_kwf1.sh

Validation:

To evaluate EOPNet in End-to-end One-shot Human Parsing (journal version), run:

# OSHP kway on ATR-OS fold 1
bash scripts/evaluate_atr_eop_kwf1.sh

TODO:

  • Release training/validation code for POPNet
  • Release well-trained EOPNet 1-way models

Citation:

If you find our papers or this repository useful, please consider citing our papers:

@inproceedings{he2021progressive,
  title={Progressive One-shot Human Parsing},
  author={He, Haoyu and Zhang, Jing and Thuraisingham, Bhavani and Tao, Dacheng},
  booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
  year={2021}
}

@article{he2021end,
  title={End-to-end One-shot Human Parsing},
  author={He, Haoyu and Zhang, Jing and Zhuang, Bohan and Cai, Jianfei and Tao, Dacheng},
  journal={arXiv preprint arXiv:2105.01241},
  year={2021}
}

Acknowledgement:

This repository is developed mainly based on Graphonomy and Grapy-ML.
