Propagate Yourself: Exploring Pixel-Level Consistency for Unsupervised Visual Representation Learning, CVPR 2021

Overview

Propagate Yourself: Exploring Pixel-Level Consistency for Unsupervised Visual Representation Learning

By Zhenda Xie*, Yutong Lin*, Zheng Zhang, Yue Cao, Stephen Lin and Han Hu.

This repo is the official PyTorch implementation of "Propagate Yourself: Exploring Pixel-Level Consistency for Unsupervised Visual Representation Learning".

Introduction

PixPro (pixel-to-propagation) is an unsupervised visual representation learning approach built on pixel-level pretext tasks. The learned features transfer well to downstream dense prediction tasks such as object detection and semantic segmentation. With a ResNet-50 backbone, PixPro achieves the best transfer performance on Pascal VOC object detection (60.2 AP with a C4 head) and on COCO object detection (41.4 / 40.5 mAP with FPN / C4 heads).

An illustration of the proposed PixPro method.

Architecture of the PixContrast and PixPro methods.
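
To make the pixel-level pretext task concrete, below is a minimal PyTorch sketch of the two ingredients PixPro adds on top of a standard two-view pipeline: a pixel propagation module that smooths each pixel feature with similar pixels from the same view, and a consistency loss between the propagated features of one view and the momentum-encoder features of the other, computed over spatially matched pixel pairs. This is an illustrative sketch, not the code in this repository; the names, the similarity exponent gamma, and the construction of the positive-pair mask are assumptions.

# Illustrative sketch only -- not the implementation in this repository.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PixelPropagation(nn.Module):
    """Propagate each pixel feature using its similarity to other pixels of the same view."""

    def __init__(self, dim, gamma=2.0):
        super().__init__()
        self.gamma = gamma                       # sharpness of the similarity weights (assumed value)
        self.transform = nn.Conv2d(dim, dim, 1)  # pixel-wise transform g(.)

    def forward(self, x):                        # x: (B, C, H, W) pixel features
        b, c, h, w = x.shape
        feat = F.normalize(x.flatten(2), dim=1)       # (B, C, HW), unit-length per pixel
        sim = torch.bmm(feat.transpose(1, 2), feat)   # (B, HW, HW) pairwise cosine similarity
        weight = sim.clamp(min=0).pow(self.gamma)     # keep only similar pixels
        g = self.transform(x).flatten(2)              # (B, C, HW)
        y = torch.bmm(g, weight.transpose(1, 2))      # y_i = sum_j w(i, j) * g(x_j)
        return y.view(b, c, h, w)


def pixpro_consistency_loss(y1, x2, pos_mask):
    """y1: propagated features of view 1; x2: momentum-encoder features of view 2;
    pos_mask: (B, HW, HW) binary mask marking pixel pairs that are close in the original image."""
    y1 = F.normalize(y1.flatten(2), dim=1)
    x2 = F.normalize(x2.flatten(2), dim=1)
    cos = torch.bmm(y1.transpose(1, 2), x2)           # cosine similarity of every pixel pair
    return -(cos * pos_mask).sum() / pos_mask.sum().clamp(min=1)

The full method symmetrizes this loss across the two views, while PixContrast instead applies a pixel-level contrastive loss over the same positive and negative pixel pairs.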

Citation

@inproceedings{xie2020propagate,
  title={Propagate Yourself: Exploring Pixel-Level Consistency for Unsupervised Visual Representation Learning},
  author={Xie, Zhenda and Lin, Yutong and Zhang, Zheng and Cao, Yue and Lin, Stephen and Hu, Han},
  booktitle={IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2021}
}

Main Results

PixPro pre-trained models

| Epochs | Arch | Instance Branch | Download |
| ------ | --------- | --------------- | ---------------- |
| 100 | ResNet-50 | | script \| model |
| 400 | ResNet-50 | | script \| model |
| 100 | ResNet-50 | ✔️ | - |
| 400 | ResNet-50 | ✔️ | - |

Pascal VOC object detection

Faster-RCNN with C4

| Method | Epochs | Arch | AP | AP50 | AP75 | Download |
| ------ | ------ | --------- | ---- | ---- | ---- | --------------- |
| Scratch | - | ResNet-50 | 33.8 | 60.2 | 33.1 | - |
| Supervised | 100 | ResNet-50 | 53.5 | 81.3 | 58.8 | - |
| MoCo | 200 | ResNet-50 | 55.9 | 81.5 | 62.6 | - |
| SimCLR | 1000 | ResNet-50 | 56.3 | 81.9 | 62.5 | - |
| MoCo v2 | 800 | ResNet-50 | 57.6 | 82.7 | 64.4 | - |
| InfoMin | 200 | ResNet-50 | 57.6 | 82.7 | 64.6 | - |
| InfoMin | 800 | ResNet-50 | 57.5 | 82.5 | 64.0 | - |
| PixPro (ours) | 100 | ResNet-50 | 58.8 | 83.0 | 66.5 | config \| model |
| PixPro (ours) | 400 | ResNet-50 | 60.2 | 83.8 | 67.7 | config \| model |

COCO object detection

Mask-RCNN with FPN

| Method | Epochs | Arch | Schedule | bbox AP | mask AP | Download |
| ------ | ------ | --------- | -------- | ------- | ------- | --------------- |
| Scratch | - | ResNet-50 | 1x | 32.8 | 29.9 | - |
| Supervised | 100 | ResNet-50 | 1x | 39.7 | 35.9 | - |
| MoCo | 200 | ResNet-50 | 1x | 39.4 | 35.6 | - |
| SimCLR | 1000 | ResNet-50 | 1x | 39.8 | 35.9 | - |
| MoCo v2 | 800 | ResNet-50 | 1x | 40.4 | 36.4 | - |
| InfoMin | 200 | ResNet-50 | 1x | 40.6 | 36.7 | - |
| InfoMin | 800 | ResNet-50 | 1x | 40.4 | 36.6 | - |
| PixPro (ours) | 100 | ResNet-50 | 1x | 40.8 | 36.8 | config \| model |
| PixPro (ours) | 100* | ResNet-50 | 1x | 41.3 | 37.1 | - |
| PixPro (ours) | 400* | ResNet-50 | 1x | 41.4 | 37.4 | - |

* indicates pre-training with the instance branch.

Mask-RCNN with C4

| Method | Epochs | Arch | Schedule | bbox AP | mask AP | Download |
| ------ | ------ | --------- | -------- | ------- | ------- | --------------- |
| Scratch | - | ResNet-50 | 1x | 26.4 | 29.3 | - |
| Supervised | 100 | ResNet-50 | 1x | 38.2 | 33.3 | - |
| MoCo | 200 | ResNet-50 | 1x | 38.5 | 33.6 | - |
| SimCLR | 1000 | ResNet-50 | 1x | 38.4 | 33.6 | - |
| MoCo v2 | 800 | ResNet-50 | 1x | 39.5 | 34.5 | - |
| InfoMin | 200 | ResNet-50 | 1x | 39.0 | 34.1 | - |
| InfoMin | 800 | ResNet-50 | 1x | 38.8 | 33.8 | - |
| PixPro (ours) | 100 | ResNet-50 | 1x | 40.0 | 34.8 | config \| model |
| PixPro (ours) | 400 | ResNet-50 | 1x | 40.5 | 35.3 | config \| model |

Getting started

Requirements

We have not yet verified compatibility with other versions of these packages, so we recommend the configuration below.

  • Python 3.7
  • PyTorch == 1.4.0
  • Torchvision == 0.5.0
  • CUDA == 10.1
  • Other dependencies listed in requirements.txt

Installation

We recommend setting up the experimental environment with conda.

# Create environment
conda create -n PixPro python=3.7 -y
conda activate PixPro

# Install PyTorch & Torchvision
conda install pytorch=1.4.0 cudatoolkit=10.1 torchvision -c pytorch -y

# Install apex
git clone https://github.com/NVIDIA/apex
cd apex
pip install -v --disable-pip-version-check --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" ./
cd ..

# Clone repo
git clone https://github.com/zdaxie/PixPro ./PixPro
cd ./PixPro

# Create soft link for data
mkdir data
ln -s ${ImageNet-Path} ./data/imagenet

# Install other requirements
pip install -r requirements.txt
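
As an optional check that the environment resolves to the pinned versions above, a short snippet like the following can be run (a sketch; it only verifies imports and CUDA visibility):

import torch
import torchvision

print(torch.__version__)          # expected: 1.4.0
print(torchvision.__version__)    # expected: 0.5.0
print(torch.version.cuda)         # expected: 10.1
print(torch.cuda.is_available())  # should be True on a GPU machine

try:
    import apex  # noqa: F401
    print("apex import OK")
except ImportError:
    print("apex is missing; re-run the apex installation step above")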

Pretrain with PixPro

# Train with PixPro base for 100 epochs.
./tools/pixpro_base_r50_100ep.sh
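
If you want to reuse a pre-trained checkpoint outside detectron2 (for example, to initialize a plain torchvision ResNet-50), a sketch along the following lines can work. The checkpoint layout assumed here (a 'model' entry with 'module.encoder.'-prefixed keys) is a guess; inspect your checkpoint and adjust the prefix, or use the detectron2 conversion script in the next section.

# Sketch only: the checkpoint key layout below is an assumption, not the repo's documented format.
import torch
import torchvision

ckpt = torch.load("${Output-Checkpoint(.pth)}", map_location="cpu")
state = ckpt.get("model", ckpt)   # some checkpoints nest the weights under a 'model' key

prefix = "module.encoder."        # assumed wrapper prefix for the online encoder
backbone_state = {k[len(prefix):]: v for k, v in state.items() if k.startswith(prefix)}

resnet = torchvision.models.resnet50()
result = resnet.load_state_dict(backbone_state, strict=False)
print("missing keys:", len(result.missing_keys))
print("unexpected keys:", len(result.unexpected_keys))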

Transfer to Pascal VOC or COCO object detection

# Convert a pre-trained PixPro model to detectron2's format
cd transfer/detection
python convert_pretrain_to_d2.py ${Input-Checkpoint(.pth)} ./output.pkl  

# Install Detectron2
python -m pip install detectron2==0.2.1 -f \
  https://dl.fbaipublicfiles.com/detectron2/wheels/cu101/torch1.4/index.html

# Create soft link for data
mkdir datasets
ln -s ${Pascal-VOC-Path}/VOC2007 ./datasets/VOC2007
ln -s ${Pascal-VOC-Path}/VOC2012 ./datasets/VOC2012
ln -s ${COCO-Path} ./datasets/coco

# Train detector with pre-trained PixPro model
# 1. Train Faster-RCNN with Pascal-VOC
python train_net.py --config-file configs/Pascal_VOC_R_50_C4_24k_PixPro.yaml --num-gpus 8 MODEL.WEIGHTS ./output.pkl
# 2. Train Mask-RCNN-FPN with COCO
python train_net.py --config-file configs/COCO_R_50_FPN_1x_PixPro.yaml --num-gpus 8 MODEL.WEIGHTS ./output.pkl
# 3. Train Mask-RCNN-C4 with COCO
python train_net.py --config-file configs/COCO_R_50_C4_1x_PixPro.yaml --num-gpus 8 MODEL.WEIGHTS ./output.pkl

# Test detector with provided fine-tuned model
python train_net.py --config-file configs/Pascal_VOC_R_50_C4_24k_PixPro.yaml --num-gpus 8 --eval-only \
  MODEL.WEIGHTS ./pixpro_base_r50_100ep_voc_md5_ec2dfa63.pth
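
For a quick qualitative check of a fine-tuned detector, detectron2's Python API can also be used directly. The snippet below is a sketch: the input image path is a placeholder, and it assumes the config file contains only standard detectron2 keys.

# Sketch: run the fine-tuned VOC detector on a single image with detectron2's Python API.
import cv2
from detectron2.config import get_cfg
from detectron2.engine import DefaultPredictor

cfg = get_cfg()
cfg.merge_from_file("configs/Pascal_VOC_R_50_C4_24k_PixPro.yaml")
cfg.MODEL.WEIGHTS = "./pixpro_base_r50_100ep_voc_md5_ec2dfa63.pth"
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.5   # confidence threshold for visual inspection

predictor = DefaultPredictor(cfg)
image = cv2.imread("demo.jpg")                # hypothetical input image
outputs = predictor(image)
print(outputs["instances"].pred_classes)
print(outputs["instances"].pred_boxes)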

More models and logs will be released!

Acknowledgement

Our testbed builds upon several existing publicly available codebases, which we have modified and integrated into this project.

Contributing to the project

Any pull requests or issues are welcome.
