Revitalizing CNN Attention via Transformers in Self-Supervised Visual Representation Learning

Overview

This repository is the official implementation of CARE.

Updates

  • (09/10/2021) Our paper was accepted at NeurIPS 2021.

Requirements

To install requirements:

conda create -n care python=3.6
conda install pytorch==1.7.1 torchvision==0.8.2 torchaudio==0.7.2 cudatoolkit=10.1 -c pytorch
pip install tensorboard
pip install ipdb
pip install einops
pip install loguru
pip install pyarrow==3.0.0
pip install tqdm

πŸ“‹ PyTorch >= 1.6 is needed for running the code.
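To quickly verify the environment, this minimal check can be run (it assumes a CUDA-capable GPU with a driver matching cudatoolkit 10.1):

python -c "import torch; print(torch.__version__, torch.cuda.is_available())"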

Data Preparation

Prepare the ImageNet data as {data_path}/train.lmdb and {data_path}/val.lmdb.

Replace the original data path in care/data/dataset_lmdb (Line 7 and Line 40) with your new {data_path}.

πŸ“‹ Note that we use LMDB files to speed up data processing.
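For reference, below is a minimal sketch of packing an ImageFolder-style dataset into an LMDB file. The key layout and serialization (pyarrow + lmdb, integer keys plus a __len__ entry) are assumptions for illustration and may differ from what care/data/dataset_lmdb actually expects; it also assumes the lmdb package is installed. Treat it as a starting point, not the repository's own conversion script.

import lmdb
import pyarrow as pa
from torchvision.datasets import ImageFolder

def folder_to_lmdb(img_dir, lmdb_path, write_frequency=5000):
    # load raw JPEG bytes instead of decoded images to keep the DB compact
    dataset = ImageFolder(img_dir, loader=lambda p: open(p, "rb").read())
    db = lmdb.open(lmdb_path, subdir=False, map_size=1 << 40,
                   meminit=False, map_async=True)
    txn = db.begin(write=True)
    for idx, (raw_bytes, label) in enumerate(dataset):
        # serialize (image bytes, class id); pa.serialize is deprecated but
        # still present in the pinned pyarrow==3.0.0
        txn.put(str(idx).encode("ascii"),
                pa.serialize((raw_bytes, label)).to_buffer())
        if (idx + 1) % write_frequency == 0:  # commit periodically to bound memory
            txn.commit()
            txn = db.begin(write=True)
    txn.put(b"__len__", pa.serialize(len(dataset)).to_buffer())  # record dataset size
    txn.commit()
    db.sync()
    db.close()

# e.g., folder_to_lmdb("/path/to/imagenet/train", "{data_path}/train.lmdb")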

Training

Before training the ResNet-50 (100-epoch) model from the paper, run these commands first to add the repository to your PYTHONPATH:

export PYTHONPATH=$PYTHONPATH:{your_code_path}/care/
export PYTHONPATH=$PYTHONPATH:{your_code_path}/care/care/

Then run the training code via:

bash run_train.sh      #(The training script trains CARE with 8 GPUs)
bash single_gpu_train.sh    #(We also provide a script for training CARE with a single GPU)

πŸ“‹ The training script performs unsupervised pre-training of a ResNet-50 model on ImageNet on an 8-GPU machine:

  1. Use -b to set the batch size, e.g., -b 128.
  2. Use -d to set the GPU ids used for training, e.g., -d 0-7.
  3. Use --log_path to set the main folder for saving experimental results.
  4. Use --experiment-name to set the subfolder for saving training outputs.
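Putting these options together, a typical launch might look like the following (the log path and experiment name are placeholder examples):

bash run_train.sh -b 128 -d 0-7 --log_path ./results --experiment-name care_res50_100ep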

The codebase also supports training other backbones (e.g., ResNet-101 and ResNet-152) with different training schedules (e.g., 200, 400, and 800 epochs).

Evaluation

Before starting the evaluation, run these commands first to add the repository to your PYTHONPATH:

export PYTHONPATH=$PYTHONPATH:{your_code_path}/care/
export PYTHONPATH=$PYTHONPATH:{your_code_path}/care/care/

Then, to evaluate the pre-trained model (e.g., ResNet50-100epoch) on ImageNet, run:

bash run_val.sh      #(The evaluation script evaluates CARE with 8 GPUs)
bash debug_val.sh    #(We also provide a script for evaluating CARE with a single GPU)

πŸ“‹ The evaluation script performs supervised linear evaluation of a ResNet-50 model on ImageNet on an 8-GPU machine:

  1. Use -b to set the batch size, e.g., -b 128.
  2. Use -d to set the GPU ids used for evaluation, e.g., -d 0-7.
  3. Modify --log_path according to your own configuration.
  4. Modify --experiment-name according to your own configuration.
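For example, a linear-evaluation run could be launched as follows (again with a placeholder log path and experiment name):

bash run_val.sh -b 128 -d 0-7 --log_path ./results --experiment-name care_res50_100ep_linear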

Pre-trained Models

We provide some pre-trained models in the [shared folder]. Here are some examples:

  • [ResNet-50 100epoch]: ResNet-50 trained on ImageNet for 100 epochs.
  • [ResNet-50 200epoch]: ResNet-50 trained on ImageNet for 200 epochs.
  • [ResNet-50 400epoch]: ResNet-50 trained on ImageNet for 400 epochs.

More models are provided in the Model Zoo section below.

πŸ“‹ We will provide more pretrained models in the future.
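Below is a minimal sketch of loading a downloaded checkpoint into a torchvision ResNet-50 backbone for downstream use. The file name and the checkpoint key layout (a "state_dict" entry with an "encoder." prefix) are assumptions and may not match the actual CARE checkpoints; inspect the file with torch.load first.

import torch
from torchvision.models import resnet50

model = resnet50()
ckpt = torch.load("care_res50_100ep.pth", map_location="cpu")
# assumed layout: weights stored under "state_dict" with an "encoder." prefix
state = ckpt.get("state_dict", ckpt)
state = {k.replace("encoder.", "", 1): v for k, v in state.items()}
# strict=False tolerates projector/predictor heads that the backbone lacks
missing, unexpected = model.load_state_dict(state, strict=False)
print("missing:", len(missing), "unexpected:", len(unexpected))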

Model Zoo

Our models achieve the following performance.

Self-supervised learning on image classification (ImageNet linear evaluation):

| Method | Backbone | Epochs | Top-1 | Top-5 | Pretrained model | Linear evaluation model |
| ------ | -------- | ------ | ----- | ----- | ---------------- | ----------------------- |
| CARE | ResNet50 | 100 | 72.02% | 90.02% | [pretrained] (wip) | [linear_model] (wip) |
| CARE | ResNet50 | 200 | 73.78% | 91.50% | [pretrained] (wip) | [linear_model] (wip) |
| CARE | ResNet50 | 400 | 74.68% | 91.97% | [pretrained] (wip) | [linear_model] (wip) |
| CARE | ResNet50 | 800 | 75.56% | 92.32% | [pretrained] (wip) | [linear_model] (wip) |
| CARE | ResNet50(2x) | 100 | 73.51% | 91.66% | [pretrained] (wip) | [linear_model] (wip) |
| CARE | ResNet50(2x) | 200 | 75.00% | 92.22% | [pretrained] (wip) | [linear_model] (wip) |
| CARE | ResNet50(2x) | 400 | 76.48% | 92.99% | [pretrained] (wip) | [linear_model] (wip) |
| CARE | ResNet50(2x) | 800 | 77.04% | 93.22% | [pretrained] (wip) | [linear_model] (wip) |
| CARE | ResNet101 | 100 | 73.54% | 91.63% | [pretrained] (wip) | [linear_model] (wip) |
| CARE | ResNet101 | 200 | 75.89% | 92.70% | [pretrained] (wip) | [linear_model] (wip) |
| CARE | ResNet101 | 400 | 76.85% | 93.31% | [pretrained] (wip) | [linear_model] (wip) |
| CARE | ResNet101 | 800 | 77.23% | 93.52% | [pretrained] (wip) | [linear_model] (wip) |
| CARE | ResNet152 | 100 | 74.59% | 92.09% | [pretrained] (wip) | [linear_model] (wip) |
| CARE | ResNet152 | 200 | 76.58% | 93.63% | [pretrained] (wip) | [linear_model] (wip) |
| CARE | ResNet152 | 400 | 77.40% | 93.63% | [pretrained] (wip) | [linear_model] (wip) |
| CARE | ResNet152 | 800 | 78.11% | 93.81% | [pretrained] (wip) | [linear_model] (wip) |

Transfer learning to object detection and instance segmentation:

COCO det

| Method | Backbone | Epochs | AP_bb | AP_50 | AP_75 | Pretrained model | Det/seg model |
| ------ | -------- | ------ | ----- | ----- | ----- | ---------------- | ------------- |
| CARE | ResNet50 | 200 | 39.4 | 59.2 | 42.6 | [pretrained] (wip) | [model] (wip) |
| CARE | ResNet50 | 400 | 39.6 | 59.4 | 42.9 | [pretrained] (wip) | [model] (wip) |
| CARE | ResNet50-FPN | 200 | 39.5 | 60.2 | 43.1 | [pretrained] (wip) | [model] (wip) |
| CARE | ResNet50-FPN | 400 | 39.8 | 60.5 | 43.5 | [pretrained] (wip) | [model] (wip) |

COCO instance seg

| Method | Backbone | Epochs | AP_mk | AP_50 | AP_75 | Pretrained model | Det/seg model |
| ------ | -------- | ------ | ----- | ----- | ----- | ---------------- | ------------- |
| CARE | ResNet50 | 200 | 34.6 | 56.1 | 36.8 | [pretrained] (wip) | [model] (wip) |
| CARE | ResNet50 | 400 | 34.7 | 56.1 | 36.9 | [pretrained] (wip) | [model] (wip) |
| CARE | ResNet50-FPN | 200 | 35.9 | 57.2 | 38.5 | [pretrained] (wip) | [model] (wip) |
| CARE | ResNet50-FPN | 400 | 36.2 | 57.4 | 38.8 | [pretrained] (wip) | [model] (wip) |

VOC07+12 det

| Method | Backbone | Epochs | AP_bb | AP_50 | AP_75 | Pretrained model | Det/seg model |
| ------ | -------- | ------ | ----- | ----- | ----- | ---------------- | ------------- |
| CARE | ResNet50 | 200 | 57.7 | 83.0 | 64.5 | [pretrained] (wip) | [model] (wip) |
| CARE | ResNet50 | 400 | 57.9 | 83.0 | 64.7 | [pretrained] (wip) | [model] (wip) |

πŸ“‹ More results are provided in the paper.

Contributing

πŸ“‹ WIP

Owner
ChongjianGE
🎯 PhD in Computer Vision β˜‘οΈ MSc & BEng in Electrical Engineering