Spatial Contrastive Learning for Few-Shot Classification (SCL)

Paper 📃

This repo contains the official implementation of Spatial Contrastive Learning for Few-Shot Classification (SCL), which presents a novel contrastive learning method applied to few-shot image classification in order to learn more general-purpose embeddings and facilitate test-time adaptation to novel visual categories.

Highlights 🔥

(1) Contrastive Learning for Few-Shot Classification.
We explore contrastive learning as an auxiliary pre-training objective to learn more transferable features and facilitate test-time adaptation for few-shot classification.

(2) Spatial Contrastive Learning (SCL).
We propose a novel Spatial Contrastive (SC) loss that promotes the encoding of relevant spatial information into the learned representations and further encourages class-independent discriminative patterns.

(3) Contrastive Distillation for Few-Shot Classification.
We introduce a novel contrastive distillation objective to reduce the compactness of the features in the embedding space and provide additional refinement of the representations.

Requirements 🔧

This repo was tested with CentOS 7.7.1908, Python 3.7.7, PyTorch 1.6.0, and CUDA 10.2. However, we expect that the provided code is compatible with both older and newer versions.

The required packages are PyTorch and torchvision, together with PIL and scikit-learn for data preprocessing and evaluation, tqdm for showing the training progress, and some additional modules. To set up the necessary modules, simply run:

pip install -r requirements.txt

Datasets 💽

Standard Few-shot Setting

For the standard few-shot experiments, we used the ImageNet derivatives miniImageNet and tieredImageNet, in addition to the CIFAR-100 derivatives FC100 and CIFAR-FS. These datasets were preprocessed by the MetaOptNet repo, renamed and re-uploaded by RFS, and can be downloaded from here: [DropBox]

After downloading all of the datasets, place them in the same folder, which we refer to as DATA_PATH, with each dataset in its own subfolder, e.g., DATA_PATH/FC100. Then, during training, set the training argument data_root to DATA_PATH.
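
The expected layout is then along these lines (the exact subfolder names should match the downloaded archives), with one subfolder per dataset:

DATA_PATH/miniImageNet
DATA_PATH/tieredImageNet
DATA_PATH/FC100
DATA_PATH/CIFAR-FS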

Cross-domain Few-shot Setting

In the cross-domain setting, we train on miniImageNet but test on a different dataset. Specifically, we consider 4 datasets: cub, cars, places, and plantae. All of the datasets can be downloaded as follows:

cd dataset/download
python download.py DATASET_NAME DATA_PATH

where DATASET_NAME refers to one of the 4 datasets (cub, cars, places, and plantae) and DATA_PATH refers to the path where the data will be downloaded and saved, which can be the same path as for the standard datasets above.
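
For example, to download the cub dataset into DATA_PATH:

python download.py cub DATA_PATH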

Running

All of the commands necessary to reproduce the results of the paper can be found in scripts/run.sh.

In general, to use the proposed method for few-shot classification, there is a two-stage approach to follow: (1) train the model on the merged meta-training set using train_contrastive.py, then (2) evaluate the pre-trained embedding model on the meta-testing tasks using eval_fewshot.py. Note that we can also apply an optional distillation step after the first pre-training step using train_distillation.py; see the sketch below.
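
For reference, the overall pipeline looks roughly as follows. The exact arguments and hyperparameters for each dataset and backbone are listed in scripts/run.sh, so the flags shown here (e.g., data_root) are only illustrative:

# (1) pre-training on the merged meta-training set
python train_contrastive.py --data_root DATA_PATH ...

# optional contrastive distillation step
python train_distillation.py --data_root DATA_PATH ...

# (2) few-shot evaluation on the meta-testing tasks
python eval_fewshot.py --data_root DATA_PATH ...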

Other Use Cases

The proposed SCL method is not specific to few-shot classification and can also be used for standard supervised or self-supervised training for image classification. For instance, this can be done as follows:

import torch

from losses import ContrastiveLoss
from models.attention import AttentionSimilarity

# attention module used to compute the similarities between spatial features
attention_module = AttentionSimilarity(hidden_size=128)  # hidden_size depends on the encoder
contrast_criterion = ContrastiveLoss(temperature=10)  # the inverse temperature is used (i.e., 0.1)

# ... create the encoder, dataloaders, optimizer, etc. ...

# apply some augmentations to obtain two views of each input
aug_inputs1, aug_inputs2 = augment(inputs)
aug_inputs = torch.cat([aug_inputs1, aug_inputs2], dim=0)

# forward pass through the encoder
features = encoder(aug_inputs)

# supervised case: positives share the same class label
loss_contrast = contrast_criterion(features, attention=attention_module, labels=labels)

# unsupervised case: positives are the two augmented views of the same image
loss_contrast = contrast_criterion(features, attention=attention_module, labels=None)

# ... backpropagate and update the encoder ...
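
As a usage note, the contrastive term is typically combined with a standard classification loss during supervised training. The following is a minimal sketch of such a training step, assuming the encoder returns spatial feature maps of shape (2B, C, H, W) and reusing the placeholder names above (augment, encoder, attention_module, contrast_criterion), plus a hypothetical linear head classifier over globally pooled features and a weighting factor lambda_sc; none of these names are prescribed by this repo:

import torch
import torch.nn.functional as F

for inputs, labels in train_loader:
    # two augmented views of each image, concatenated along the batch dimension
    aug_inputs1, aug_inputs2 = augment(inputs)
    aug_inputs = torch.cat([aug_inputs1, aug_inputs2], dim=0)
    labels_rep = torch.cat([labels, labels], dim=0)

    # spatial feature maps of shape (2B, C, H, W)
    features = encoder(aug_inputs)

    # cross-entropy on globally average-pooled features (hypothetical classifier head)
    logits = classifier(features.mean(dim=[2, 3]))
    loss_ce = F.cross_entropy(logits, labels_rep)

    # supervised spatial contrastive term
    loss_sc = contrast_criterion(features, attention=attention_module, labels=labels_rep)

    loss = loss_ce + lambda_sc * loss_sc
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()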

Citation 📝

If you find this repo useful for your research, please consider citing the paper as follows:

@article{ouali2020spatial,
  title={Spatial Contrastive Learning for Few-Shot Classification},
  author={Ouali, Yassine and Hudelot, C{\'e}line and Tami, Myriam},
  journal={arXiv preprint arXiv:2012.13831},
  year={2020}
}

For any questions, please contact Yassine Ouali.

Acknowledgements

  • The code structure is based on RFS repo.
  • The cross-domain datasets code is based on CrossDomainFewShot repo.