[AAAI-2022] Official implementations of MCL: Mutual Contrastive Learning for Visual Representation Learning

Overview

Mutual Contrastive Learning for Visual Representation Learning

This project provides the source code for our AAAI-2022 paper Mutual Contrastive Learning for Visual Representation Learning (MCL).

Installation

Requirements

Ubuntu 18.04 LTS

Python 3.8 (Anaconda is recommended)

CUDA 11.1

PyTorch 1.7.0

NCCL for CUDA 11.1
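
A quick way to check that the installed environment matches the versions above (a sanity-check sketch, not part of the repository):

import torch
import torch.distributed as dist

print(torch.__version__)          # expect 1.7.0
print(torch.version.cuda)         # expect 11.1
print(torch.cuda.is_available())  # True once the GPU driver is set up
print(dist.is_nccl_available())   # True if the NCCL backend was built in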

Supervised Learning on the CIFAR-100 dataset

Dataset

CIFAR-100: download

Unzip the archive to the ./data folder
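
Alternatively, torchvision can download and extract CIFAR-100 into ./data for you (a sketch; it produces the standard cifar-100-python layout, which we assume the training scripts read):

from torchvision.datasets import CIFAR100

# Downloads cifar-100-python.tar.gz into ./data and extracts it.
CIFAR100(root='./data', train=True, download=True)
CIFAR100(root='./data', train=False, download=True)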

Training two baseline networks

python main_cifar.py --arch resnet32 --number-net 2

More commands for training various architectures can be found in scripts/train_cifar_baseline.sh

Training two networks by MCL

python main_cifar.py --arch resnet32  --number-net 2 \
    --alpha 0.1 --gamma 1. --beta 0.1 --lam 1. 

More commands for training various architectures can be found in scripts/train_cifar_mcl.sh
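
The flags --alpha, --gamma, --beta, and --lam set the weights of MCL's contrastive and soft-contrastive loss terms. The VCL and ICL terms in the paper are InfoNCE-style losses; the sketch below shows that generic form (illustrative only, not the repository's code). MCL builds its terms from it using embeddings drawn both within and across the cohort of networks.

import torch
import torch.nn.functional as F

def info_nce(anchor, positive, negatives, tau=0.07):
    # Generic InfoNCE: pull the positive pair together, push the K negatives away.
    anchor = F.normalize(anchor, dim=1)
    positive = F.normalize(positive, dim=1)
    negatives = F.normalize(negatives, dim=1)
    l_pos = (anchor * positive).sum(dim=1, keepdim=True)  # (N, 1)
    l_neg = anchor @ negatives.t()                        # (N, K)
    logits = torch.cat([l_pos, l_neg], dim=1) / tau
    labels = torch.zeros(anchor.size(0), dtype=torch.long, device=anchor.device)
    return F.cross_entropy(logits, labels)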

Results of MCL on CIFAR-100

All experiments are performed on a single NVIDIA RTX 3090 GPU (24 GB); we report top-1 accuracy (%) as mean±std over three runs.

Network | Baseline | MCL(×2) | MCL(×4)
ResNet-32 | 70.91±0.14 | 72.96±0.28 | 74.04±0.07
ResNet-56 | 73.15±0.23 | 74.48±0.23 | 75.74±0.16
ResNet-110 | 75.29±0.16 | 77.12±0.20 | 78.82±0.14
WRN-16-2 | 72.55±0.24 | 74.56±0.11 | 75.79±0.07
WRN-40-2 | 76.89±0.29 | 77.51±0.42 | 78.84±0.22
HCGNet-A1 | 77.42±0.16 | 78.62±0.26 | 79.50±0.15
ShuffleNetV2 0.5× | 67.39±0.35 | 69.55±0.22 | 70.92±0.28
ShuffleNetV2 1× | 70.93±0.24 | 73.26±0.18 | 75.18±0.25

Training multiple networks by MCL combined with logit distillation

python main_cifar.py --arch WRN_16_2  --number-net 4 \
    --alpha 0.1 --gamma 1. --beta 0.1 --lam 1. \
    --logit-distill

More commands for training various architectures can be found in scripts/train_cifar_mcl_logit.sh
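
The --logit-distill flag adds a logit-level knowledge-distillation term on top of MCL. A classic form of such a loss is Hinton-style KD between softened class distributions (a sketch; the temperature and how peer networks are aggregated into a teacher are assumptions here):

import torch.nn.functional as F

def logit_kd(student_logits, teacher_logits, T=3.0):
    # KL divergence between temperature-softened distributions,
    # scaled by T^2 to keep gradients comparable across temperatures.
    p_t = F.softmax(teacher_logits / T, dim=1)
    log_p_s = F.log_softmax(student_logits / T, dim=1)
    return F.kl_div(log_p_s, p_t, reduction='batchmean') * (T * T)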

Results of MCL combined with logit distillation on CIFAR-100

All experiments are performed on a single NVIDIA RTX 3090 GPU (24 GB); we report top-1 accuracy (%) as mean±std over three runs.

Network | Baseline | MCL(×4)+Logit KD
WRN-16-2 | 72.55±0.24 | 76.34±0.22
WRN-40-2 | 76.89±0.29 | 80.02±0.45
WRN-28-4 | 79.17±0.29 | 81.68±0.31
ShuffleNetV2 1× | 70.93±0.24 | 77.02±0.32
HCGNet-A2 | 79.00±0.41 | 82.47±0.20

Supervised Learning on the ImageNet dataset

Dataset preparation

  • Download the ImageNet dataset to YOUR_IMAGENET_PATH and move validation images to labeled subfolders

  • Create a data subfolder and a symlink to the ImageNet dataset inside it

$ ln -s PATH_TO_YOUR_IMAGENET ./data/

Folder of ImageNet Dataset:

data/ImageNet
├── train
└── val
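
This is the standard torchvision ImageFolder layout; a quick sanity check that the symlink and subfolders are in place (a sketch):

from torchvision.datasets import ImageFolder

train_set = ImageFolder('./data/ImageNet/train')
val_set = ImageFolder('./data/ImageNet/val')
print(len(train_set.classes), len(train_set))  # expect 1000 classes, ~1.28M images
print(len(val_set))                            # expect 50000 images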

Training two networks by MCL

python main_imagenet.py --arch resnet18 --number-net 2 \
    --alpha 0.1 --gamma 1. --beta 0.1 --lam 1. 

More commands for training various architectures can be found in scripts/train_imagenet_mcl.sh

Results of MCL on ImageNet

All experiments are performed on a single NVIDIA Tesla V100 GPU (32 GB) with three runs; we report top-1 accuracy (%).

Network | Baseline | MCL(×2) | MCL(×4)
ResNet-18 | 69.76 | 70.32 | 70.77
ResNet-34 | 73.30 | 74.13 | 74.34

Training two networks by MCL combined with logit distillation

python main_imagenet.py --arch resnet18 --number-net 2 \
    --alpha 0.1 --gamma 1. --beta 0.1 --lam 1. \
    --logit-distill

More commands for training various architectures can be found in scripts/train_imagenet_mcl.sh

Results of MCL combined with logit distillation on ImageNet

All experiments are performed on a single NVIDIA Tesla V100 GPU (32 GB) with three runs; we report top-1 accuracy (%).

Network | Baseline | MCL(×4)+Logit KD
ResNet-18 | 69.76 | 70.82

Self-Supervised Learning on the ImageNet dataset

Apply MCL(×2) to MoCo

python main_moco_mcl.py \
  -a resnet18 \
  --lr 0.03 \
  --batch-size 256 \
  --number-net 2 \
  --dist-url 'tcp://localhost:10001' \
  --multiprocessing-distributed \
  --world-size 1 \
  --rank 0 \
  --gpu-ids 0,1,2,3,4,5,6,7 
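
main_moco_mcl.py applies MCL across a cohort of MoCo learners. At MoCo's core is a momentum (EMA) key encoder that trails the query encoder; the standard update looks like this (a sketch; m=0.999 is MoCo's default):

import torch
import torch.nn as nn

@torch.no_grad()
def momentum_update(encoder_q: nn.Module, encoder_k: nn.Module, m: float = 0.999):
    # The key encoder is an exponential moving average of the query encoder,
    # which keeps the dictionary of keys slowly evolving and consistent.
    for p_q, p_k in zip(encoder_q.parameters(), encoder_k.parameters()):
        p_k.data.mul_(m).add_(p_q.data, alpha=1 - m)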

Linear Classification

python main_lincls.py \
  -a resnet18 \
  --lr 30.0 \
  --batch-size 256 \
  --pretrained [your checkpoint path]/checkpoint_0199.pth.tar \
  --dist-url 'tcp://localhost:10001' \
  --multiprocessing-distributed \
  --world-size 1 \
  --rank 0 \
  --gpu-ids 0,1,2,3,4,5,6,7 
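
main_lincls.py follows MoCo's standard linear-evaluation protocol: the pretrained backbone is frozen and only a linear classifier is trained on top, which is why the unusually large --lr 30.0 is safe. The freezing step looks roughly like this (a sketch assuming a torchvision-style ResNet whose classifier head is named fc):

import torchvision.models as models

model = models.resnet18()
# Freeze everything except the final linear classifier.
for name, param in model.named_parameters():
    if not name.startswith('fc.'):
        param.requires_grad = False
# Re-initialize the classifier before training it on frozen features.
model.fc.weight.data.normal_(mean=0.0, std=0.01)
model.fc.bias.data.zero_()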

Results of applying MCL to MoCo on ImageNet

All experiments are performed on 8 NVIDIA RTX 3090 GPUs with three runs; we report linear-classification top-1 accuracy (%) as mean±std.

Network | Baseline | MCL(×2)
ResNet-18 | 47.45±0.11 | 48.04±0.13

Citation

@inproceedings{yang2022mcl,
  title={Mutual Contrastive Learning for Visual Representation Learning},
  author={Chuanguang Yang and Zhulin An and Linhang Cai and Yongjun Xu},
  booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
  year={2022}
}