Proxy Synthesis: Learning with Synthetic Classes for Deep Metric Learning

Official PyTorch implementation of "Proxy Synthesis: Learning with Synthetic Classes for Deep Metric Learning" (AAAI 2021)

Geonmo Gu*¹, Byungsoo Ko*¹, Han-Gyu Kim² (* Authors contributed equally.)

¹ NAVER/LINE Vision, ² NAVER Clova Speech

Overview

Proxy Synthesis

  • Proxy Synthesis (PS) is a novel regularizer for any softmax variant and proxy-based loss in deep metric learning.

How does it work?

  • Proxy Synthesis exploits synthetic classes and improves generalization by considering class relations and obtaining smooth decision boundaries.
  • Synthetic classes mimic unseen classes during the training phase, as illustrated in the figure below; a rough code sketch of the interpolation step follows this list.
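
The sketch below is an illustrative, simplified interpretation of that interpolation step, not the repository's actual loss code (treat the loss modules in this repo as authoritative). It assumes the mixing coefficient is drawn from Beta(ps_alpha, ps_alpha) and omits the ps_mu option, which controls how much synthesis is applied.

# Rough sketch of the Proxy Synthesis idea (illustrative only).
# Embedding/proxy pairs are mixed with a shared coefficient to form synthetic classes
# that stand in for unseen classes during training.
import torch

def synthesize(embeddings, proxies, labels, ps_alpha=0.4):
    lam = torch.distributions.Beta(ps_alpha, ps_alpha).sample().item()
    perm = torch.randperm(embeddings.size(0))  # random partners (ideally from other classes)
    syn_embeddings = lam * embeddings + (1.0 - lam) * embeddings[perm]
    syn_proxies = lam * proxies[labels] + (1.0 - lam) * proxies[labels[perm]]
    # the synthetic embedding/proxy pairs act as extra classes in the proxy-based loss
    return syn_embeddings, syn_proxies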

Experimental results

  • Proxy Synthesis improves performance for every evaluated loss and benchmark dataset.

Getting Started

Installation

  1. Clone the repository locally
$ git clone https://github.com/navervision/proxy-synthesis
  2. Create a conda virtual environment
$ conda create -n proxy_synthesis python=3.7 anaconda
$ conda activate proxy_synthesis
  3. Install PyTorch
$ conda install pytorch torchvision cudatoolkit=<YOUR_CUDA_VERSION> -c pytorch
  4. Install Faiss
$ conda install faiss-gpu cudatoolkit=<YOUR_CUDA_VERSION> -c pytorch
  5. Install the requirements
$ pip install -r requirements.txt
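
Optionally, you can confirm that the GPU builds of PyTorch and Faiss are both visible from the new environment. This check is not part of the repository; the calls used are standard PyTorch/Faiss API.

# Optional sanity check (not part of the repository)
$ python -c "import torch, faiss; print(torch.cuda.is_available(), faiss.get_num_gpus())"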

Prepare Data

  • Download the CARS196 dataset and extract it
$ wget http://imagenet.stanford.edu/internal/car196/car_ims.tgz
$ tar zxvf car_ims.tgz -C ./dataset
  • Rearrange the CARS196 directory into the following structure
# Dataset structure
/dataset/carDB/
  train/
    class1/
      img1.jpeg
    class2/
      img2.jpeg
  test/
    class1/
      img3.jpeg
    class2/
      img4.jpeg
# Rearrange dataset structure
$ python dataset/prepare_cars.py
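
The resulting layout is the common one-folder-per-class format (e.g. what torchvision.datasets.ImageFolder expects). A hypothetical sanity check that the prepared splits load is shown below; it is independent of whatever data-loading code main.py actually uses.

# Hypothetical check of the rearranged CARS196 splits (not part of the repository).
from torchvision import datasets

train_set = datasets.ImageFolder("./dataset/carDB/train")
test_set = datasets.ImageFolder("./dataset/carDB/test")
print(len(train_set.classes), "train classes,", len(train_set), "train images")
print(len(test_set.classes), "test classes,", len(test_set), "test images")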

Train models

Norm-SoftMax loss with CARS196

# Norm-SoftMax
$ python main.py --gpu=0 \
--save_path=./logs/CARS196_norm_softmax \
--data=./dataset/carDB --data_name=cars196 \
--dim=512 --batch_size=128 --epochs=130 \
--freeze_BN --loss=Norm_SoftMax \
--decay_step=50 --decay_stop=50 --n_instance=1 \
--scale=23.0 --check_epoch=5
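
For orientation, --scale acts as an inverse temperature on cosine logits in a normalized softmax loss. Below is a minimal, generic sketch of such a loss for illustration only; it is not the repository's Norm_SoftMax class, which remains authoritative.

# Generic normalized softmax (proxy) loss with a scale factor, for illustration only.
import torch
import torch.nn.functional as F

def norm_softmax_loss(embeddings, proxies, labels, scale=23.0):
    emb = F.normalize(embeddings, dim=1)   # L2-normalize embeddings
    pxy = F.normalize(proxies, dim=1)      # L2-normalize class proxies
    logits = scale * emb @ pxy.t()         # scaled cosine similarities
    return F.cross_entropy(logits, labels)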

PS + Norm-SoftMax loss with CARS196

# PS + Norm-SoftMax
$ python main.py --gpu=0 \
--save_path=./logs/CARS196_PS_norm_softmax \
--data=./dataset/carDB --data_name=cars196 \
--dim=512 --batch_size=128 --epochs=130 \
--freeze_BN --loss=Norm_SoftMax \
--decay_step=50 --decay_stop=50 --n_instance=1 \
--scale=23.0 --check_epoch=5 \
--ps_alpha=0.40 --ps_mu=1.0

Proxy-NCA loss with CARS196

# Proxy-NCA
$ python main.py --gpu=0 \
--save_path=./logs/CARS196_proxy_nca \
--data=./dataset/carDB --data_name=cars196 \
--dim=512 --batch_size=128 --epochs=130 \
--freeze_BN --loss=Proxy_NCA \
--decay_step=50 --decay_stop=50 --n_instance=1 \
--scale=12.0 --check_epoch=5

PS + Proxy-NCA loss with CARS196

# PS + Proxy-NCA
$ python main.py --gpu=0 \
--save_path=./logs/CARS196_PS_proxy_nca \
--data=./dataset/carDB --data_name=cars196 \
--dim=512 --batch_size=128 --epochs=130 \
--freeze_BN --loss=Proxy_NCA \
--decay_step=50 --decay_stop=50 --n_instance=1 \
--scale=12.0 --check_epoch=5 \
--ps_alpha=0.40 --ps_mu=1.0

Check Test Results

$ tensorboard --logdir=logs --port=10000

Experimental results

  • We report Recall@1, RP, and MAP for each loss, trained on the CARS196 dataset over 8 runs (a rough Recall@1 computation sketch follows the tables).

Recall@1

Loss               Run 1  Run 2  Run 3  Run 4  Run 5  Run 6  Run 7  Run 8  Mean ± std
Norm-SoftMax       83.38  83.25  83.25  83.18  83.05  82.90  82.83  82.79  83.08 ± 0.21
PS + Norm-SoftMax  84.69  84.58  84.45  84.35  84.22  83.95  83.91  83.89  84.25 ± 0.31
Proxy-NCA          83.74  83.69  83.62  83.32  83.06  83.00  82.97  82.84  83.28 ± 0.36
PS + Proxy-NCA     84.52  84.39  84.32  84.29  84.22  84.12  83.94  83.88  84.21 ± 0.21

RP

Loss               Run 1  Run 2  Run 3  Run 4  Run 5  Run 6  Run 7  Run 8  Mean ± std
Norm-SoftMax       35.85  35.51  35.28  35.28  35.24  34.95  34.87  34.84  35.23 ± 0.34
PS + Norm-SoftMax  37.01  36.98  36.92  36.74  36.74  36.73  36.54  36.45  36.76 ± 0.20
Proxy-NCA          36.08  35.85  35.79  35.66  35.66  35.63  35.47  35.43  35.70 ± 0.21
PS + Proxy-NCA     36.97  36.84  36.72  36.64  36.63  36.60  36.43  36.41  36.66 ± 0.18

MAP

Loss               Run 1  Run 2  Run 3  Run 4  Run 5  Run 6  Run 7  Run 8  Mean ± std
Norm-SoftMax       25.56  25.56  25.00  24.93  24.90  24.59  24.57  24.56  24.92 ± 0.35
PS + Norm-SoftMax  26.71  26.67  26.65  26.56  26.53  26.52  26.30  26.17  26.51 ± 0.18
Proxy-NCA          25.66  25.52  25.37  25.36  25.33  25.26  25.22  25.04  25.35 ± 0.18
PS + Proxy-NCA     26.77  26.63  26.50  26.42  26.37  26.31  26.25  26.12  26.42 ± 0.20
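
For reference, Recall@1 over a set of test embeddings can be computed with a Faiss nearest-neighbour search roughly as below. This is a minimal sketch and may differ in details (normalization, tie handling) from the evaluation code in this repository; labels is assumed to be a NumPy array of class ids.

# Minimal Recall@1 sketch with Faiss: for each query, check whether its nearest
# neighbour (excluding the query itself) shares the query's class label.
import faiss
import numpy as np

def recall_at_1(embeddings, labels):
    x = np.ascontiguousarray(embeddings, dtype="float32")
    faiss.normalize_L2(x)                  # cosine similarity via inner product
    index = faiss.IndexFlatIP(x.shape[1])
    index.add(x)
    _, nn = index.search(x, 2)             # top-2: the first hit is the query itself
    return float(np.mean(labels[nn[:, 1]] == labels))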

Performance Graph

  • The figure below shows test-set performance during training.

Reference

  • Our code is based on the SoftTriple repository (arXiv, GitHub)

Citation

If you find Proxy Synthesis useful in your research, please consider citing the following paper.

@inproceedings{gu2020proxy,
    title={Proxy Synthesis: Learning with Synthetic Classes for Deep Metric Learning},
    author={Geonmo Gu and Byungsoo Ko and Han-Gyu Kim},
    booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
    year={2021}
}

License

Copyright 2021-present NAVER Corp.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.