Proxy Synthesis: Learning with Synthetic Classes for Deep Metric Learning

Official PyTorch implementation of "Proxy Synthesis: Learning with Synthetic Classes for Deep Metric Learning" (AAAI 2021)

Geonmo Gu*¹, Byungsoo Ko*¹, Han-Gyu Kim² (* Authors contributed equally.)

¹@NAVER/LINE Vision, ²@NAVER Clova Speech

Overview

Proxy Synthesis

  • Proxy Synthesis (PS) is a novel regularizer for any softmax variant or proxy-based loss in deep metric learning.

How does it work?

  • Proxy Synthesis exploits synthetic classes to improve generalization by considering class relations and producing smooth decision boundaries.
  • Synthetic classes mimic unseen classes during the training phase, as illustrated in the figure below; a minimal code sketch of the idea follows this list.
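
The core idea can be sketched in a few lines of PyTorch. The snippet below is an illustrative reimplementation, not the repository's exact code; the reading of ps_alpha (Beta-distribution parameter for the mixing coefficient) and ps_mu (fraction of the batch used to build synthetic classes) is an assumption based on the training flags shown later.

# Illustrative sketch of Norm-SoftMax with Proxy Synthesis (not the repository's exact code).
import torch
import torch.nn.functional as F

def ps_norm_softmax_loss(embeddings, labels, proxies, scale=23.0, ps_alpha=0.4, ps_mu=1.0):
    """embeddings: (B, D) batch embeddings; labels: (B,) class ids; proxies: (C, D) learnable proxies.
    ps_mu is interpreted here as the fraction of the batch used to build synthetic classes (assumption)."""
    emb = F.normalize(embeddings, dim=1)
    prx = F.normalize(proxies, dim=1)

    # Pair each sample with a rolled copy of the batch so paired samples usually
    # come from different classes, and draw one mixing coefficient per pair.
    n_syn = int(ps_mu * emb.size(0))
    lam = torch.distributions.Beta(ps_alpha, ps_alpha).sample((n_syn, 1)).to(emb.device)
    idx = torch.roll(torch.arange(emb.size(0), device=emb.device), 1)[:n_syn]

    # Synthetic embeddings and synthetic proxies share the same coefficient,
    # so each mixed pair defines a new synthetic class.
    syn_emb = F.normalize(lam * emb[:n_syn] + (1 - lam) * emb[idx], dim=1)
    syn_prx = F.normalize(lam * prx[labels[:n_syn]] + (1 - lam) * prx[labels[idx]], dim=1)

    # Norm-SoftMax over real + synthetic classes.
    all_emb = torch.cat([emb, syn_emb], dim=0)
    all_prx = torch.cat([prx, syn_prx], dim=0)
    syn_labels = proxies.size(0) + torch.arange(n_syn, device=labels.device)
    all_labels = torch.cat([labels, syn_labels], dim=0)
    logits = scale * all_emb @ all_prx.t()
    return F.cross_entropy(logits, all_labels)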

Experimental results

  • Proxy Synthesis improves performance for every loss and benchmark dataset.

Getting Started

Installation

  1. Clone the repository locally
$ git clone https://github.com/navervision/proxy-synthesis
  2. Create a conda virtual environment
$ conda create -n proxy_synthesis python=3.7 anaconda
$ conda activate proxy_synthesis
  3. Install PyTorch
$ conda install pytorch torchvision cudatoolkit=<YOUR_CUDA_VERSION> -c pytorch
  4. Install faiss
$ conda install faiss-gpu cudatoolkit=<YOUR_CUDA_VERSION> -c pytorch
  5. Install the remaining requirements
$ pip install -r requirements.txt
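
After installation, a quick import check can confirm that the GPU builds of PyTorch and faiss are usable (optional sketch):

# Optional sanity check: verify CUDA-enabled PyTorch and faiss-gpu are importable.
import torch
import faiss

print(torch.__version__, torch.cuda.is_available())  # expect True on a CUDA machine
print(faiss.get_num_gpus())                          # expect >= 1 with faiss-gpu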

Prepare Data

  • Download the CARS196 dataset and extract it
$ wget http://imagenet.stanford.edu/internal/car196/car_ims.tgz
$ tar zxvf car_ims.tgz -C ./dataset
  • Rearrange the CARS196 directory into the following structure
# Dataset structure
/dataset/carDB/
  train/
    class1/
      img1.jpeg
    class2/
      img2.jpeg
  test/
    class1/
      img3.jpeg
    class2/
      img4.jpeg
# Rearrange dataset structure
$ python dataset/prepare_cars.py
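
To verify the rearranged layout, counting classes and images per split is a quick check (a minimal sketch assuming the directory structure above):

# Optional check: count classes and images per split under ./dataset/carDB.
import os

for split in ('train', 'test'):
    root = os.path.join('dataset', 'carDB', split)
    classes = sorted(os.listdir(root))
    n_images = sum(len(os.listdir(os.path.join(root, c))) for c in classes)
    print(split, len(classes), 'classes,', n_images, 'images')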

Train models

Norm-SoftMax loss with CARS196

# Norm-SoftMax
$ python main.py --gpu=0 \
--save_path=./logs/CARS196_norm_softmax \
--data=./dataset/carDB --data_name=cars196 \
--dim=512 --batch_size=128 --epochs=130 \
--freeze_BN --loss=Norm_SoftMax \
--decay_step=50 --decay_stop=50 --n_instance=1 \
--scale=23.0 --check_epoch=5

PS + Norm-SoftMax loss with CARS196

# PS + Norm-SoftMax
$ python main.py --gpu=0 \
--save_path=./logs/CARS196_PS_norm_softmax \
--data=./dataset/carDB --data_name=cars196 \
--dim=512 --batch_size=128 --epochs=130 \
--freeze_BN --loss=Norm_SoftMax \
--decay_step=50 --decay_stop=50 --n_instance=1 \
--scale=23.0 --check_epoch=5 \
--ps_alpha=0.40 --ps_mu=1.0

Proxy-NCA loss with CARS196

# Proxy-NCA
$ python main.py --gpu=0 \
--save_path=./logs/CARS196_proxy_nca \
--data=./dataset/carDB --data_name=cars196 \
--dim=512 --batch_size=128 --epochs=130 \
--freeze_BN --loss=Proxy_NCA \
--decay_step=50 --decay_stop=50 --n_instance=1 \
--scale=12.0 --check_epoch=5
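
For reference, the Proxy-NCA objective can be sketched as below; this is an illustrative version using scaled cosine similarities and may differ in detail from the loss implemented in this repository.

# Illustrative Proxy-NCA sketch: pull each sample toward its class proxy and
# push it away from all other proxies (the positive proxy is excluded from the denominator).
import torch
import torch.nn.functional as F

def proxy_nca_loss(embeddings, labels, proxies, scale=12.0):
    emb = F.normalize(embeddings, dim=1)
    prx = F.normalize(proxies, dim=1)
    logits = scale * emb @ prx.t()                                # (B, C) similarities
    pos = logits.gather(1, labels.unsqueeze(1)).squeeze(1)        # similarity to own proxy
    neg = logits.scatter(1, labels.unsqueeze(1), float('-inf'))   # mask out own proxy
    return (-pos + torch.logsumexp(neg, dim=1)).mean()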

PS + Proxy-NCA loss with CARS196

# PS + Proxy-NCA
$ python main.py --gpu=0 \
--save_path=./logs/CARS196_PS_proxy_nca \
--data=./dataset/carDB --data_name=cars196 \
--dim=512 --batch_size=128 --epochs=130 \
--freeze_BN --loss=Proxy_NCA \
--decay_step=50 --decay_stop=50 --n_instance=1 \
--scale=12.0 --check_epoch=5 \
--ps_alpha=0.40 --ps_mu=1.0

Check Test Results

$ tensorboard --logdir=logs --port=10000

Experimental results

  • We report Recall@1, RP, and MAP for each loss, trained on the CARS196 dataset over 8 runs.

Recall@1

Loss 1 2 3 4 5 6 7 8 Mean ± std
Norm-SoftMax 83.38 83.25 83.25 83.18 83.05 82.90 82.83 82.79 83.08 ± 0.21
PS + Norm-SoftMax 84.69 84.58 84.45 84.35 84.22 83.95 83.91 83.89 84.25 ± 0.31
Proxy-NCA 83.74 83.69 83.62 83.32 83.06 83.00 82.97 82.84 83.28 ± 0.36
PS + Proxy-NCA 84.52 84.39 84.32 84.29 84.22 84.12 83.94 83.88 84.21 ± 0.21

RP

Loss 1 2 3 4 5 6 7 8 Mean ± std
Norm-SoftMax 35.85 35.51 35.28 35.28 35.24 34.95 34.87 34.84 35.23 ± 0.34
PS + Norm-SoftMax 37.01 36.98 36.92 36.74 36.74 36.73 36.54 36.45 36.76 ± 0.20
Proxy-NCA 36.08 35.85 35.79 35.66 35.66 35.63 35.47 35.43 35.70 ± 0.21
PS + Proxy-NCA 36.97 36.84 36.72 36.64 36.63 36.60 36.43 36.41 36.66 ± 0.18

MAP

Loss 1 2 3 4 5 6 7 8 Mean ± std
Norm-SoftMax 25.56 25.56 25.00 24.93 24.90 24.59 24.57 24.56 24.92 ± 0.35
PS + Norm-SoftMax 26.71 26.67 26.65 26.56 26.53 26.52 26.30 26.17 26.51 ± 0.18
Proxy-NCA 25.66 25.52 25.37 25.36 25.33 25.26 25.22 25.04 25.35 ± 0.18
PS + Proxy-NCA 26.77 26.63 26.50 26.42 26.37 26.31 26.25 26.12 26.42 ± 0.20
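
For reference, Recall@1 can be computed from test-set embeddings roughly as in the sketch below (an illustrative faiss-based evaluation, not the repository's exact evaluation code):

# Illustrative Recall@1: fraction of queries whose nearest neighbor
# (excluding the query itself) has the same class label.
import faiss
import numpy as np

def recall_at_1(embeddings, labels):
    emb = np.ascontiguousarray(embeddings.astype('float32'))
    faiss.normalize_L2(emb)                  # cosine similarity via inner product
    index = faiss.IndexFlatIP(emb.shape[1])
    index.add(emb)
    _, nn = index.search(emb, 2)             # top-2: the query itself + its nearest neighbor
    return float(np.mean(labels[nn[:, 1]] == labels))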

Performance Graph

  • The figure below shows the performance on the test set during training.

Reference

  • Our code is based on the SoftTriple repository (arXiv, GitHub)

Citation

If you find Proxy Synthesis useful in your research, please consider citing the following paper.

@inproceedings{gu2020proxy,
    title={Proxy Synthesis: Learning with Synthetic Classes for Deep Metric Learning},
    author={Gu, Geonmo and Ko, Byungsoo and Kim, Han-Gyu},
    booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
    year={2021}
}

License

Copyright 2021-present NAVER Corp.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.