# SALOD

Source code of our work: "Benchmarking Deep Models for Salient Object Detection".
In this work, we propose a new benchmark for SALient Object Detection (SALOD) methods.

We re-implement 14 methods using the same settings, including input size, data loader and evaluation metrics (thanks to Metrics). Optimizer hyperparameters differ between methods because of their different network structures and objective functions; we tuned the optimizer for each model individually to achieve its best performance. Several other networks are still being debugged, and contributions that improve their performance are welcome.

## Properties

1. A unified interface for new models. To develop a new network, you only need to 1) set configs; 2) define the network; 3) define the loss function. See `methods/template`.
2. We build a new dataset by collecting several prevalent datasets used for the SOD task.
3. Easy to adopt different backbones (available backbones: ResNet-50, VGG-16, MobileNet-v2, EfficientNet-B0, GhostNet, Res2Net); see the sketch after this list.
4. Test all networks on your own device. Given the name of a network, you can test any available method in our benchmark. Comparisons include FPS, GFLOPs, model size and multiple effectiveness metrics.
5. We implement a loss factory so that you can change the loss functions using command-line parameters.
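
Conceptually, swapping backbones boils down to a small factory that maps a name to a feature extractor. Below is a minimal sketch, not the repository's actual code: `get_backbone` is a hypothetical helper using torchvision models, and backbones such as GhostNet and Res2Net would come from their reference implementations instead.

```python
# Hypothetical backbone factory; the real selection logic lives in the repo.
import torchvision.models as models

def get_backbone(name: str):
    """Return an ImageNet-pretrained feature extractor for the given name."""
    if name == 'resnet50':
        return models.resnet50(pretrained=True)
    elif name == 'vgg16':
        return models.vgg16(pretrained=True)
    elif name == 'mobilenetv2':
        return models.mobilenet_v2(pretrained=True)
    else:
        # GhostNet / Res2Net / EfficientNet-B0 would be loaded from their
        # respective reference implementations here.
        raise ValueError(f'Unknown backbone: {name}')
```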

## Available Methods

| Methods | Publish. | Input | Weight | Optim. | LR | Epoch | Paper | Src Code |
| :------ | :------- | :---- | :----- | :----- | :--- | :---- | :--------- | :--------- |
| DHSNet  | CVPR2016 | 320^2 | 95M  | Adam | 2e-5 | 30 | openaccess | Pytorch |
| NLDF    | CVPR2017 | 320^2 | 161M | Adam | 1e-5 | 30 | openaccess | Pytorch/TF |
| Amulet  | ICCV2017 | 320^2 | 312M | Adam | 1e-5 | 30 | openaccess | Pytorch |
| SRM     | ICCV2017 | 320^2 | 240M | Adam | 5e-5 | 30 | openaccess | Pytorch |
| PicaNet | CVPR2018 | 320^2 | 464M | SGD  | 1e-2 | 30 | openaccess | Pytorch |
| DSS     | TPAMI2019 | 320^2 | 525M | Adam | 2e-5 | 30 | IEEE/ArXiv | Pytorch |
| BASNet  | CVPR2019 | 320^2 | 374M | Adam | 1e-5 | 30 | openaccess | Pytorch |
| CPD     | CVPR2019 | 320^2 | 188M | Adam | 1e-5 | 30 | openaccess | Pytorch |
| PoolNet | CVPR2019 | 320^2 | 267M | Adam | 5e-5 | 30 | openaccess | Pytorch |
| EGNet   | ICCV2019 | 320^2 | 437M | Adam | 5e-5 | 30 | openaccess | Pytorch |
| SCRN    | ICCV2019 | 320^2 | 100M | SGD  | 1e-2 | 30 | openaccess | Pytorch |
| GCPA    | AAAI2020 | 320^2 | 263M | SGD  | 1e-2 | 30 | aaai.org   | Pytorch |
| ITSD    | CVPR2020 | 320^2 | 101M | SGD  | 5e-3 | 30 | openaccess | Pytorch |
| MINet   | CVPR2020 | 320^2 | 635M | SGD  | 1e-3 | 30 | openaccess | Pytorch |
| **Tuning** | ----- | ----- | ------ | ------ | ----- | ----- | ----- | ----- |
| *PAGE   | CVPR2019 | 320^2 | ------ | ------ | ----- | ----- | openaccess | TF |
| *PFA    | CVPR2019 | 320^2 | ------ | ------ | ----- | ----- | openaccess | Pytorch |
| *F3Net  | AAAI2020 | 320^2 | ------ | ------ | ----- | ----- | aaai.org   | Pytorch |
| *PFPN   | AAAI2020 | 320^2 | ------ | ------ | ----- | ----- | aaai.org   | Pytorch |
| *LDF    | CVPR2020 | 320^2 | ------ | ------ | ----- | ----- | openaccess | Pytorch |

## Usage

```shell
# model_name: lower-cased method name, e.g. poolnet, egnet, gcpa, dhsnet or minet.
python3 train.py model_name --gpus=0

python3 test.py model_name --gpus=0 --weight=path_to_weight

python3 test_fps.py model_name --gpus=0

# To evaluate generated maps:
python3 eval.py --pre_path=path_to_maps
```
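
FPS numbers of this kind are typically measured by timing repeated forward passes after a warm-up phase. The sketch below shows one plausible protocol; `measure_fps`, the warm-up length and the batch size are illustrative assumptions, not necessarily what `test_fps.py` does.

```python
# A minimal FPS-measurement sketch (assumed protocol, single CUDA device).
import time
import torch

@torch.no_grad()
def measure_fps(model, input_size=320, n_warmup=10, n_runs=100, device='cuda'):
    model = model.to(device).eval()
    x = torch.randn(1, 3, input_size, input_size, device=device)
    for _ in range(n_warmup):      # warm up CUDA kernels before timing
        model(x)
    torch.cuda.synchronize()
    start = time.time()
    for _ in range(n_runs):
        model(x)
    torch.cuda.synchronize()       # wait for all queued work before stopping
    return n_runs / (time.time() - start)
```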

## Results

We report benchmark results here.
For more results, please refer to Reproduction, Few-shot and Generalization.

Notice: please contact us if you get better results.
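
As a reminder of what the metric columns below mean, the snippet sketches MAE and the F-measure (with the common beta^2 = 0.3). The benchmark itself relies on the Metrics toolbox, so details such as threshold sweeps and epsilons may differ from this sketch.

```python
# Illustrative MAE / F-measure, assuming pred and gt are float arrays in [0, 1].
import numpy as np

def mae(pred, gt):
    return np.abs(pred - gt).mean()

def f_measure(pred, gt, threshold, beta2=0.3, eps=1e-8):
    binary = pred >= threshold
    tp = (binary & (gt > 0.5)).sum()
    precision = tp / (binary.sum() + eps)
    recall = tp / ((gt > 0.5).sum() + eps)
    return (1 + beta2) * precision * recall / (beta2 * precision + recall + eps)

# max-F sweeps all 256 thresholds; ave-F commonly uses an adaptive threshold.
def max_f(pred, gt):
    return max(f_measure(pred, gt, t / 255.0) for t in range(256))
```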

VGG16-based:

| Methods | #Param. | GFLOPs | Tr. Time | FPS | max-F | ave-F | Fbw | MAE | SM | EM | Weight |
| :------ | :------ | :----- | :------- | :-- | :---- | :---- | :--- | :--- | :--- | :--- | :----- |
| DHSNet  | 15.4  | 52.5   | 7.5   | 69.8 | .884 | .815 | .812 | .049 | .880 | .893 | |
| Amulet  | 33.2  | 1362   | 12.5  | 35.1 | .855 | .790 | .772 | .061 | .854 | .876 | |
| NLDF    | 24.6  | 136    | 9.7   | 46.3 | .886 | .824 | .828 | .045 | .881 | .898 | |
| SRM     | 37.9  | 73.1   | 7.9   | 63.1 | .857 | .779 | .769 | .060 | .859 | .874 | |
| PicaNet | 26.3  | 74.2   | 40.5* | 8.8  | .889 | .819 | .823 | .046 | .884 | .899 | |
| DSS     | 62.2  | 99.4   | 11.3  | 30.3 | .891 | .827 | .826 | .046 | .888 | .899 | |
| BASNet  | 80.5  | 114.3  | 16.9  | 32.6 | .906 | .853 | .869 | .036 | .899 | .915 | |
| CPD     | 29.2  | 85.9   | 10.5  | 36.3 | .886 | .815 | .792 | .052 | .885 | .888 | |
| PoolNet | 52.5  | 236.2  | 26.4  | 23.1 | .902 | .850 | .852 | .039 | .898 | .913 | |
| EGNet   | 101   | 178.8  | 19.2  | 16.3 | .909 | .853 | .859 | .037 | .904 | .914 | |
| SCRN    | 16.3  | 47.2   | 9.3   | 24.8 | .896 | .820 | .822 | .046 | .891 | .894 | |
| GCPA    | 42.8  | 197.1  | 17.5  | 29.3 | .903 | .836 | .845 | .041 | .898 | .907 | |
| ITSD    | 16.9  | 76.3   | 15.2* | 30.6 | .905 | .820 | .834 | .045 | .901 | .896 | |
| MINet   | 47.8  | 162    | 21.8  | 23.4 | .900 | .839 | .852 | .039 | .895 | .909 | |

ResNet50-based:

| Methods | #Param. | GFLOPs | Tr. Time | FPS | max-F | ave-F | Fbw | MAE | SM | EM | Weight |
| :------ | :------ | :----- | :------- | :-- | :---- | :---- | :--- | :--- | :--- | :--- | :----- |
| DHSNet  | 24.2  | 13.8   | 3.9   | 49.2 | .909 | .830 | .848 | .039 | .905 | .905 | |
| Amulet  | 79.8  | 1093.8 | 6.3   | 35.1 | .895 | .822 | .835 | .042 | .894 | .900 | |
| NLDF    | 41.1  | 115.1  | 9.2   | 30.5 | .903 | .837 | .855 | .038 | .898 | .910 | |
| SRM     | 61.2  | 20.2   | 5.5   | 34.3 | .882 | .803 | .812 | .047 | .885 | .891 | |
| PicaNet | 106.1 | 36.9   | 18.5* | 14.8 | .904 | .823 | .843 | .041 | .902 | .902 | |
| DSS     | 134.3 | 35.3   | 6.6   | 27.3 | .894 | .821 | .826 | .045 | .893 | .898 | |
| BASNet  | 95.5  | 47.2   | 12.2  | 32.8 | .917 | .861 | .884 | .032 | .909 | .921 | |
| CPD     | 47.9  | 14.7   | 7.7   | 22.7 | .906 | .842 | .836 | .040 | .904 | .908 | |
| PoolNet | 68.3  | 66.9   | 10.2  | 33.9 | .912 | .843 | .861 | .036 | .907 | .912 | |
| EGNet   | 111.7 | 222.8  | 25.7  | 10.2 | .917 | .851 | .867 | .036 | .912 | .914 | |
| SCRN    | 25.2  | 12.5   | 5.5   | 19.3 | .910 | .838 | .845 | .040 | .906 | .905 | |
| GCPA    | 67.1  | 54.3   | 6.8   | 37.8 | .916 | .841 | .866 | .035 | .912 | .912 | |
| ITSD    | 25.7  | 19.6   | 5.7   | 29.4 | .913 | .825 | .842 | .042 | .907 | .899 | |
| MINet   | 162.4 | 87     | 11.7  | 23.5 | .913 | .851 | .871 | .034 | .906 | .917 | |

## Create New Model

To create a new model, copy the template folder and modify it as you want:

```shell
cp -r ./methods/template ./methods/new_name
```

For more details, please refer to the Python files in the template folder.
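
Under the interface described in Properties (configs, network, loss), a new method module might look roughly like the hypothetical outline below. All names and signatures here are illustrative assumptions; the real files in `methods/template` define the authoritative interface.

```python
# Hypothetical outline of a method module; see methods/template for the
# actual interface expected by the benchmark.
import torch.nn as nn
import torch.nn.functional as F

class Network(nn.Module):
    """New method: consumes backbone features, predicts a saliency map."""
    def __init__(self, config, encoder, feat_channels):
        super().__init__()
        self.encoder = encoder                              # shared backbone
        self.head = nn.Conv2d(feat_channels[-1], 1, 3, padding=1)

    def forward(self, x):
        feats = self.encoder(x)                             # multi-level features
        pred = self.head(feats[-1])                         # coarse prediction
        # Upsample back to the input resolution for supervision/evaluation.
        return {'final': F.interpolate(pred, size=x.shape[2:], mode='bilinear')}

def loss_func(preds, target, config):
    # Default supervision; the loss factory can replace this via --loss.
    return F.binary_cross_entropy_with_logits(preds['final'], target)
```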

## Loss Factory

We supply a loss factory to make it easier to tune loss functions. You can set the --loss and --lw parameters to use it.

Here are some examples:

```python
loss_dict = {'b': BCE, 's': SSIM, 'i': IOU, 'd': DICE, 'e': Edge, 'c': CTLoss}
```

```shell
python train.py ... --loss=bd
# loss = 1 * bce_loss + 1 * dice_loss

python train.py ... --loss=bs --lw=0.3,0.7
# loss = 0.3 * bce_loss + 0.7 * ssim_loss

python train.py ... --loss=bsid --lw=0.3,0.1,0.5,0.2
# loss = 0.3 * bce_loss + 0.1 * ssim_loss + 0.5 * iou_loss + 0.2 * dice_loss
```
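
A factory of this kind can be built by mapping each letter of --loss to a loss function and zipping it with the weights from --lw. The sketch below is illustrative only: just the 'b' and 'i' entries are filled in, and the repository's own factory may parse these flags differently.

```python
# Illustrative loss-factory sketch (assumed parsing of --loss / --lw).
import torch.nn.functional as F

def bce_loss(pred, target):
    return F.binary_cross_entropy_with_logits(pred, target)

def iou_loss(pred, target, eps=1e-6):
    # Soft IoU computed on sigmoid probabilities.
    prob = pred.sigmoid()
    inter = (prob * target).sum(dim=(2, 3))
    union = (prob + target - prob * target).sum(dim=(2, 3))
    return (1 - (inter + eps) / (union + eps)).mean()

loss_dict = {'b': bce_loss, 'i': iou_loss}  # 's', 'd', 'e', 'c' omitted here

def build_loss(loss_str, lw_str=None):
    """loss_str like 'bi'; lw_str like '0.3,0.7' (defaults to all ones)."""
    weights = [float(w) for w in lw_str.split(',')] if lw_str else [1.0] * len(loss_str)
    assert len(weights) == len(loss_str), '--lw must match --loss in length'
    def total(pred, target):
        return sum(w * loss_dict[k](pred, target) for k, w in zip(loss_str, weights))
    return total
```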