PyTorch Implementation of Auto-Compressing Subset Pruning for Semantic Image Segmentation


Introduction

ACoSP is an online pruning algorithm that compresses convolutional neural networks during training. It learns to select a subset of channels from each convolutional layer through a sigmoid function, as shown in the figure: for each channel, a learnable parameter w_i is mapped through the sigmoid to a scaling factor for that channel's activations.

ACoSP selection scheme.

The segmentation maps show predictions of compressed PSPNet-50 models trained on Cityscapes; the compressed models are up to 16 times smaller.
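
To make the selection scheme concrete, the following is a minimal, illustrative sketch of a sigmoid channel gate. It is not the repository's actual masking layer; the class name, the zero initialization of the logits, and the temperature handling are assumptions for illustration only.

import torch
import torch.nn as nn


class SigmoidChannelGate(nn.Module):
    """Illustrative sketch only: one logit w_i per channel, mapped through a
    temperature-scaled sigmoid and used to scale that channel's activations."""

    def __init__(self, num_channels: int, temperature: float = 1.0):
        super().__init__()
        self.w = nn.Parameter(torch.zeros(num_channels))  # learnable logits w_i
        self.temperature = temperature  # annealed during training

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x has shape (N, C, H, W); broadcast the per-channel gate over H and W
        gate = torch.sigmoid(self.w / self.temperature)
        return x * gate.view(1, -1, 1, 1)

As the temperature shrinks, the gates saturate towards 0 or 1, which effectively selects a subset of channels.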

Repository

This repository is a PyTorch implementation of ACoSP based on hszhao/semseg. It was used to run all experiments reported in the publication and is meant to guarantee reproducibility and auditability of our results.

The training, test, and configuration infrastructure is kept close to semseg, with only minor modifications to improve reproducibility and to integrate our pruning code. The model/ package contains the PSPNet50 and SegNet model definitions; acosp/ contains all code required to prune during training.

The current configs expect a special folder structure (but can be easily adapted):

  • /data: Datasets, Pretrained-weights
  • /logs/exp: Folder to store experiments

Installation

  1. Clone the repository:

    git clone git@github.com:merantix/acosp.git
  2. Install ACoSP including requirements:

    pip install .

Using ACoSP

The implementation of ACoSP is encapsulated in /acosp, and using it independently of the rest of the experimentation code is straightforward.

  1. Create a pruner and adapt the model:
from acosp.pruner import SoftTopKPruner
import acosp.inject

# Create pruner object
pruner = SoftTopKPruner(
    starting_epoch=0,
    ending_epoch=100,  # Pruning duration
    final_sparsity=0.5,  # Final sparsity
)
# Add sigmoid soft k masks to model
pruner.configure_model(model)
  2. In your training loop, update the temperature of all masking layers:
# Update the temperature in all masking layers
pruner.update_mask_layers(model, epoch)
  3. Convert the soft pruning to hard pruning when ending_epoch is reached:
if epoch == pruner.ending_epoch:
    # Convert to binary channel mask
    acosp.inject.soft_to_hard_k(model)
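
Putting the three steps above together, a minimal sketch of a training loop could look as follows. `model`, `train_loader`, `criterion`, `optimizer` and `num_epochs` are placeholders for your own setup and are not provided by acosp.

from acosp.pruner import SoftTopKPruner
import acosp.inject

# Sketch only: model, train_loader, criterion, optimizer and num_epochs are
# assumed to be defined elsewhere.
pruner = SoftTopKPruner(starting_epoch=0, ending_epoch=100, final_sparsity=0.5)
pruner.configure_model(model)  # (1) add sigmoid soft top-k masks

for epoch in range(num_epochs):
    pruner.update_mask_layers(model, epoch)  # (2) anneal the mask temperature

    for images, targets in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), targets)
        loss.backward()
        optimizer.step()

    if epoch == pruner.ending_epoch:
        acosp.inject.soft_to_hard_k(model)  # (3) convert to binary channel masks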

Experiments

  1. Highlights:

    • All initialization models and trained models are available. The structure is:
      | init/  # initial models
      | exp/
      |-- ade20k/  # ade20k/camvid/cityscapes/voc2012/cifar10
      | |-- pspnet50_{SPARSITY}/  # sparsity is the fraction of weights removed, i.e. sparsity = 1 - 1/compression_ratio, so sparsity=0.75 <==> compression_ratio=4 (see the conversion sketch at the end of this section)
      |   |-- model # model files
      |   |-- ... # config/train/test files
      |-- evals/  # all result with class wise IoU/Acc
      
  2. Hardware Requirements: at least 60GB (PSPNet50) / 16GB (SegNet) of GPU RAM; the memory can be distributed across multiple GPUs.

  3. Train:

    • Download the related datasets and symlink their paths as follows (alternatively, modify the relevant paths specified in the config folder):

      mkdir -p /data
      ln -s /path_to_ade20k_dataset /data/ade20k
      
    • Download ImageNet pre-trained models and put them under folder /data for weight initialization. Remember to use the right dataset format detailed in FAQ.md.

    • Specify the GPU to use in the config, then start training. (Training with ACoSP has only been carried out on a single GPU and has not been tested with DDP.) The general structure to access individual configs is as follows:

      sh tool/train.sh ${DATASET} ${CONFIG_NAME_WITHOUT_DATASET}

      E.g. to train a PSPNet50 on the ade20k dataset with the config `config/ade20k/ade20k_pspnet50.yaml`, execute:

      sh tool/train.sh ade20k pspnet50
  4. Test:

    • Download the trained segmentation models and put them under the folder specified in the config, or modify the specified paths.

    • For full testing (get listed performance):

      sh tool/test.sh ade20k pspnet50
  5. Visualization: tensorboardX is incorporated for better visualization.

    tensorboard --logdir=/logs/exp/ade20k
  6. Other:

    • Resources: GoogleDrive LINK contains shared models, visual predictions and data lists.
    • Models: ImageNet pre-trained models and trained segmentation models can be accessed. Note that our ImageNet pretrained models differ slightly from the original ResNet implementation in the first layers.
    • Predictions: Visual predictions of several models can be accessed.
    • Datasets: attributes (names and colors) are in folder dataset and some sample lists can be accessed.
    • Some FAQs: FAQ.md.
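
The ACoSP-{COMPRESSION_RATIO} naming used in the tables below maps to the final_sparsity argument of the pruner via sparsity = 1 - 1/compression_ratio, i.e. the fraction of weights removed. A small helper for this conversion could look like the sketch below; the helper function is hypothetical and not part of the repository.

from acosp.pruner import SoftTopKPruner


def sparsity_for_compression_ratio(ratio: float) -> float:
    # Fraction of weights to remove so the model shrinks by `ratio`,
    # e.g. ratio=4 -> sparsity=0.75, ratio=16 -> sparsity=0.9375.
    return 1.0 - 1.0 / ratio


# E.g. configure a pruner for an ACoSP-16 run (illustrative values):
pruner = SoftTopKPruner(
    starting_epoch=0,
    ending_epoch=100,
    final_sparsity=sparsity_for_compression_ratio(16),
)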

Performance

Description: mIoU and mAcc stand for mean IoU and mean class accuracy, respectively (see the sketch after the tables for how they are computed). General parameters shared across the different datasets are listed below:

  • Network: {NETWORK} @ ACoSP-{COMPRESSION_RATIO}
  • Train Parameters: sync_bn(True), scale_min(0.5), scale_max(2.0), rotate_min(-10), rotate_max(10), zoom_factor(8), aux_weight(0.4), base_lr(1e-2), power(0.9), momentum(0.9), weight_decay(1e-4).
  • Test Parameters: ignore_label(255).
  1. ADE20K: Train Parameters: classes(150), train_h(473), train_w(473), epochs(100). Test Parameters: classes(150), test_h(473), test_w(473), base_size(512).

    • Setting: train on train (20210 images) set and test on val (2000 images) set.
    Network              | mIoU/mAcc
    PSPNet50             | 41.42/51.48
    PSPNet50 @ ACoSP-2   | 38.97/49.56
    PSPNet50 @ ACoSP-4   | 33.67/43.17
    PSPNet50 @ ACoSP-8   | 28.04/35.60
    PSPNet50 @ ACoSP-16  | 19.39/25.52
  2. PASCAL VOC 2012: Train Parameters: classes(21), train_h(473), train_w(473), epochs(50). Test Parameters: classes(21), test_h(473), test_w(473), base_size(512).

    • Setting: train on train_aug (10582 images) set and test on val (1449 images) set.
    Network              | mIoU/mAcc
    PSPNet50             | 77.30/85.27
    PSPNet50 @ ACoSP-2   | 72.71/81.87
    PSPNet50 @ ACoSP-4   | 65.84/77.12
    PSPNet50 @ ACoSP-8   | 58.26/69.65
    PSPNet50 @ ACoSP-16  | 48.06/58.83
  3. Cityscapes: Train Parameters: classes(19), train_h(713/512 - PSP/SegNet), train_w(713/1024 - PSP/SegNet), epochs(200). Test Parameters: classes(19), test_h(713/512 - PSP/SegNet), test_w(713/1024 - PSP/SegNet), base_size(2048).

    • Setting: train on fine_train (2975 images) set and test on fine_val (500 images) set.
    Network              | mIoU/mAcc
    PSPNet50             | 77.35/84.27
    PSPNet50 @ ACoSP-2   | 74.11/81.73
    PSPNet50 @ ACoSP-4   | 71.50/79.40
    PSPNet50 @ ACoSP-8   | 66.06/74.33
    PSPNet50 @ ACoSP-16  | 59.49/67.74
    SegNet               | 65.12/73.85
    SegNet @ ACoSP-2     | 64.62/73.19
    SegNet @ ACoSP-4     | 60.77/69.57
    SegNet @ ACoSP-8     | 54.34/62.48
    SegNet @ ACoSP-16    | 44.12/50.87
  4. CamVid: Train Parameters: classes(11), train_h(360), train_w(720), epochs(450). Test Parameters: classes(11), test_h(360), test_w(720), base_size(360).

    • Setting: train on train (367 images) set and test on test (233 images) set.
    Network              | mIoU/mAcc
    SegNet               | 55.49±0.85/65.44±1.01
    SegNet @ ACoSP-2     | 51.85±0.83/61.86±0.85
    SegNet @ ACoSP-4     | 50.10±1.11/59.79±1.49
    SegNet @ ACoSP-8     | 47.25±1.18/56.87±1.10
    SegNet @ ACoSP-16    | 42.27±1.95/51.25±2.02
  5. Cifar10: Train Parameters: classes(10), train_h(32), train_w(32), epochs(50). Test Parameters: classes(10), test_h(32), test_w(32), base_size(32).

    • Setting: train on train (50000 images) set and test on test (10000 images) set.
    Network              | mAcc
    ResNet18             | 89.68
    ResNet18 @ ACoSP-2   | 88.50
    ResNet18 @ ACoSP-4   | 86.21
    ResNet18 @ ACoSP-8   | 81.06
    ResNet18 @ ACoSP-16  | 76.81
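
For reference, a minimal sketch of how mIoU and mAcc can be computed from a per-class confusion matrix accumulated over the validation set; this is an assumption about the standard metric definitions, not the repository's exact evaluation code.

import numpy as np


def miou_macc(conf: np.ndarray) -> tuple:
    """conf[t, p] = number of pixels with ground-truth class t predicted as p
    (pixels with ignore_label excluded). Returns (mIoU, mAcc) over classes."""
    tp = np.diag(conf).astype(float)
    union = conf.sum(axis=0) + conf.sum(axis=1) - tp
    per_class_iou = tp / np.maximum(union, 1e-10)
    per_class_acc = tp / np.maximum(conf.sum(axis=1), 1e-10)
    return per_class_iou.mean(), per_class_acc.mean()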

Citation

If you find the acosp/ code or trained models useful, please consider citing:

For the general training code, please also consider referencing hszhao/semseg.

Question

Some FAQs are collected in FAQ.md. You are welcome to send pull requests or give advice.
