PyTorch implementation of: Michieli U. and Zanuttigh P., "Continual Semantic Segmentation via Repulsion-Attraction of Sparse and Disentangled Latent Representations", CVPR 2021.

Overview

Continual Semantic Segmentation via Repulsion-Attraction of Sparse and Disentangled Latent Representations

This is the official PyTorch implementation of our work: "Continual Semantic Segmentation via Repulsion-Attraction of Sparse and Disentangled Latent Representations" published at CVPR 2021.

In this paper, we present some novel approaches to constrain the feature space of continual-learning semantic segmentation models. The evaluation on Pascal VOC2012 and on ADE20K validates our method.

Paper
5-min video
slides
poster
teaser

Requirements

This repository uses the following libraries:

  • Python (3.7.6)
  • Pytorch (1.4.0) [tested up to 1.7.1]
  • torchvision (0.5.0)
  • tensorboardX (2.0)
  • matplotlib (3.1.1)
  • numpy (1.18.1)
  • apex (0.1) [optional]
  • inplace-abn (1.0.7) [optional]

We also assume that the torch.distributed package is installed.

All the dependencies are listed in the requirements.txt file which can be used in conda as:
conda create --name <env> --file requirements.txt

How to download data

In this project we use two datasets, ADE20K and Pascal-VOC 2012. We provide the scripts to download them in 'data/download_<dataset_name>.sh'. Each script takes no arguments and should be run from the target directory (where you want the data to be downloaded).
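
As a sketch of the intended usage (<data_folder>, <path_to_repository> and <dataset_name> are placeholders for your own paths and the dataset to download; check the data/ folder of the repository for the exact script names):

cd <data_folder>
sh <path_to_repository>/data/download_<dataset_name>.sh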

How to perform training

The most important file is run.py, which is in charge of starting the training or test procedure. To run it, simply use the following command:

python -m torch.distributed.launch --nproc_per_node=<num_GPUs> run.py --data_root <data_folder> --name <exp_name> .. other args ..

By default, the backbone uses pretrained weights (the ones officially released with the PyTorch models), which are downloaded automatically. If you don't want to use pretrained weights, please use --no-pretrained.

There are many options; you can see them all by using the --help option. Some of them are discussed in the following:

  • please specify the data folder using: --data_root <data_root>
  • dataset: --dataset voc (Pascal-VOC 2012) | ade (ADE20K)
  • task: --task <task>, where tasks are
    • 15-5, 15-5s, 19-1 (VOC), 100-50, 100-10, 50, 100-50b, 100-10b, 50b (ADE, b indicates the order)
  • step (each step is run separately): --step <N>, where N is the step number, starting from 0
  • (only for Pascal-VOC) disjoint is the default setup; to enable the overlapped setup: --overlapped
  • learning rate: --lr 0.01 (for step 0) | 0.001 (for step > 0)
  • batch size: --batch_size 8 (Pascal-VOC 2012) | 4 (ADE20K)
  • epochs: --epochs 30 (Pascal-VOC 2012) | 60 (ADE20K)
  • method: --method <method name>, where names are
    • FT, LWF, LWF-MC, ILT, EWC, RW, PI, MIB, CIL, SDR
      Note that --method overrides the other parameters, but it can be used as a kickstart to load the default parameters of each method (see more on this in the hyperparameters section below)

For all the details, please refer to the information provided by the --help option.

Example training commands

We provide some example scripts in the *.slurm and *.bat files.
For instance, to run step 0 of the 19-1 task on VOC2012 you can run:

python -u -m torch.distributed.launch 1> 'outputs/19-1/output_19-1_step0.txt' 2>&1 \
--nproc_per_node=1 run.py \
--batch_size 8 \
--logdir logs/19-1/ \
--dataset voc \
--name FT \
--task 19-1 \
--step 0 \
--lr 0.001 \
--epochs 30 \
--debug \
--sample_num 10 \
--unce \
--loss_de_prototypes 1 \
--where_to_sim GPU_windows

Note: loss_de_prototypes is set to 1 only to have the prototypes computed at step 0 (no distillation is actually performed at this step, of course).

Then, step 1 of the same scenario can be run simply as:

python -u -m torch.distributed.launch 1> 'outputs/19-1/output_19-1_step1.txt'  2>&1 \
--nproc_per_node=1 run.py \
--batch_size 8 \
--logdir logs/19-1/ \
--dataset voc \
--task 19-1 \
--step 1 \
--lr 0.0001 \
--epochs 30 \
--debug \
--sample_num 10 \
--where_to_sim GPU_windows \
--method SDR \
--step_ckpt 'logs/19-1/19-1-voc_FT/19-1-voc_FT_0.pth'

The results obtained are reported inside the outputs/ and logs/ folders, which can be downloaded here; they are 0.4% mIoU higher than those reported in the main paper due to a slightly changed hyperparameter.

To run other approaches it is sufficient to change the --method parameter to one of the following: FT, LWF, LWF-MC, ILT, EWC, RW, PI, MIB, CIL, SDR.

Note: for the best results, the hyperparameters may need to be changed. Please see further details in the hyperparameters section below.

Once you have trained the model, you can see the results on tensorboard (we perform the test after the whole training) or in the output files. Alternatively, you can test it by using the same script and parameters, adding the option

--test

which will skip the whole training procedure and test the model on the test data.
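
For example, a sketch of how the step 1 model of the 19-1 scenario above could be evaluated (keep the same flags used at training time so that the right checkpoint and log folders are resolved; the exact set of required arguments may differ, see --help):

python -u -m torch.distributed.launch --nproc_per_node=1 run.py \
--batch_size 8 \
--logdir logs/19-1/ \
--dataset voc \
--task 19-1 \
--step 1 \
--method SDR \
--where_to_sim GPU_windows \
--test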

Do you want to try our constraints on your codebase or task?

If you want to try our novel constraints in your codebase or on a different problem, you can check the utils/loss.py file. There, you can take the definitions of the different losses and embed them into your codebase (a minimal usage sketch is given after the list below).
The variable names can be interpreted as follows:

  • targets: ground-truth segmentation map
  • outputs: segmentation map output by the current network
  • outputs_old: segmentation map output by the previous network
  • features: features taken from the end of the currently-trained encoder
  • features_old: features taken from the end of the previous encoder (used for distillation on the encoder in ILT, but not used in SDR)
  • prototypes: prototypical feature representations
  • incremental_step: index of the current incremental step (0 if the first, non-incremental, training is being performed)
  • classes_old: indices of the previous classes
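
As an illustration only, here is a minimal, self-contained sketch of the general idea rather than the repository's actual implementation (the real loss classes and their signatures are the ones defined in utils/loss.py, and the helper class_prototypes below is hypothetical). It computes per-class prototypes from the encoder features and combines a simple prototype-matching (attraction) term with a plain L1 sparsity term, weighted with the kick-start values suggested in the section below.

import torch
import torch.nn.functional as F

def class_prototypes(features, targets, num_classes):
    """Average the encoder features of each class to obtain per-class prototypes.

    features: (B, C, H, W) encoder output
    targets:  (B, H, W) ground-truth map downsampled to the feature resolution
    """
    C = features.shape[1]
    feats = features.permute(0, 2, 3, 1).reshape(-1, C)    # (B*H*W, C)
    labels = targets.reshape(-1)                            # (B*H*W,)
    protos = torch.zeros(num_classes, C, device=features.device)
    for c in labels.unique():
        protos[c] = feats[labels == c].mean(dim=0)
    return protos

# Toy usage with random tensors standing in for a real batch.
features = torch.randn(2, 16, 8, 8, requires_grad=True)    # current encoder output
targets = torch.randint(0, 3, (2, 8, 8))                    # ground truth at feature resolution
prototypes = torch.randn(3, 16)                             # prototypes stored at the previous step

protos_now = class_prototypes(features, targets, num_classes=3)

# Attraction: pull the current prototypes towards the stored ones (in this toy example all classes are "old").
loss_pm = F.mse_loss(protos_now, prototypes)
# Sparsification (simplified): favour few strongly-activated feature channels.
loss_fs = features.abs().mean()

loss = 1e-2 * loss_pm + 1e-4 * loss_fs
loss.backward()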

Range for the Hyper-parameters

Concerning the hyperparameters of our approach:

  • The parameter for the distillation loss lies in the same range as that of MiB,
  • Prototypes matching: lambda was searched in the range 1e-1 to 1e-3,
  • Contrastive learning (or clustering): lambda was searched in the range 1e-2 to 1e-3,
  • Features sparsification: lambda was searched in the range 1e-3 to 1e-5.

A kick-start could be to use KD 10, PM 1e-2, CL 1e-3 and FS 1e-4. The best parameters may vary across datasets and incremental setups; however, we typically performed a grid search and kept the values fixed across the learning steps.

So, writing all the parameters explicitly, the command would look something like the following:

python -u -m torch.distributed.launch 1> 'outputs/19-1/output_19-1_step1_custom.txt'  2>&1 \
--nproc_per_node=1 run.py \
--batch_size 8 \
--logdir logs/19-1/ \
--dataset voc \
--task 19-1 \
--step 1 \
--lr 0.0001 \
--epochs 30 \
--debug \
--sample_num 10 \
--where_to_sim GPU_windows \
--unce \
--loss_featspars $loss_featspars \
--lfs_normalization $lfs_normalization \
--lfs_shrinkingfn $lfs_shrinkingfn \
--lfs_loss_fn_touse $lfs_loss_fn_touse \
--loss_de_prototypes $loss_de_prototypes \
--loss_de_prototypes_sumafter \
--lfc_sep_clust $lfc_sep_clust \
--loss_fc $loss_fc \
--loss_kd $loss_kd \
--step_ckpt 'logs/19-1/19-1-voc_FT/19-1-voc_FT_0.pth'

Cite us

If you use this repository, please consider citing:

   @inProceedings{michieli2021continual,
   author = {Michieli, Umberto and Zanuttigh, Pietro},
   title  = {Continual Semantic Segmentation via Repulsion-Attraction of Sparse and Disentangled Latent Representations},
   booktitle = {Computer Vision and Pattern Recognition (CVPR)},
   year      = {2021},
   month     = {June}
   }

Please also consider citing our previous works ILT and its journal extension.

Acknowledgements

We gratefully acknowledge the authors of the MiB paper for the insightful discussions and for providing their open-source codebase, which has been the starting point for our work.
We also acknowledge the authors of CIL for providing their code even before the official release.

Owner
Multimedia Technology and Telecommunication Lab
Department of Information Engineering, University of Padova