Semi-Supervised Semantic Segmentation with Pixel-Level Contrastive Learning from a Class-wise Memory Bank

Overview

This repository provides the official code for replicating experiments from the paper Semi-Supervised Semantic Segmentation with Pixel-Level Contrastive Learning from a Class-wise Memory Bank, which has been accepted as an oral paper at the IEEE International Conference on Computer Vision (ICCV) 2021.

This code is based on the ClassMix codebase.

Prerequisites

  • CUDA/CUDNN
  • Python3
  • Packages found in requirements.txt
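
The packages can be installed, for example, with:

python3 -m pip install -r requirements.txt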

Contact

For any questions, please either open a GitHub issue or contact via email: [email protected]

Datasets

Create a data folder outside the code folder:

mkdir ../data/

Cityscapes

mkdir ../data/CityScapes/

Download the dataset from (Link).

Download the files named 'gtFine_trainvaltest.zip' and 'leftImg8bit_trainvaltest.zip' and extract them into ../data/CityScapes/
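
After extraction, and assuming the standard Cityscapes archive layout, the structure should look like this:

../data/CityScapes/gtFine/train/*/*.png
../data/CityScapes/gtFine/val/*/*.png
../data/CityScapes/leftImg8bit/train/*/*.png
../data/CityScapes/leftImg8bit/val/*/*.png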

Pascal VOC 2012

mkdir ../data/VOC2012/

Download the dataset from (Link).

Download the file 'training/validation data' under 'Development kit' and extract it into ../data/VOC2012/
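
Extracting the archive yields the standard devkit layout, roughly like this (verify against the paths expected by the data loader):

../data/VOC2012/VOCdevkit/VOC2012/JPEGImages/*.jpg
../data/VOC2012/VOCdevkit/VOC2012/SegmentationClass/*.png
../data/VOC2012/VOCdevkit/VOC2012/ImageSets/Segmentation/*.txt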

GTA5

mkdir ../data/GTA5/

Download the dataset from (Link). Unzip all the dataset parts to create a structure like this:

../data/GTA5/images/val/*.png
../data/GTA5/images/train/*.png
../data/GTA5/labels/val/*.png
../data/GTA5/labels/train/*.png

Then, convert the label images from color-encoded images to training IDs by executing:

python3 utils/translate_labels.py
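
The repo's utils/translate_labels.py is the authoritative conversion script; purely as an illustrative sketch (assumed palette subset, hypothetical helper names), the color-to-trainId mapping works roughly like this:

import glob
import numpy as np
from PIL import Image

# RGB color -> Cityscapes train ID (subset shown; 19 classes in total)
COLOR_TO_TRAIN_ID = {
    (128, 64, 128): 0,  # road
    (244, 35, 232): 1,  # sidewalk
    (70, 70, 70): 2,    # building
    # ... remaining classes omitted for brevity
}
IGNORE_LABEL = 250  # matches "ignore_label" in the configuration files

def translate(path):
    rgb = np.array(Image.open(path).convert("RGB"))
    # Start from the ignore label and fill in every known class color.
    out = np.full(rgb.shape[:2], IGNORE_LABEL, dtype=np.uint8)
    for color, train_id in COLOR_TO_TRAIN_ID.items():
        out[(rgb == color).all(axis=-1)] = train_id
    Image.fromarray(out).save(path)  # overwrite in place

for split in ("train", "val"):
    for p in glob.glob(f"../data/GTA5/labels/{split}/*.png"):
        translate(p)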

Experiments

Here are some examples for replicating the experiments from the paper. Implementation details are specified in the paper (Section 4.2); any modification could potentially affect the final result.

Semi-Supervised

Search here for the desired configuration:

ls ./configs/

For example, for this configuration:

  • Dataset: CityScapes
  • % of labels: 1/30
  • Pretrain: COCO
  • Split: 0
  • Network: Deeplabv2

Execute:

python3 trainSSL.py --config ./configs/configSSL_city_1_30_split0_COCO.json 

Another example, for this configuration:

  • Dataset: CityScapes
  • % of labels: 1/30
  • Pretrain: imagenet
  • Split: 0
  • Network: Deeplabv3+

Execute:

python3 trainSSL.py --config ./configs/configSSL_city_1_30_split0_v3.json 

For example, for this configuration:

  • Dataset: PASCAL VOC
  • % of labels: 1/50
  • Pretrain: COCO
  • Split: 0

Execute:

python3 trainSSL.py --config ./configs/configSSL_pascal_1_50_split0_COCO.json 

To replicate the paper experiments, just execute the training for the specific set-up you want to replicate. We already provide all the configuration files used in the paper. For modifying them, check this example with a detailed description of all the parameters in the configuration files:

Configuration File Description

{
  "model": "DeepLab", # Network architecture. Options: Deeplab
  "version": "2", # Version of the network architecture. Options: {2, 3} for deeplabv2 and deeplabv3+
  "dataset": "cityscapes", # Dataset to use. Options: {"cityscapes", "pascal"}

  "training": { 
    "batch_size": 5, # Batch size to use. Options: any integer
    "num_workers": 3, # Number of cpu workers (threads) to use for laoding the dataset. Options: any integer
    "optimizer": "SGD", # Optimizer to use. Options: {"SGD"}
    "momentum": 0.9, # momentum for SGD optimizer, Options: any float 
    "num_iterations": 100000, # Number of iterations to train. Options: any integer
    "learning_rate": 2e-4, # Learning rate. Options: any float
    "lr_schedule": "Poly", # decay scheduler for the learning rate. Options: {"Poly"}
    "lr_schedule_power": 0.9, # Power value for the Poly scheduler. Options: any float
    "pretraining": "COCO", # Pretraining to use. Options: {"COCO", "imagenet"}
    "weight_decay": 5e-4, # Weight decay. Options: any float
    "use_teacher_train": true, # Whether to use the teacher network to generate pseudolabels. Use student otherwise. Options: boolean. 
    "save_teacher_test": false, # Whether to save the teacher network as the model for testing. Use student otherwise. Options: boolean. 
    
    "data": {
      "split_id_list": 0, # Data splits to use. Options: {0, 1, 2} for pre-computed splits. N >2 for random splits
      "labeled_samples": 744, # Number of labeled samples to use for supervised learning. The rest will be use without labels. Options: any integer
      "input_size": "512,512" # Image crop size  Options: any integer tuple
    }

  },
  "seed": 5555, # seed for randomization. Options: any integer
  "ignore_label": 250, # ignore label value. Options: any integer

  "utils": {
    "save_checkpoint_every": 10000,  # The model will be saved every this number of iterations. Options: any integer
    "checkpoint_dir": "../saved/DeepLab", # Path to save the models. Options: any path
    "val_per_iter": 1000, # The model will be evaluated every this number of iterations. Options: any integer
    "save_best_model": true # Whether to use teacher model for generating the psuedolabels. The student model wil obe used otherwise. Options: boolean
  }
}

Memory Restrictions

All experiments have been run on an NVIDIA Tesla V100. To fit the training on a smaller GPU, try the following tips:

  • Reduce batch_size in the configuration file
  • Reduce input_size in the configuration file (a configuration-editing sketch follows this list)
  • Instead of using trainSSL.py, use trainSSL_less_memory.py, which optimizes labeled and unlabeled data in separate steps.
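
Note that the # comments in the configuration description above are for documentation only; the actual configuration files must be plain JSON. As an illustrative sketch (not a repo utility; the output file name below is hypothetical), a reduced configuration can be derived from an existing one like this:

import json

# Load an existing configuration and shrink the memory-heavy settings.
with open("./configs/configSSL_pascal_1_50_split0_COCO.json") as f:
    config = json.load(f)

config["training"]["batch_size"] = 8                  # smaller batches
config["training"]["data"]["input_size"] = "256,256"  # smaller crops

# Hypothetical output name; pass it to trainSSL_less_memory.py via --config.
with open("./configs/configSSL_pascal_1_50_my_reduced.json", "w") as f:
    json.dump(config, f, indent=2)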

For example, for this configuration:

  • Dataset: PASCAL VOC
  • % of labels: 1/50
  • Pretrain: COCO
  • Split: 0
  • Batch size: 8
  • Crop size: 256x256

Execute:

python3 trainSSL_less_memory.py --config ./configs/configSSL_pascal_1_50_split2_COCO_reduced.json 

Semi-Supervised Domain Adaptation

Experiments for domain adaptation from the GTA5 dataset to Cityscapes.

For example, for this configuration:

  • % of labels: 1/30
  • Pretrain: Imagenet
  • Split: 0

Execute:

python3 trainSSL_domain_adaptation_targetCity.py --config ./configs/configSSL_city_1_30_split0_imagenet.json 

Evaluation

The training code evaluates the model every certain number of iterations (set via the val_per_iter parameter in the configuration file).

The best evaluated model will be printed at the end of the training.

For every training, several weights will be saved under the path specified in the checkpoint_dir parameter of the configuration file.

One model will be saved every save_checkpoint_every iterations (see configuration file), plus the best evaluated model.

So, once the model has finished training, we already know its performance.

For a later evaluation, just execute the following command, specifying the model to evaluate in the --model-path argument:

python3 evaluateSSL.py --model-path ../saved/DeepLab/best.pth
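
Segmentation performance is reported as mean Intersection-over-Union (mIoU). Purely as an illustrative sketch (not the repo's evaluation code, assuming integer numpy label maps), mIoU can be computed from a confusion matrix like this:

import numpy as np

def mean_iou(preds, labels, num_classes, ignore_label=250):
    # Accumulate a confusion matrix over all valid (non-ignored) pixels.
    mask = labels != ignore_label
    p = preds[mask].astype(np.int64)
    l = labels[mask].astype(np.int64)
    cm = np.bincount(num_classes * l + p, minlength=num_classes ** 2)
    cm = cm.reshape(num_classes, num_classes)
    # Per-class IoU: true positives over the union of prediction and ground truth.
    tp = np.diag(cm)
    union = cm.sum(axis=0) + cm.sum(axis=1) - tp
    iou = tp / np.maximum(union, 1)
    return iou.mean()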

Citation

If you find this work useful, please consider citing:

@inproceedings{alonso2021semi,
  title={Semi-Supervised Semantic Segmentation with Pixel-Level Contrastive Learning from a Class-wise Memory Bank},
  author={Alonso, I{\~n}igo and Sabater, Alberto and Ferstl, David and Montesano, Luis and Murillo, Ana C},
  booktitle={Proceedings of the IEEE International Conference on Computer Vision},
  year={2021}
}

License

This code is released under the Apache 2.0 license. Please see the LICENSE file for more information.

Owner
Iñigo Alonso Ruiz
PhD student (University of Zaragoza)