Semi-Supervised Semantic Segmentation with Pixel-Level Contrastive Learning from a Class-wise Memory Bank

Overview

This repository provides the official code for replicating the experiments from the paper Semi-Supervised Semantic Segmentation with Pixel-Level Contrastive Learning from a Class-wise Memory Bank, which has been accepted as an oral paper at the IEEE International Conference on Computer Vision (ICCV) 2021.

This code is based on the ClassMix code.


Prerequisites

  • CUDA/CUDNN
  • Python3
  • Packages found in requirements.txt (see the install command below)
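
Assuming a standard pip setup, the packages can be installed with:

pip install -r requirements.txt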

Contact

If you have any questions, please either open a GitHub issue or contact via email: [email protected]

Datasets

Create a data folder outside the code folder:

mkdir ../data/

Cityscapes

mkdir ../data/CityScapes/

Download the dataset from (Link).

Download the files named 'gtFine_trainvaltest.zip' and 'leftImg8bit_trainvaltest.zip' and extract them in ../data/CityScapes/
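
After extraction, the expected directory structure is the standard Cityscapes layout (listed here as an assumption based on the official zip contents):

../data/CityScapes/leftImg8bit/train/*/*.png
../data/CityScapes/leftImg8bit/val/*/*.png
../data/CityScapes/gtFine/train/*/*.png
../data/CityScapes/gtFine/val/*/*.png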

Pascal VOC 2012

mkdir ../data/VOC2012/

Download the dataset from (Link).

Download the file 'training/validation data' under 'Development kit' and extract it in ../data/VOC2012/
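
Note that the official archive extracts into a nested development-kit layout; assuming the standard 'VOCtrainval_11-May-2012.tar' file, the images and segmentation masks should end up under paths like the following (adjust if the code expects a different nesting):

../data/VOC2012/VOCdevkit/VOC2012/JPEGImages/*.jpg
../data/VOC2012/VOCdevkit/VOC2012/SegmentationClass/*.png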

GTA5

mkdir ../data/GTA5/

Download the dataset from (Link). Unzip all the dataset parts to create a structure like this:

../data/GTA5/images/val/*.png
../data/GTA5/images/train/*.png
../data/GTA5/labels/val/*.png
../data/GTA5/labels/train/*.png

Then, reformat the label images from colored images to training IDs. For that, execute:

python3 utils/translate_labels.py
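
For reference, the conversion essentially maps each label color to its Cityscapes train ID. Below is a minimal, hypothetical sketch of that idea; the repository's utils/translate_labels.py is the authoritative implementation, this sketch assumes the standard Cityscapes 19-class palette, and translate_label with its arguments is illustrative only (the ignore value of 250 matches the ignore_label parameter described below):

import numpy as np
from PIL import Image

# Cityscapes 19-class palette: RGB color -> train ID
COLOR_TO_TRAIN_ID = {
    (128, 64, 128): 0,   # road
    (244, 35, 232): 1,   # sidewalk
    (70, 70, 70): 2,     # building
    (102, 102, 156): 3,  # wall
    (190, 153, 153): 4,  # fence
    (153, 153, 153): 5,  # pole
    (250, 170, 30): 6,   # traffic light
    (220, 220, 0): 7,    # traffic sign
    (107, 142, 35): 8,   # vegetation
    (152, 251, 152): 9,  # terrain
    (70, 130, 180): 10,  # sky
    (220, 20, 60): 11,   # person
    (255, 0, 0): 12,     # rider
    (0, 0, 142): 13,     # car
    (0, 0, 70): 14,      # truck
    (0, 60, 100): 15,    # bus
    (0, 80, 100): 16,    # train
    (0, 0, 230): 17,     # motorcycle
    (119, 11, 32): 18,   # bicycle
}

# Hypothetical sketch; see utils/translate_labels.py for the actual conversion.
def translate_label(in_path, out_path, ignore_label=250):
    rgb = np.array(Image.open(in_path).convert("RGB"))
    # Start from the ignore value, then fill in every known class color.
    ids = np.full(rgb.shape[:2], ignore_label, dtype=np.uint8)
    for color, train_id in COLOR_TO_TRAIN_ID.items():
        ids[(rgb == color).all(axis=-1)] = train_id
    Image.fromarray(ids).save(out_path)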

Experiments

Here are some examples of how to replicate the experiments from the paper. Implementation details are specified in the paper (Section 4.2); any modification could potentially affect the final results.

Semi-Supervised

Search here for the desired configuration:

ls ./configs/

For example, for this configuration:

  • Dataset: CityScapes
  • % of labels: 1/30
  • Pretrain: COCO
  • Split: 0
  • Network: Deeplabv2

Execute:

python3 trainSSL.py --config ./configs/configSSL_city_1_30_split0_COCO.json 

Another example, for this configuration:

  • Dataset: CityScapes
  • % of labels: 1/30
  • Pretrain: imagenet
  • Split: 0
  • Network: Deeplabv3+

Execute:

python3 trainSSL.py --config ./configs/configSSL_city_1_30_split0_v3.json 

For example, for this configuration:

  • Dataset: PASCAL VOC
  • % of labels: 1/50
  • Pretrain: COCO
  • Split: 0

Execute:

python3 trainSSL.py --config ./configs/configSSL_pascal_1_50_split0_COCO.json 

To replicate the paper experiments, just execute the training for the specific set-up you want to replicate. We already provide all the configuration files used in the paper. For modifying them, a detailed description of all the parameters in the configuration files is given in the example below:

Configuration File Description

{
  "model": "DeepLab", # Network architecture. Options: Deeplab
  "version": "2", # Version of the network architecture. Options: {2, 3} for deeplabv2 and deeplabv3+
  "dataset": "cityscapes", # Dataset to use. Options: {"cityscapes", "pascal"}

  "training": { 
    "batch_size": 5, # Batch size to use. Options: any integer
    "num_workers": 3, # Number of cpu workers (threads) to use for laoding the dataset. Options: any integer
    "optimizer": "SGD", # Optimizer to use. Options: {"SGD"}
    "momentum": 0.9, # momentum for SGD optimizer, Options: any float 
    "num_iterations": 100000, # Number of iterations to train. Options: any integer
    "learning_rate": 2e-4, # Learning rate. Options: any float
    "lr_schedule": "Poly", # decay scheduler for the learning rate. Options: {"Poly"}
    "lr_schedule_power": 0.9, # Power value for the Poly scheduler. Options: any float
    "pretraining": "COCO", # Pretraining to use. Options: {"COCO", "imagenet"}
    "weight_decay": 5e-4, # Weight decay. Options: any float
    "use_teacher_train": true, # Whether to use the teacher network to generate pseudolabels. Use student otherwise. Options: boolean. 
    "save_teacher_test": false, # Whether to save the teacher network as the model for testing. Use student otherwise. Options: boolean. 
    
    "data": {
      "split_id_list": 0, # Data splits to use. Options: {0, 1, 2} for pre-computed splits. N >2 for random splits
      "labeled_samples": 744, # Number of labeled samples to use for supervised learning. The rest will be use without labels. Options: any integer
      "input_size": "512,512" # Image crop size  Options: any integer tuple
    }

  },
  "seed": 5555, # seed for randomization. Options: any integer
  "ignore_label": 250, # ignore label value. Options: any integer

  "utils": {
    "save_checkpoint_every": 10000,  # The model will be saved every this number of iterations. Options: any integer
    "checkpoint_dir": "../saved/DeepLab", # Path to save the models. Options: any path
    "val_per_iter": 1000, # The model will be evaluated every this number of iterations. Options: any integer
    "save_best_model": true # Whether to use teacher model for generating the psuedolabels. The student model wil obe used otherwise. Options: boolean
  }
}
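
Note that the inline # comments above are for documentation only; the shipped configuration files are plain JSON. As a hedged sketch of deriving a custom configuration programmatically (the input file name comes from the examples above; the output name is hypothetical):

import json

# Load one of the provided configuration files.
with open("./configs/configSSL_city_1_30_split0_COCO.json") as f:
    config = json.load(f)

# Shrink the memory footprint (see "Memory Restrictions" below).
config["training"]["batch_size"] = 2
config["training"]["data"]["input_size"] = "256,256"

# Write the modified configuration under a new (hypothetical) name.
with open("./configs/configSSL_city_custom.json", "w") as f:
    json.dump(config, f, indent=2)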

Memory Restrictions

All experiments have been run on an NVIDIA Tesla V100. To fit the training into a smaller GPU, try the following tips:

  • Reduce batch_size from the configuration file
  • Reduce input_size from the configuration file
  • Instead of using trainSSL.py, use trainSSL_less_memory.py, which optimizes over the labeled and unlabeled data in separate steps.

For example, for this configuration:

  • Dataset: PASCAL VOC
  • % of labels: 1/50
  • Pretrain: COCO
  • Split: 0
  • Batch size: 8
  • Crop size: 256x256

Execute:

python3 trainSSL_less_memory.py --config ./configs/configSSL_pascal_1_50_split2_COCO_reduced.json

Semi-Supervised Domain Adaptation

Experiments for domain adaptation from the GTA5 dataset to Cityscapes.

For example, for this configuration:

  • % of labels: 1/30
  • Pretrain: Imagenet
  • Split: 0

Execute:

python3 trainSSL_domain_adaptation_targetCity.py --config ./configs/configSSL_city_1_30_split0_imagenet.json 

Evaluation

The training code will evaluate the model being trained every certain number of iterations (modify the parameter val_per_iter in the configuration file).

The best evaluated model will be printed at the end of the training.

For every training, several weights will be saved under the path specified in the parameter checkpoint_dir of the configuration file.

One model will be saved every save_checkpoint_every iterations (see configuration file), plus the best evaluated model.

So, once the model has finished training, we already know its performance.

For a later evaluation, just execute the following command, specifying the model to evaluate with the model-path argument:

python3 evaluateSSL.py --model-path ../saved/DeepLab/best.pth
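
To evaluate several saved checkpoints in one go, here is a minimal sketch, assuming the checkpoints are saved as .pth files under the checkpoint_dir shown above:

import glob
import subprocess

# Run evaluateSSL.py on every checkpoint found under checkpoint_dir
# (assumes ../saved/DeepLab from the configuration example above).
for ckpt in sorted(glob.glob("../saved/DeepLab/*.pth")):
    subprocess.run(["python3", "evaluateSSL.py", "--model-path", ckpt], check=True)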

Citation

If you find this work useful, please consider citing:

@inproceedings{alonso2021semi,
  title={Semi-Supervised Semantic Segmentation with Pixel-Level Contrastive Learning from a Class-wise Memory Bank},
  author={Alonso, I{\~n}igo and Sabater, Alberto and Ferstl, David and Montesano, Luis and Murillo, Ana C},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
  year={2021}
}

License

This code is released under the Apache 2.0 license. Please see the LICENSE file for more information.

Owner
Iñigo Alonso Ruiz
PhD student (University of Zaragoza)