Multi-Target Adversarial Frameworks for Domain Adaptation in Semantic Segmentation

Paper

Multi-Target Adversarial Frameworks for Domain Adaptation in Semantic Segmentation
Antoine Saporta, Tuan-Hung Vu, Matthieu Cord, Patrick Pérez
valeo.ai, France
IEEE International Conference on Computer Vision (ICCV), 2021 (Poster)

If you find this code useful for your research, please cite our paper:

@inproceedings{saporta2021mtaf,
  title={Multi-Target Adversarial Frameworks for Domain Adaptation in Semantic Segmentation},
  author={Saporta, Antoine and Vu, Tuan-Hung and Cord, Matthieu and P{\'e}rez, Patrick},
  booktitle={ICCV},
  year={2021}
}

Abstract

In this work, we address the task of unsupervised domain adaptation (UDA) for semantic segmentation in the presence of multiple target domains: the objective is to train a single model that can handle all these domains at test time. Such a multi-target adaptation is crucial for a variety of scenarios that real-world autonomous systems must handle. It is a challenging setup, since one faces not only the domain gap between the labeled source set and the unlabeled target set, but also the distribution shifts existing within the latter among the different target domains. To this end, we introduce two adversarial frameworks: (i) multi-discriminator, which explicitly aligns each target domain to its counterparts, and (ii) multi-target knowledge transfer, which learns a target-agnostic model thanks to a multi-teacher/single-student distillation mechanism. The evaluation is done on four newly-proposed multi-target benchmarks for UDA in semantic segmentation. In all tested scenarios, our approaches consistently outperform baselines, setting competitive standards for the novel task.
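To make the alignment idea concrete, below is a minimal sketch of the output-space adversarial training that both frameworks build on; the class and function names are illustrative assumptions, not the repository's actual API.

import torch
import torch.nn as nn
import torch.nn.functional as F

class FCDiscriminator(nn.Module):
    # Hypothetical fully-convolutional domain discriminator over the
    # softmax segmentation output (illustrative, not MTAF's exact model).
    def __init__(self, num_classes, ndf=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(num_classes, ndf, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(ndf, ndf * 2, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(ndf * 2, 1, 4, stride=2, padding=1),  # per-location domain logit
        )

    def forward(self, x):
        return self.net(x)

def adversarial_loss(target_seg_logits, discriminator, source_label=1.0):
    # Train the segmenter so that its target-domain predictions fool the
    # discriminator into labeling them as "source".
    probs = F.softmax(target_seg_logits, dim=1)
    domain_logits = discriminator(probs)
    labels = torch.full_like(domain_logits, source_label)
    return F.binary_cross_entropy_with_logits(domain_logits, labels)

In the multi-discriminator framework, one such discriminator is kept per target domain; multi-target knowledge transfer instead distills target-specific classifiers into a single target-agnostic one.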

Preparation

Pre-requisites

  • Python 3.7
  • PyTorch >= 0.4.1
  • CUDA 9.0 or higher

Installation

  1. Clone the repo:
$ git clone https://github.com/valeoai/MTAF
$ cd MTAF
  2. Install OpenCV if you don't already have it:
$ conda install -c menpo opencv
  3. Install NVIDIA Apex if you don't already have it, following the instructions at https://github.com/NVIDIA/apex

  4. Install this repository and the dependencies using pip:

$ pip install -e <root_dir>

With this, you can edit the MTAF code on the fly and import functions and classes of MTAF in other projects as well.
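For example, after the editable install the package should be importable from any project; a minimal check, assuming the top-level module is named mtaf as the repository layout suggests:

import mtaf  # hypothetical top-level import, per the <root_dir>/mtaf layout
print(mtaf.__file__)  # resolves into the cloned repo thanks to pip install -e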

  5. (Optional) To uninstall this package, run:
$ pip uninstall MTAF

Datasets

By default, the datasets are put in <root_dir>/data. We use symlinks to hook the MTAF codebase to the datasets. An alternative option is to explicitly specify the parameters DATA_DIRECTORY_SOURCE and DATA_DIRECTORY_TARGET in the YML configuration files.
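For instance, assuming your datasets are stored under /datasets, the symlinks could be created as follows (the source paths are placeholders):

$ mkdir -p <root_dir>/data
$ ln -s /datasets/GTA5 <root_dir>/data/GTA5
$ ln -s /datasets/cityscapes <root_dir>/data/cityscapes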

  • GTA5: Please follow the instructions here to download images and semantic segmentation annotations. The GTA5 dataset directory should have this basic structure:
<root_dir>/data/GTA5/                               % GTA dataset root
<root_dir>/data/GTA5/images/                        % GTA images
<root_dir>/data/GTA5/labels/                        % Semantic segmentation labels
...
  • Cityscapes: Please follow the instructions in Cityscapes to download the images and ground-truths. The Cityscapes dataset directory should have this basic structure:
<root_dir>/data/cityscapes/                         % Cityscapes dataset root
<root_dir>/data/cityscapes/leftImg8bit              % Cityscapes images
<root_dir>/data/cityscapes/leftImg8bit/train
<root_dir>/data/cityscapes/leftImg8bit/val
<root_dir>/data/cityscapes/gtFine                   % Semantic segmentation labels
<root_dir>/data/cityscapes/gtFine/train
<root_dir>/data/cityscapes/gtFine/val
...
  • Mapillary: Please follow the instructions in Mapillary Vistas to download the images and validation ground-truths. The Mapillary Vistas dataset directory should have this basic structure:
<root_dir>/data/mapillary/                          % Mapillary dataset root
<root_dir>/data/mapillary/train                     % Mapillary train set
<root_dir>/data/mapillary/train/images
<root_dir>/data/mapillary/validation                % Mapillary validation set
<root_dir>/data/mapillary/validation/images
<root_dir>/data/mapillary/validation/labels
...
  • IDD: Please follow the instructions in IDD to download the images and validation ground-truths. The IDD Segmentation dataset directory should have this basic structure:
<root_dir>/data/IDD/                         % IDD dataset root
<root_dir>/data/IDD/leftImg8bit              % IDD images
<root_dir>/data/IDD/leftImg8bit/train
<root_dir>/data/IDD/leftImg8bit/val
<root_dir>/data/IDD/gtFine                   % Semantic segmentation labels
<root_dir>/data/IDD/gtFine/val
...

Pre-trained models

Pre-trained models can be downloaded here and put in <root_dir>/pretrained_models.

Running the code

For evaluation, execute:

$ cd <root_dir>/mtaf/scripts
$ python test.py --cfg ./configs/gta2cityscapes_mapillary_baseline_pretrained.yml
$ python test.py --cfg ./configs/gta2cityscapes_mapillary_mdis_pretrained.yml
$ python test.py --cfg ./configs/gta2cityscapes_mapillary_mtkt_pretrained.yml

Training

For the experiments reported in the paper, we used PyTorch 1.3.1 and CUDA 10.0. To aid reproducibility, the random seed is fixed in the code. Still, you may need to train a few times to reach comparable performance.
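For reference, fixing the seed for (mostly) deterministic PyTorch training typically looks like the generic sketch below; this is not necessarily the exact code used in this repository:

import random
import numpy as np
import torch

def fix_seed(seed=0):
    # Seed every source of randomness used during training.
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    # cuDNN autotuning trades reproducibility for speed; disable it.
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False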

By default, logs and snapshots are stored in <root_dir>/experiments with this structure:

<root_dir>/experiments/logs
<root_dir>/experiments/snapshots

To train the multi-target baseline:

$ cd <root_dir>/mtaf/scripts
$ python train.py --cfg ./configs/gta2cityscapes_mapillary_baseline.yml

To train the Multi-Discriminator framework:

$ cd <root_dir>/mtaf/scripts
$ python train.py --cfg ./configs/gta2cityscapes_mapillary_mdis.yml

To train the Multi-Target Knowledge Transfer framework:

$ cd <root_dir>/mtaf/scripts
$ python train.py --cfg ./configs/gta2cityscapes_mapillary_mtkt.yml

Testing

To test the multi-target baseline:

$ cd <root_dir>/mtaf/scripts
$ python test.py --cfg ./configs/gta2cityscapes_mapillary_baseline.yml

To test the Multi-Discriminator framework:

$ cd <root_dir>/mtaf/scripts
$ python test.py --cfg ./configs/gta2cityscapes_mapillary_mdis.yml

To test the Multi-Target Knowledge Transfer framework:

$ cd <root_dir>/mtaf/scripts
$ python test.py --cfg ./configs/gta2cityscapes_mapillary_mtkt.yml

Acknowledgements

This codebase is heavily borrowed from ADVENT.

License

MTAF is released under the Apache 2.0 license.

Comments
  • question about adversarial training code in train_UDA.py

    Thank you for sharing the code for your excellent work. I have some basic questions about your implementation:

        pred_trg_main = interp_target(all_pred_trg_main[i+1]) ## what does [i+1] mean?
        pred_trg_main_list.append(pred_trg_main)
        pred_trg_target = interp_target(all_pred_trg_main[0]) ## what does [0] mean?
        pred_trg_target_list.append(pred_trg_target)

    In train_UDA.py, lines 829-836, why do we use the indices [i+1] and [0], and what do they mean? Also, where is the target-agnostic classifier defined in your code?

    Thanks again and look forward to hearing back from you!

    opened by yuzhang03 2
  • the problem for training loss

    Thanks again for this enlightening work.

    I trained the Mdis method with one source and one target, but I am confused by the losses, which I plotted with TensorBoard. As I understand it, the adversarial loss should go down while the discriminator loss goes up; in the plots below, however, both losses just oscillate around a constant value. What is going wrong?

    Besides, I would expect training with one source and one target to give better results than training with one source and multiple targets, but in my training I don't get good results.

    I would sincerely appreciate your thoughts.

    My training config:

        adv loss weight: 0.5
        adv learning rate: 1e-5
        seg learning rate: 1.25e-5

    [plot: adversarial loss of one source and one target]

    [plot: discriminator loss of one source and one target]

    opened by slz929 2
  • problem for training data

    Thanks for this enlightening and practical work on multi-target DA! I have read your paper and noticed that the source dataset and the three target datasets have unequal sizes. Does the amount of data in each domain matter, and what is an appropriate amount of training data for MTKT? Another question: why is the KL loss used for knowledge transfer? If I want to train a word embedding instead of a segmentation map, is the KL loss still appropriate, or is there a better alternative?
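    For context, a pixel-wise KL distillation term between a teacher and a student segmentation head can be sketched in PyTorch as below; the variable names are illustrative, not the repository's exact code:

        import torch.nn.functional as F

        def distillation_loss(student_logits, teacher_logits):
            # KL(teacher || student) over the class dimension: F.kl_div takes
            # log-probabilities as input and probabilities as target.
            student_log_probs = F.log_softmax(student_logits, dim=1)
            teacher_probs = F.softmax(teacher_logits, dim=1)
            return F.kl_div(student_log_probs, teacher_probs, reduction='batchmean')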

    opened by slz929 2
  • About the generation of segmentation color maps

    Thanks for the great research!

    I have a question though: the mIoU you report in your paper is for 7 classes, but the segmentation color map in the qualitative analysis seems to use the 19 classes common in domain-adaptive semantic segmentation.

    In other words, how can a model trained on 7 classes be used to generate a 19-class segmentation color map? Or is my understanding wrong?

    I look forward to your response.

    Thank you!

    opened by liwei1101 1
  • About labels of IDD dataset

    Hello! @SportaXD Thank you for your great work!

    I was reproducing the code and noticed that the labels in the IDD dataset come as JSON files rather than as segmentation masks.

    How is this problem solved?

    opened by liwei1101 1
  • About MTKT code

    In train_UDA.py, line 758:

            d_main_list[i] = d_main
            optimizer_d_main_list.append(optimizer_d_main)
            d_aux_list[i] = d_aux
            optimizer_d_aux_list.append(optimizer_d_aux)
    

    If this is done (d_main_list[i] = d_main and d_aux_list[i] = d_aux), all the discriminators in the list end up being the same one. Shouldn't there be one discriminator for each classifier?
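    For comparison, a per-target setup would typically construct one fresh discriminator (and its optimizer) inside the loop, along these lines — an illustrative sketch in which num_targets and the learning rate are assumptions, not the repository's exact code:

        import torch

        d_main_list, optimizer_d_main_list = [], []
        for i in range(num_targets):
            d_main = get_fc_discriminator(num_classes=num_classes)  # new instance per target
            d_main.train()
            d_main.to(device)
            d_main_list.append(d_main)
            optimizer_d_main_list.append(
                torch.optim.Adam(d_main.parameters(), lr=1e-4, betas=(0.9, 0.99)))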

    opened by liwei1101 1
  • About 'the multi-target baseline'

    Thank you for sharing the code for your excellent work. I have some basic questions about your implementation.

    d_main = get_fc_discriminator(num_classes=num_classes)
    d_main.train()
    d_main.to(device)
    d_aux = get_fc_discriminator(num_classes=num_classes)
    d_aux.train()
    d_aux.to(device)
    

    Can you tell me why the multi-target baseline code uses only a single discriminator rather than multiple discriminators? It looks like a single-domain approach. Thanks!

    opened by liwei1101 1
  • about eval_UDA.py

    Thanks for sharing your codes.

    I was impressed with your good research.

    Could you explain why the output map is not resized to the target size (cfg.TEST.OUTPUT_SIZE_TARGET) for the Mapillary dataset in line 57 of eval_UDA.py?

    When I tested the trained model on the Mapillary dataset, inference took a long time because of the large resolution.
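    For reference, downscaling the logits to a fixed evaluation size before the per-pixel argmax is usually a one-liner; the sketch below is generic, not the repository's evaluation code:

        import torch.nn.functional as F

        def resize_and_label(logits, size):
            # logits: (N, C, h, w); size: (H, W), e.g. cfg.TEST.OUTPUT_SIZE_TARGET.
            logits = F.interpolate(logits, size=size, mode='bilinear', align_corners=True)
            return logits.argmax(dim=1)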

    I'm looking forward to hearing from you.

    Thank you!

    opened by jdg900 1
  • modifying info7class.json and train_UDA.py

    We have found a small bug in "./MTAF/mtaf/dataset/cityscapes_list/info7class.json".

    The configuration file should list 7 classes rather than 19. This shows up at the evaluation stage, where the results are printed with the mIoU metrics and the names of the 7 classes.

    Also, there is a typo in the comments.

    opened by mohamedelmesawy 1
  • Running MTAF on a slightly different setup

    Hello, thanks for sharing the code and for such a good contribution. I would like to run your method on a slightly different setup, specifically adapting from Cityscapes to BDD and Mapillary. I have seen that the code accepts Cityscapes as both source and target, so that shouldn't be a problem, and I have added a dataloader for BDD as target 1.

    To get the best performance, do I need to train the baseline first and then train MTKT or MDIS with the baseline loaded as a pretrained model, or do I get the best performance by running the MTKT or MDIS training script directly, without the baseline?

    opened by fabriziojpiva 1