Dual Path Learning for Domain Adaptation of Semantic Segmentation

Official PyTorch implementation of "Dual Path Learning for Domain Adaptation of Semantic Segmentation".

Accepted by ICCV 2021. Paper

Requirements

  • Python 3.6
  • torch==1.5
  • torchvision==0.6
  • Pillow==7.1.2
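
A minimal environment setup with pip might look like the following (versions as listed above; the exact torch build for your CUDA version may differ):

    pip install torch==1.5.0 torchvision==0.6.0 Pillow==7.1.2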

Dataset Preparations

For the GTA5->Cityscapes scenario, download the GTA5 dataset and the Cityscapes dataset.

For further evaluation on the SYNTHIA->Cityscapes scenario, also download the SYNTHIA dataset.

The folder should be structured as:

DPL
├── DPL_master/
├── CycleGAN_DPL/
└── data/
    ├── Cityscapes/
    │   └── data/
    │       ├── gtFine/
    │       └── leftImg8bit/
    ├── GTA5/
    │   ├── images/
    │   ├── labels/
    │   └── ...
    └── synthia/
        ├── RGB/
        ├── GT/
        ├── Depth/
        └── ...
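
If the datasets are already downloaded elsewhere, one way to obtain this layout is to symlink them into place; the /path/to/... locations below are placeholders for your own download directories:

    mkdir -p data/Cityscapes
    ln -s /path/to/cityscapes data/Cityscapes/data   # must contain gtFine/ and leftImg8bit/
    ln -s /path/to/GTA5 data/GTA5                    # must contain images/ and labels/
    ln -s /path/to/SYNTHIA data/synthia              # must contain RGB/, GT/ and Depth/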

Evaluation

Download the pre-trained models from Pretrained_Resnet_GTA5 [Google_Drive, BaiduYun(Code:t7t8)] and save the unzipped models in ./DPL_master/DPL_pretrained. Download the translated target images from DPI2I_City2GTA_Resnet [Google_Drive, BaiduYun(Code:cf5a)] and save the unzipped images in ./DPL_master/DPI2I_images/DPI2I_City2GTA_Resnet/val. Then you can evaluate DPL and DPL-Dual as follows:
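
After unzipping, the files should end up roughly as follows (layout inferred from the paths and checkpoint names used in the evaluation commands below):

    DPL_master/
    ├── DPL_pretrained/
    │   ├── Resnet_GTA5_DPLst4_S.pth
    │   └── Resnet_GTA5_DPLst4_T.pth
    └── DPI2I_images/
        └── DPI2I_City2GTA_Resnet/
            └── val/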

  • Evaluation of DPL
    cd DPL_master
    python evaluation.py --init-weights ./DPL_pretrained/Resnet_GTA5_DPLst4_T.pth --save path_to_DPL_results/results --log-dir path_to_DPL_results
    
  • Evaluation of DPL-Dual
    python evaluation_DPL.py --data-dir-targetB ./DPI2I_images/DPI2I_City2GTA_Resnet --init-weights_S ./DPL_pretrained/Resnet_GTA5_DPLst4_S.pth --init-weights_T ./DPL_pretrained/Resnet_GTA5_DPLst4_T.pth --save path_to_DPL_dual_results/results --log-dir path_to_DPL_dual_results
    

More pretrained models and translated target images for other settings are also available for download.

Training

The training process of DPL consists of two phases: single-path warm-up and DPL training. The training example below uses the default setting: GTA5->Cityscapes with DeepLab-V2 (ResNet-101).

Quick start for DPL training

Download the pretrained source-path and target-path models [Google_Drive, BaiduYun(Code: 3ndm)], save the source-path model to path_to_model_S and the target-path model to path_to_model_T, then you can train DPL as follows:

  1. Train the dual path image generation module.

    cd ../CycleGAN_DPL
    python train.py --dataroot ../data --name dual_path_I2I --A_setroot GTA5/images --B_setroot Cityscapes/leftImg8bit/train --model cycle_diff --lambda_semantic 1 --init_weights_S path_to_model_S --init_weights_T path_to_model_T
    
  2. Generate transferred images with the dual path image generation module.

    • Generate transferred GTA5->Cityscapes images.
    python test.py --name dual_path_I2I --no_dropout --load_size 1024 --crop_size 1024 --preprocess scale_width --dataroot ../data/GTA5/images --model_suffix A  --results_dir DPI2I_path_to_GTA52cityscapes
    
    • Generate transferred Cityscapes->GTA5 images.
     python test.py --name dual_path_I2I --no_dropout --load_size 1024 --crop_size 1024 --preprocess scale_width --dataroot ../data/Cityscapes/leftImg8bit/train --model_suffix B  --results_dir DPI2I_path_to_cityscapes2GTA5/train
     
     python test.py --name dual_path_I2I --no_dropout --load_size 1024 --crop_size 1024 --preprocess scale_width --dataroot ../data/Cityscapes/leftImg8bit/val --model_suffix B  --results_dir DPI2I_path_to_cityscapes2GTA5/val
    
  3. Train the dual path adaptive segmentation module.

    3.1. Generate dual path pseudo labels.

    cd ../DPL_master
    python DP_SSL.py --save path_to_dual_pseudo_label_stepi --init-weights_S path_to_model_S --init-weights_T path_to_model_T --thresh 0.9 --threshlen 0.3 --data-list-target ./dataset/cityscapes_list/train.txt --set train --data-dir-targetB DPI2I_path_to_cityscapes2GTA5 --alpha 0.5
    

    3.2. Train the source-path model and the target-path model with the dual path pseudo labels, respectively.

    python DPL.py --snapshot-dir snapshots/DPL_modelS_step_i --data-dir-target DPI2I_path_to_cityscapes2GTA5 --data-label-folder-target path_to_dual_pseudo_label_stepi --init-weights path_to_model_S --domain S
    
    python DPL.py --snapshot-dir snapshots/DPL_modelT_step_i --data-dir DPI2I_path_to_GTA52cityscapes --data-label-folder-target path_to_dual_pseudo_label_stepi --init-weights path_to_model_T
    

    3.3. Update path_to_model_S with the path to the best source-path model, update path_to_model_T with the path to the best target-path model, adjust the parameter threshlen to 0.25, then repeat 3.1-3.2 for 3 more rounds. A sketch of how these rounds could be scripted is given below.
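
    The sketch below is illustrative only: the flags are taken from the commands above, but the loop structure, shell variable names, and the best_model.pth checkpoint file names are assumptions that you will need to adapt (selecting the best checkpoint per round remains a manual step).

    # Illustrative sketch of the additional self-training rounds (adapt before use).
    MODEL_S=path_to_model_S   # best source-path checkpoint from the previous round
    MODEL_T=path_to_model_T   # best target-path checkpoint from the previous round
    for i in 2 3 4; do
        python DP_SSL.py --save path_to_dual_pseudo_label_step${i} \
            --init-weights_S ${MODEL_S} --init-weights_T ${MODEL_T} \
            --thresh 0.9 --threshlen 0.25 \
            --data-list-target ./dataset/cityscapes_list/train.txt --set train \
            --data-dir-targetB DPI2I_path_to_cityscapes2GTA5 --alpha 0.5
        python DPL.py --snapshot-dir snapshots/DPL_modelS_step_${i} \
            --data-dir-target DPI2I_path_to_cityscapes2GTA5 \
            --data-label-folder-target path_to_dual_pseudo_label_step${i} \
            --init-weights ${MODEL_S} --domain S
        python DPL.py --snapshot-dir snapshots/DPL_modelT_step_${i} \
            --data-dir DPI2I_path_to_GTA52cityscapes \
            --data-label-folder-target path_to_dual_pseudo_label_step${i} \
            --init-weights ${MODEL_T}
        # Pick the best checkpoints from the snapshot directories before the next round
        # (best_model.pth is a hypothetical placeholder file name):
        MODEL_S=snapshots/DPL_modelS_step_${i}/best_model.pth
        MODEL_T=snapshots/DPL_modelT_step_${i}/best_model.pth
    done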

Single Path Warm-up

If you want to train DPL from the very beginning, a training example of single-path warm-up is also provided below:

Download the source-only pretrained model Source_only [Google_Drive, BaiduYun(Code:fjdw)], which is trained with the labeled source dataset.

  1. Train the original CycleGAN (without Dual Path Image Translation).

    cd CycleGAN_DPL
    python train.py --dataroot ../data --name ori_cycle --A_setroot GTA5/images --B_setroot Cityscapes/leftImg8bit/train --model cycle_diff --lambda_semantic 0
    
  2. Generate transferred GTA5->Cityscapes images with the original CycleGAN.

    python test.py --name ori_cycle --no_dropout --load_size 1024 --crop_size 1024 --preprocess scale_width --dataroot ../data/GTA5/images --model_suffix A  --results_dir path_to_ori_cycle_GTA52cityscapes
    
  3. Before warm-up, pretrain the target-path model without SSL and keep the best checkpoint in path_to_pretrained_T:

    cd ../DPL_master
    python DPL.py --snapshot-dir snapshots/pretrain_T --init-weights path_to_initialization_S --data-dir path_to_ori_cycle_GTA52cityscapes
    
  4. Warm up the target-path model.

    4.1. Generate labels on the source dataset with label correction.

    python SSL_source.py --set train --data-dir path_to_ori_cycle_GTA52cityscapes --init-weights path_to_pretrained_T --threshdelta 0.3 --thresh 0.9 --threshlen 0.65 --save path_to_corrected_label_step1_or_step2 
    

    4.2. Generate pseudo labels on the target dataset.

    python SSL.py --set train --data-list-target ./dataset/cityscapes_list/train.txt --init-weights path_to_pretrained_T  --thresh 0.9 --threshlen 0.65 --save path_to_pseudo_label_step1_or_step2 
    

    4.3. Train the target-path model with label correction.

    python DPL.py --snapshot-dir snapshots/label_corr_step1_or_step2 --data-dir path_to_ori_cycle_GTA52cityscapes --source-ssl True --source-label-dir path_to_corrected_label_step1_or_step2 --data-label-folder-target path_to_pseudo_label_step1_or_step2 --init-weights path_to_pretrained_T          
    

    4.4. Update path_to_pretrained_T with the path to the best model from 4.3, then repeat 4.1-4.3 for one more round.

More Experiments

  • For the SYNTHIA->Cityscapes scenario, train DPL with "--source synthia" and change the data paths accordingly.
  • For training on FCN-8s with VGG16, train DPL with "--model VGG". Example invocations are sketched below.
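
The examples below show where these flags could be added to the DPL.py command from step 3.2. They are sketches only: snapshots/DPL_modelT_synthia_step_i, snapshots/DPL_VGG_modelT_step_i, DPI2I_path_to_synthia2cityscapes, and path_to_VGG_model_T are placeholders introduced here, not names used elsewhere in this README.

    # SYNTHIA -> Cityscapes: add --source synthia and point the data paths at the SYNTHIA-derived data.
    python DPL.py --snapshot-dir snapshots/DPL_modelT_synthia_step_i --data-dir DPI2I_path_to_synthia2cityscapes --data-label-folder-target path_to_dual_pseudo_label_stepi --init-weights path_to_model_T --source synthia

    # FCN-8s with VGG16: add --model VGG and initialize with a matching VGG checkpoint.
    python DPL.py --snapshot-dir snapshots/DPL_VGG_modelT_step_i --data-dir DPI2I_path_to_GTA52cityscapes --data-label-folder-target path_to_dual_pseudo_label_stepi --init-weights path_to_VGG_model_T --model VGG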

Citation

If you find our paper and code useful in your research, please consider giving a star and citing the paper.

@inproceedings{cheng2021dual,
  title={Dual Path Learning for Domain Adaptation of Semantic Segmentation},
  author={Cheng, Yiting and Wei, Fangyun and Bao, Jianmin and Chen, Dong and Wen, Fang and Zhang, Wenqiang},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
  pages={9082--9091},
  year={2021}
}

Acknowledgment

This code is heavily borrowed from BDL.
