CDTrans: Cross-domain Transformer for Unsupervised Domain Adaptation

Overview

CDTrans: Cross-domain Transformer for Unsupervised Domain Adaptation [arxiv]

This is the official repository for CDTrans: Cross-domain Transformer for Unsupervised Domain Adaptation

Introduction

Unsupervised domain adaptation (UDA) aims to transfer knowledge learned from a labeled source domain to a different unlabeled target domain. Most existing UDA methods focus on learning domain-invariant feature representations, at either the domain level or the category level, using convolutional neural network (CNN)-based frameworks. With the success of the Transformer in various tasks, we find that the cross-attention in the Transformer is robust to noisy input pairs and enables better feature alignment, so in this paper the Transformer is adopted for the challenging UDA task. Specifically, to generate accurate input pairs, we design a two-way center-aware labeling algorithm to produce pseudo labels for target samples. Along with the pseudo labels, a weight-sharing triple-branch transformer framework is proposed to apply self-attention and cross-attention for source/target feature learning and source-target domain alignment, respectively. This design explicitly enforces the framework to learn discriminative domain-specific and domain-invariant representations simultaneously. The proposed method is dubbed CDTrans (cross-domain transformer), and it provides one of the first attempts to solve UDA tasks with a pure transformer solution. Extensive experiments show that our proposed method achieves the best performance on all public UDA datasets, including Office-Home, Office-31, VisDA-2017, and DomainNet.
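
To make the center-aware labeling idea concrete, here is a minimal PyTorch sketch of its center-based step: class centers are computed from labeled source features, and each target sample receives the label of its most similar center. The full two-way algorithm in the paper additionally filters source-target pairs by matching in both directions, which this sketch omits; all names below are illustrative, not the repository's API.

```python
import torch
import torch.nn.functional as F

def center_aware_pseudo_labels(src_feats, src_labels, tgt_feats, num_classes):
    """Assign each target sample the label of its nearest source class center.

    src_feats: (Ns, D) source features; src_labels: (Ns,) ground-truth labels
    tgt_feats: (Nt, D) target features (assumes every class appears in src_labels)
    """
    src_feats = F.normalize(src_feats, dim=1)
    tgt_feats = F.normalize(tgt_feats, dim=1)

    # Class centers: per-class mean of the normalized source features.
    centers = torch.stack([src_feats[src_labels == c].mean(dim=0)
                           for c in range(num_classes)])
    centers = F.normalize(centers, dim=1)

    # Cosine similarity of every target sample to every class center.
    sim = tgt_feats @ centers.t()          # (Nt, num_classes)
    conf, pseudo = sim.max(dim=1)          # nearest center -> pseudo label
    return pseudo, conf
```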

[Figure: CDTrans framework overview]
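
The cross branch of the triple-branch framework boils down to standard cross-attention: queries come from one domain while keys and values come from the other. Below is a simplified single-head sketch for illustration only; the actual framework stacks full transformer blocks (multi-head attention, MLP, LayerNorm) and shares weights across the three branches, which this toy module does not reproduce.

```python
import torch.nn as nn

class CrossAttention(nn.Module):
    """Single-head cross-attention: queries from one domain, keys/values
    from the other, so each source token aggregates matching target tokens."""

    def __init__(self, dim):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)
        self.scale = dim ** -0.5

    def forward(self, x_src, x_tgt):
        # x_src, x_tgt: (B, N, D) token sequences of a paired source/target image
        q = self.q(x_src)                     # queries from source tokens
        k, v = self.k(x_tgt), self.v(x_tgt)   # keys/values from target tokens
        attn = (q @ k.transpose(-2, -1)) * self.scale
        return attn.softmax(dim=-1) @ v       # source tokens attend over target
```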

Results

Table 1 [UDA results on Office-31]

| Methods          | Avg. | A->D | A->W | D->A | D->W | W->A | W->D |
|------------------|------|------|------|------|------|------|------|
| Baseline(DeiT-S) | 86.7 | 87.6 | 86.9 | 74.9 | 97.7 | 73.5 | 99.6 |
| CDTrans(DeiT-S)  | 90.4 | 94.6 | 93.5 | 78.4 | 98.2 | 78.0 | 99.6 |
| Baseline(DeiT-B) | 88.8 | 90.8 | 90.4 | 76.8 | 98.2 | 76.4 | 100.0 |
| CDTrans(DeiT-B)  | 92.6 | 97.0 | 96.7 | 81.1 | 99.0 | 81.9 | 100.0 |

Table 2 [UDA results on Office-Home]

| Methods          | Avg. | Ar->Cl | Ar->Pr | Ar->Re | Cl->Ar | Cl->Pr | Cl->Re | Pr->Ar | Pr->Cl | Pr->Re | Re->Ar | Re->Cl | Re->Pr |
|------------------|------|--------|--------|--------|--------|--------|--------|--------|--------|--------|--------|--------|--------|
| Baseline(DeiT-S) | 69.8 | 55.6   | 73.0   | 79.4   | 70.6   | 72.9   | 76.3   | 67.5   | 51.0   | 81.0   | 74.5   | 53.2   | 82.7   |
| CDTrans(DeiT-S)  | 74.7 | 60.6   | 79.5   | 82.4   | 75.6   | 81.0   | 82.3   | 72.5   | 56.7   | 84.4   | 77.0   | 59.1   | 85.5   |
| Baseline(DeiT-B) | 74.8 | 61.8   | 79.5   | 84.3   | 75.4   | 78.8   | 81.2   | 72.8   | 55.7   | 84.4   | 78.3   | 59.3   | 86.0   |
| CDTrans(DeiT-B)  | 80.5 | 68.8   | 85.0   | 86.9   | 81.5   | 87.1   | 87.3   | 79.6   | 63.3   | 88.2   | 82.0   | 66.0   | 90.6   |

Table 3 [UDA results on VisDA-2017]

| Methods          | Per-class | plane | bcycl | bus   | car   | horse | knife | mcycl | person | plant | sktbrd | train | truck |
|------------------|-----------|-------|-------|-------|-------|-------|-------|-------|--------|-------|--------|-------|-------|
| Baseline(DeiT-B) | 67.3      | 98.1  | 48.1  | 84.6  | 65.2  | 76.3  | 59.4  | 94.5  | 11.8   | 89.5  | 52.2   | 94.5  | 34.1  |
| CDTrans(DeiT-B)  | 88.4      | 97.7  | 86.39 | 86.87 | 83.33 | 97.76 | 97.16 | 95.93 | 84.08  | 97.93 | 83.47  | 94.59 | 55.3  |

Table 4 [UDA results on DomainNet]

Rows denote the source domain; columns denote the target domain.

| Base-S | clp  | info | pnt  | qdr  | rel  | skt  | Avg. |
|--------|------|------|------|------|------|------|------|
| clp    | -    | 21.2 | 44.2 | 15.3 | 59.9 | 46.0 | 37.3 |
| info   | 36.8 | -    | 39.4 | 5.4  | 52.1 | 32.6 | 33.3 |
| pnt    | 47.1 | 21.7 | -    | 5.7  | 60.2 | 39.9 | 34.9 |
| qdr    | 25.0 | 3.3  | 10.4 | -    | 18.8 | 14.0 | 14.3 |
| rel    | 54.8 | 23.9 | 52.6 | 7.4  | -    | 40.1 | 35.8 |
| skt    | 55.6 | 18.6 | 42.7 | 14.9 | 55.7 | -    | 37.5 |
| Avg.   | 43.9 | 17.7 | 37.9 | 9.7  | 49.3 | 34.5 | 32.2 |

| CDTrans-S | clp   | info  | pnt   | qdr  | rel  | skt   | Avg. |
|-----------|-------|-------|-------|------|------|-------|------|
| clp       | -     | 25.3  | 52.5  | 23.2 | 68.3 | 53.2  | 44.5 |
| info      | 47.6  | -     | 48.3  | 9.9  | 62.8 | 41.1  | 41.9 |
| pnt       | 55.4  | 24.5  | -     | 11.7 | 67.4 | 48.0  | 41.4 |
| qdr       | 36.6  | 5.3   | 19.3  | -    | 33.8 | 22.7  | 23.5 |
| rel       | 61.5  | 28.1  | 56.8  | 12.8 | -    | 47.2  | 41.3 |
| skt       | 64.3  | 26.1  | 53.2  | 23.9 | 66.2 | -     | 46.7 |
| Avg.      | 53.08 | 21.86 | 46.02 | 16.3 | 59.7 | 42.44 | 39.9 |

| Base-B | clp  | info | pnt  | qdr  | rel  | skt  | Avg. |
|--------|------|------|------|------|------|------|------|
| clp    | -    | 24.2 | 48.9 | 15.5 | 63.9 | 50.7 | 40.6 |
| info   | 43.5 | -    | 44.9 | 6.5  | 58.8 | 37.6 | 38.3 |
| pnt    | 52.8 | 23.3 | -    | 6.6  | 64.6 | 44.5 | 38.4 |
| qdr    | 31.8 | 6.1  | 15.6 | -    | 23.4 | 18.9 | 19.2 |
| rel    | 58.9 | 26.3 | 56.7 | 9.1  | -    | 45.0 | 39.2 |
| skt    | 60.0 | 21.1 | 48.4 | 16.6 | 61.7 | -    | 41.6 |
| Avg.   | 49.4 | 20.2 | 42.9 | 10.9 | 54.5 | 39.3 | 36.2 |

| CDTrans-B | clp  | info | pnt  | qdr  | rel  | skt  | Avg. |
|-----------|------|------|------|------|------|------|------|
| clp       | -    | 29.4 | 57.2 | 26.0 | 72.6 | 58.1 | 48.7 |
| info      | 57.0 | -    | 54.4 | 12.8 | 69.5 | 48.4 | 48.4 |
| pnt       | 62.9 | 27.4 | -    | 15.8 | 72.1 | 53.9 | 46.4 |
| qdr       | 44.6 | 8.9  | 29.0 | -    | 42.6 | 28.5 | 30.7 |
| rel       | 66.2 | 31.0 | 61.5 | 16.2 | -    | 52.9 | 45.6 |
| skt       | 69.0 | 29.6 | 59.0 | 27.2 | 72.5 | -    | 51.5 |
| Avg.      | 59.9 | 25.3 | 52.2 | 19.6 | 65.9 | 48.4 | 45.2 |

Requirements

Installation

pip install -r requirements.txt
(Tested with Python 3.7 and V100 GPUs, with CUDA 10.1 and cudatoolkit 10.1.)

Prepare Datasets

Download the UDA datasets: Office-31, Office-Home, VisDA-2017, and DomainNet.

Then unzip them and rename them under the directory as follows. (Note that each dataset folder needs to contain the txt file that lists the path and label of each image, which is already provided in data/the_dataset of this project; an illustrative entry format is shown after the directory tree below.)

data
├── OfficeHomeDataset
│   │── class_name
│   │   └── images
│   └── *.txt
├── domainnet
│   │── class_name
│   │   └── images
│   └── *.txt
├── office31
│   │── class_name
│   │   └── images
│   └── *.txt
├── visda
│   │── train
│   │   │── class_name
│   │   │   └── images
│   │   └── *.txt 
│   └── validation
│       │── class_name
│       │   └── images
│       └── *.txt 
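
Each *.txt list file pairs a relative image path with an integer class label, one image per line. The files shipped under data/ in this project define the exact format; a hypothetical entry (illustrative paths, not taken from the repo) looks like:

```
Art/Alarm_Clock/00001.jpg 0
Art/Alarm_Clock/00002.jpg 0
Art/Backpack/00001.jpg 1
```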

Prepare DeiT-trained Models

For a fair comparison with respect to pre-training data, we initialize our ViT-based model with DeiT parameters. Download the ImageNet-pretrained transformer models DeiT-Small and DeiT-Base and move them to the ./data/pretrainModel directory.
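
If you prefer to script the download, the DeiT checkpoints are also published on torch.hub by the official facebookresearch/deit repository. A sketch of this alternative route follows; the saved filenames are assumptions, so match them to whatever the training scripts actually load.

```python
import torch

# Fetch ImageNet-pretrained DeiT backbones from the official DeiT hub repo.
deit_small = torch.hub.load('facebookresearch/deit:main',
                            'deit_small_patch16_224', pretrained=True)
deit_base = torch.hub.load('facebookresearch/deit:main',
                           'deit_base_patch16_224', pretrained=True)

# Save the weights where this project expects its pretrained models.
# NOTE: the filenames below are placeholders; check the scripts for the
# names they expect.
torch.save(deit_small.state_dict(), './data/pretrainModel/deit_small_patch16_224.pth')
torch.save(deit_base.state_dict(), './data/pretrainModel/deit_base_patch16_224.pth')
```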

Training

We utilize 1 GPU for pre-training and 2 GPUs for UDA, each with 16 GB of memory.

Scripts.

Command input paradigm

bash scripts/[pretrain/uda]/[office31/officehome/visda/domainnet]/run_*.sh [deit_base/deit_small]

For example

DeiT-Base scripts

# Office-31     Source: Amazon   ->  Target: Dslr, Webcam
bash scripts/pretrain/office31/run_office_amazon.sh deit_base
bash scripts/uda/office31/run_office_amazon.sh deit_base

# Office-Home   Source: Art      ->  Target: Clipart, Product, Real_World
bash scripts/pretrain/officehome/run_officehome_Ar.sh deit_base
bash scripts/uda/officehome/run_officehome_Ar.sh deit_base

# VisDA-2017    Source: train    ->  Target: validation
bash scripts/pretrain/visda/run_visda.sh deit_base
bash scripts/uda/visda/run_visda.sh deit_base

# DomainNet     Source: Clipart  ->  Target: painting, quickdraw, real, sketch, infograph
bash scripts/pretrain/domainnet/run_domainnet_clp.sh deit_base
bash scripts/uda/domainnet/run_domainnet_clp.sh deit_base

DeiT-Small scripts: replace deit_base with deit_small to reproduce the DeiT-Small results. An example of training on Office-31 is as follows:

# Office-31     Source: Amazon   ->  Target: Dslr, Webcam
bash scripts/pretrain/office31/run_office_amazon.sh deit_small
bash scripts/uda/office31/run_office_amazon.sh deit_small

Evaluation

# For example VisDA-2017
python test.py --config_file 'configs/uda.yml' MODEL.DEVICE_ID "('0')" TEST.WEIGHT "('../logs/uda/vit_base/visda/transformer_best_model.pth')" DATASETS.NAMES 'VisDA' DATASETS.NAMES2 'VisDA' OUTPUT_DIR '../logs/uda/vit_base/visda/' DATASETS.ROOT_TRAIN_DIR './data/visda/train/train_image_list.txt' DATASETS.ROOT_TRAIN_DIR2 './data/visda/train/train_image_list.txt' DATASETS.ROOT_TEST_DIR './data/visda/validation/valid_image_list.txt'  

Acknowledgement

Codebase from TransReID
