Dynamic Divide-and-Conquer Adversarial Training for Robust Semantic Segmentation (ICCV 2021)

Overview

This is a PyTorch project for the paper Dynamic Divide-and-Conquer Adversarial Training for Robust Semantic Segmentation by Xiaogang Xu, Hengshuang Zhao, and Jiaya Jia, presented at ICCV 2021.

paper link, arxiv

Introduction

Adversarial training is promising for improving the robustness of deep neural networks against adversarial perturbations, especially on classification tasks. In contrast, its effect on semantic segmentation has only begun to be explored. We make an initial attempt to explore defense strategies for semantic segmentation by formulating a general adversarial training procedure that performs decently on both adversarial and clean samples. We propose a dynamic divide-and-conquer adversarial training (DDC-AT) strategy to enhance the defense effect: additional branches are added to the target model during training, and pixels with diverse properties with respect to the adversarial perturbation are handled by different branches. Our dynamic division mechanism assigns pixels to these branches automatically. Note that all additional branches can be abandoned during inference, so they leave no extra parameters or computation cost. Extensive experiments with various segmentation models on the PASCAL VOC 2012 and Cityscapes datasets show that DDC-AT yields satisfying performance under both white- and black-box attacks.
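For intuition only, here is a minimal conceptual sketch of the branching idea in PyTorch. It is not the authors' implementation; it simply illustrates that extra branches (an auxiliary head and a per-pixel routing head) exist during training and are dropped at inference, so the deployed model only runs the main branch.

import torch.nn as nn

class BranchedSegModel(nn.Module):
    # Conceptual sketch only: a shared backbone with a main branch (kept at
    # inference) plus training-only branches used to handle different groups
    # of pixels under adversarial perturbation.
    def __init__(self, backbone, feat_dim, num_classes):
        super().__init__()
        self.backbone = backbone                                  # shared feature extractor
        self.main_branch = nn.Conv2d(feat_dim, num_classes, 1)    # kept at inference
        self.aux_branch = nn.Conv2d(feat_dim, num_classes, 1)     # training only
        self.mask_branch = nn.Conv2d(feat_dim, 2, 1)              # training only: per-pixel routing

    def forward(self, x):
        feat = self.backbone(x)
        if self.training:
            # During training, each pixel receives predictions from both branches
            # plus a routing mask; the training objective decides which branch
            # handles which pixel (the divide-and-conquer part).
            return self.main_branch(feat), self.aux_branch(feat), self.mask_branch(feat)
        # At inference the extra branches are abandoned, so the deployment path
        # incurs no additional parameter or computation cost.
        return self.main_branch(feat)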

Project Setup

For multiprocessing training we use apex; the code is tested with PyTorch 1.0.1.

We advise installing Python 3 and PyTorch with Anaconda:

conda create --name py36 python=3.6
source activate py36

Clone the repository and install the remaining requirements:

cd $HOME
git clone --recursive git@github.com:dvlab-research/Robust_Semantic_Segmentation.git
cd Robust_Semantic_Segmentation
pip install -r requirements.txt

Our experiments were run with CUDA 10.2 on TITAN V GPUs. You should also install apex for training.
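For reference, the following is a minimal sketch of how apex is commonly wired in for multi-GPU training (synchronized batch normalization, optional mixed precision, and distributed data parallelism); the exact calls in this repository's training scripts may differ.

import torch
import torch.distributed as dist
from apex import amp
from apex.parallel import DistributedDataParallel, convert_syncbn_model

def wrap_for_distributed_training(model, optimizer, local_rank):
    # Typical apex setup: one process per GPU, NCCL backend.
    torch.cuda.set_device(local_rank)
    dist.init_process_group(backend="nccl", init_method="env://")
    model = convert_syncbn_model(model).cuda()                           # sync BN statistics across GPUs
    model, optimizer = amp.initialize(model, optimizer, opt_level="O0")  # "O0" = pure FP32, "O1" = mixed precision
    model = DistributedDataParallel(model)                               # gradient all-reduce across processes
    return model, optimizer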

Requirements

  • Hardware: 4-8 GPUs (preferably with >= 11 GB of GPU memory each)

Train

  • Download the related datasets and modify the relevant paths specified in the "config" folder.
  • Download the ImageNet pre-trained models and put them under the "initmodel" folder for weight initialization.

Cityscapes

  • Train the baseline model with no defense on Cityscapes with PSPNet
    sh tool_train/cityscapes/psp_train.sh
    
  • Train the baseline model with no defense on Cityscapes with DeepLabv3
    sh tool_train/cityscapes/aspp_train.sh
    
  • Train the model with SAT on Cityscapes with PSPNet
    sh tool_train/cityscapes/psp_train_sat.sh
    
  • Train the model with SAT on Cityscapes with DeepLabv3
    sh tool_train/cityscapes/aspp_train_sat.sh
    
  • Train the model with DDCAT on Cityscapes with PSPNet
    sh tool_train/cityscapes/psp_train_ddcat.sh
    
  • Train the model with DDCAT on Cityscapes with DeepLabv3
    sh tool_train/cityscapes/aspp_train_ddcat.sh
    

VOC2012

  • Train the baseline model with no defense on VOC2012 with PSPNet
    sh tool_train/voc2012/psp_train.sh
    
  • Train the baseline model with no defense on VOC2012 with DeepLabv3
    sh tool_train/voc2012/aspp_train.sh
    
  • Train the model with SAT on VOC2012 with PSPNet
    sh tool_train/voc2012/psp_train_sat.sh
    
  • Train the model with SAT on VOC2012 with DeepLabv3
    sh tool_train/voc2012/aspp_train_sat.sh
    
  • Train the model with DDCAT on VOC2012 with PSPNet
    sh tool_train/voc2012/psp_train_ddcat.sh
    
  • Train the model with DDCAT on VOC2012 with DeepLabv3
    sh tool_train/voc2012/aspp_train_ddcat.sh
    

You can visualize the training loss (logged with tensorboardX) by running

tensorboard --logdir=exp/path_to_log
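For illustration, the scalars shown by tensorboard are written by the training code through the tensorboardX API, roughly as in the sketch below (the training scripts already take care of this).

from tensorboardX import SummaryWriter

writer = SummaryWriter("exp/path_to_log")        # same directory passed to --logdir
for step, loss in enumerate([0.9, 0.7, 0.5]):    # dummy loss values for illustration
    writer.add_scalar("train/loss", loss, step)
writer.close()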

Test

We provide scripts for evaluation that report the mIoU on both clean and adversarial samples (the adversarial samples are generated with an attack using n=2 steps, epsilon=0.03 x 255, and alpha=0.01 x 255).
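For reference, a minimal sketch of such an n-step attack with those settings is given below, assuming pixel values in [0, 255], per-pixel logits from the model, and 255 as the ignore label (an assumption); the repository's own attack implementation may differ in detail.

import torch
import torch.nn.functional as F

def pgd_attack(model, image, label, n=2, epsilon=0.03 * 255, alpha=0.01 * 255):
    # Illustrative n-step PGD-style attack matching the settings above.
    adv = image.clone()
    for _ in range(n):
        adv = adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(adv), label, ignore_index=255)   # 255 assumed to be the void label
        grad = torch.autograd.grad(loss, adv)[0]
        adv = adv.detach() + alpha * grad.sign()                      # ascend the segmentation loss
        adv = image + torch.clamp(adv - image, -epsilon, epsilon)     # project back into the epsilon ball
        adv = torch.clamp(adv, 0, 255)                                # keep a valid pixel range
    return adv.detach()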

Cityscapes

  • Evaluate the PSPNet trained with no defense on Cityscapes
    sh tool_test/cityscapes/psp_test.sh
    
  • Evaluate the PSPNet trained with SAT on Cityscapes
    sh tool_test/cityscapes/psp_test_sat.sh
    
  • Evaluate the PSPNet trained with DDCAT on Cityscapes
    sh tool_test/cityscapes/psp_test_ddcat.sh
    
  • Evaluate the DeepLabv3 trained with no defense on Cityscapes
    sh tool_test/cityscapes/aspp_test.sh
    
  • Evaluate the DeepLabv3 trained with SAT on Cityscapes
    sh tool_test/cityscapes/aspp_test_sat.sh
    
  • Evaluate the DeepLabv3 trained with DDCAT on Cityscapes
    sh tool_test/cityscapes/aspp_test_ddcat.sh
    

VOC2012

  • Evaluate the PSPNet trained with no defense on VOC2012
    sh tool_test/voc2012/psp_test.sh
    
  • Evaluate the PSPNet trained with SAT on VOC2012
    sh tool_test/voc2012/psp_test_sat.sh
    
  • Evaluate the PSPNet trained with DDCAT on VOC2012
    sh tool_test/voc2012/psp_test_ddcat.sh
    
  • Evaluate the DeepLabv3 trained with no defense on VOC2012
    sh tool_test/voc2012/aspp_test.sh
    
  • Evaluate the DeepLabv3 trained with SAT on VOC2012
    sh tool_test/voc2012/aspp_test_sat.sh
    
  • Evaluate the DeepLabv3 trained with DDCAT on VOC2012
    sh tool_test/voc2012/aspp_test_ddcat.sh
    

Pretrained Models

You can download the pretrained models from https://drive.google.com/file/d/120xLY_pGZlm3tqaLxTLVp99e06muBjJC/view?usp=sharing

Cityscapes with PSPNet

The model trained with no defense: pretrain/cityscapes/pspnet/no_defense
The model trained with SAT: pretrain/cityscapes/pspnet/sat
The model trained with DDCAT: pretrain/cityscapes/pspnet/ddcat

Cityscapes with DeepLabv3

The model trained with no defense: pretrain/cityscapes/deeplabv3/no_defense
The model trained with SAT: pretrain/cityscapes/deeplabv3/sat
The model trained with DDCAT: pretrain/cityscapes/deeplabv3/ddcat

VOC2012 with PSPNet

The model trained with no defense: pretrain/voc2012/pspnet/no_defense
The model trained with SAT: pretrain/voc2012/pspnet/sat
The model trained with DDCAT: pretrain/voc2012/pspnet/ddcat

VOC2012 with DeepLabv3

The model trained with no defense: pretrain/voc2012/deeplabv3/no_defense
The model trained with SAT: pretrain/voc2012/deeplabv3/sat
The model trained with DDCAT: pretrain/voc2012/deeplabv3/ddcat
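For reference, here is a minimal sketch of loading one of the checkpoints above in plain PyTorch; the file name and the "state_dict" key are assumptions, and the provided test scripts already handle loading for you.

import torch

ckpt_path = "pretrain/cityscapes/pspnet/ddcat/checkpoint.pth"   # hypothetical file name
checkpoint = torch.load(ckpt_path, map_location="cpu")
state_dict = checkpoint.get("state_dict", checkpoint)           # assumes a standard checkpoint layout
# model = ...  # build the same PSPNet/DeepLabv3 architecture used for training
# model.load_state_dict(state_dict, strict=False)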

Citation Information

If you find the project useful, please cite:

@inproceedings{xu2021ddcat,
  title={Dynamic Divide-and-Conquer Adversarial Training for Robust Semantic Segmentation},
  author={Xiaogang Xu and Hengshuang Zhao and Jiaya Jia},
  booktitle={ICCV},
  year={2021}
}

Acknowledgments

This source code is inspired by semseg.

Contributions

If you have any questions, comments, or bug reports, feel free to e-mail the author Xiaogang Xu ([email protected]).
