Universal Adversarial Examples in Remote Sensing: Methodology and Benchmark

Yonghao Xu and Pedram Ghamisi


This research has been conducted at the Institute of Advanced Research in Artificial Intelligence (IARAI).

This is the official PyTorch implementation of the black-box adversarial attack methods for remote sensing data in our paper Universal adversarial examples in remote sensing: Methodology and benchmark.

Table of contents

  1. Dataset
  2. Supported methods and models
  3. Preparation
  4. Adversarial attacks on scene classification
  5. Adversarial attacks on semantic segmentation
  6. Performance evaluation on the UAE-RS dataset
  7. Paper
  8. Acknowledgement
  9. License

Dataset

We collect the generated universal adversarial examples into a dataset named UAE-RS, the first dataset to provide black-box adversarial samples in the remote sensing field.

📡 Download links: Google Drive | Baidu NetDisk (Code: 8g1r)

To build UAE-RS, we use the Mixcut-Attack method to attack ResNet18 with 1050 test samples from the UCM dataset and 5000 test samples from the AID dataset for scene classification, and use the Mixup-Attack method to attack FCN-8s with 5 test images from the Vaihingen dataset (image IDs: 11, 15, 28, 30, 34) and 5 test images from the Zurich Summer dataset (image IDs: 16, 17, 18, 19, 20) for semantic segmentation.

Example images in the UCM dataset and the corresponding adversarial examples in the UAE-RS dataset.

Example images in the AID dataset and the corresponding adversarial examples in the UAE-RS dataset.

Qualitative results of the black-box adversarial attacks from FCN-8s → SegNet on the Vaihingen dataset.

(a) The original clean test images in the Vaihingen dataset. (b) The corresponding adversarial examples in the UAE-RS dataset. (c) Segmentation results of SegNet on the clean images. (d) Segmentation results of SegNet on the adversarial images. (e) Ground-truth annotations.
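
If you simply want to benchmark a trained classifier on the UAE-RS scene-classification images, the evaluation reduces to a standard accuracy loop over the adversarial folder. Below is a minimal sketch that assumes UAE-RS/UCM holds one sub-folder per class (mirroring the clean UCM layout shown under Preparation), an assumed 224×224 RGB preprocessing, and a hypothetical checkpoint file ucm_resnet18.pth; the repo's test_cls_uae_rs.py script is the authoritative entry point.

import torch
from torch.utils.data import DataLoader
from torchvision import datasets, transforms, models

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),   # assumed input size and preprocessing
    transforms.ToTensor(),
])

# Assumption: UAE-RS/UCM is organized by class, like the clean UCM set.
adv_set = datasets.ImageFolder("<THE-ROOT-PATH-OF-DATA>/UAE-RS/UCM", preprocess)
loader = DataLoader(adv_set, batch_size=64, shuffle=False, num_workers=4)

model = models.resnet18(num_classes=21)                  # UCM has 21 scene classes
model.load_state_dict(torch.load("ucm_resnet18.pth"))    # hypothetical checkpoint
model = model.to(device).eval()

correct = total = 0
with torch.no_grad():
    for images, labels in loader:
        preds = model(images.to(device)).argmax(dim=1).cpu()
        correct += (preds == labels).sum().item()
        total += labels.numel()
print("OA on the adversarial set: {:.2f}%".format(100.0 * correct / total))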

Supported methods and models

This repo provides implementations of black-box adversarial attacks on remote sensing data for both scene classification and semantic segmentation. Supported attack functions include the Mixup-Attack and Mixcut-Attack methods proposed in the paper, along with baselines such as FGSM (selected via the --attack_func argument). The supported victim models cover the classification networks (AlexNet, VGG, Inception-v3, ResNet, ResNeXt, DenseNet, and RegNetX) and segmentation networks (FCN, DeepLab, SegNet, ICNet, ContextNet, SQNet, PSPNet, U-Net, LinkNet, and FRRNet) evaluated in the benchmark tables below.

Preparation

  • Package requirements: The scripts in this repo are tested with torch==1.10 and torchvision==0.11 using two NVIDIA Tesla V100 GPUs.
  • Remote sensing datasets used in this repo: UCM, AID, Vaihingen, and Zurich Summer (see the Dataset section above for how they are used).
  • Data folder structure
    • The data folder is structured as follows:
├── <THE-ROOT-PATH-OF-DATA>/
│   ├── UCMerced_LandUse/
│   │   ├── Images/
│   │   │   ├── agricultural/
│   │   │   ├── airplane/
│   │   │   ├── ...
│   ├── AID/
│   │   ├── Airport/
│   │   ├── BareLand/
│   │   ├── ...
│   ├── Vaihingen/
│   │   ├── img/
│   │   ├── gt/
│   │   ├── ...
│   ├── Zurich/
│   │   ├── img/
│   │   ├── gt/
│   │   ├── ...
│   ├── UAE-RS/
│   │   ├── UCM/
│   │   ├── AID/
│   │   ├── Vaihingen/
│   │   ├── Zurich/
  • Pretraining the models for scene classification
CUDA_VISIBLE_DEVICES=0,1 python pretrain_cls.py --network 'alexnet' --dataID 1 --root_dir <THE-ROOT-PATH-OF-DATA>
CUDA_VISIBLE_DEVICES=0,1 python pretrain_cls.py --network 'resnet18' --dataID 1 --root_dir <THE-ROOT-PATH-OF-DATA>
CUDA_VISIBLE_DEVICES=0,1 python pretrain_cls.py --network 'inception' --dataID 1 --root_dir <THE-ROOT-PATH-OF-DATA>
...
  • Pretraining the models for semantic segmentation
cd ./segmentation
CUDA_VISIBLE_DEVICES=0 python pretrain_seg.py --model 'fcn8s' --dataID 1 --root_dir <THE-ROOT-PATH-OF-DATA>
CUDA_VISIBLE_DEVICES=0 python pretrain_seg.py --model 'deeplabv2' --dataID 1 --root_dir <THE-ROOT-PATH-OF-DATA>
CUDA_VISIBLE_DEVICES=0 python pretrain_seg.py --model 'segnet' --dataID 1 --root_dir <THE-ROOT-PATH-OF-DATA>
...

Please replace <THE-ROOT-PATH-OF-DATA> with the local path where you store the remote sensing datasets.
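
For orientation, the classification pretraining step amounts to fine-tuning an ImageNet-pretrained backbone on the scene-classification data. The sketch below is an illustrative, stripped-down version with assumed hyperparameters (learning rate, epoch count, and the train split are placeholders), not a substitute for pretrain_cls.py:

import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms, models

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
preprocess = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])

# Assumption: a training split has been carved out of the full dataset beforehand.
train_set = datasets.ImageFolder("<THE-ROOT-PATH-OF-DATA>/UCMerced_LandUse/Images", preprocess)
loader = DataLoader(train_set, batch_size=32, shuffle=True, num_workers=4)

model = models.resnet18(pretrained=True)            # torchvision 0.11 API
model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))
model = model.to(device).train()

optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

for epoch in range(10):                             # assumed epoch count
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images.to(device)), labels.to(device))
        loss.backward()
        optimizer.step()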

Adversarial attacks on scene classification

  • Generate adversarial examples:
CUDA_VISIBLE_DEVICES=0 python attack_cls.py --surrogate_model 'resnet18' \
                                            --attack_func 'fgsm' \
                                            --dataID 1 \
                                            --root_dir <THE-ROOT-PATH-OF-DATA>
  • Performance evaluation on the adversarial test set:
CUDA_VISIBLE_DEVICES=0 python test_cls.py --surrogate_model 'resnet18' \
                                          --target_model 'inception' \
                                          --attack_func 'fgsm' \
                                          --dataID 1 \
                                          --root_dir <THE-ROOT-PATH-OF-DATA>

You can change the --surrogate_model, --attack_func, and --target_model arguments to evaluate performance under different attack scenarios.
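
As context for the --attack_func options, 'fgsm' denotes the fast gradient sign method, which takes a single step along the sign of the input gradient on the surrogate model; the resulting examples are then transferred to the target model. Below is a minimal sketch with an assumed L-infinity budget; the Mixup-Attack and Mixcut-Attack methods follow the paper, not this baseline:

import torch
import torch.nn.functional as F

def fgsm(model, images, labels, epsilon=8.0 / 255.0):
    # One-step FGSM on the surrogate model; images assumed to lie in [0, 1].
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    # Move along the sign of the input gradient to increase the loss.
    adv = images + epsilon * images.grad.sign()
    return adv.clamp(0.0, 1.0).detach()

# Usage: adv_images = fgsm(surrogate_model, images, labels)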

Adversarial attacks on semantic segmentation

  • Generate adversarial examples:
cd ./segmentation
CUDA_VISIBLE_DEVICES=0 python attack_seg.py --surrogate_model 'fcn8s' \
                                            --attack_func 'fgsm' \
                                            --dataID 1 \
                                            --root_dir <THE-ROOT-PATH-OF-DATA>
  • Performance evaluation on the adversarial test set:
CUDA_VISIBLE_DEVICES=0 python test_seg.py --surrogate_model 'fcn8s' \
                                          --target_model 'segnet' \
                                          --attack_func 'fgsm' \
                                          --dataID 1 \
                                          --root_dir <THE-ROOT-PATH-OF-DATA>

You can change the --surrogate_model, --attack_func, and --target_model arguments to evaluate performance under different attack scenarios.
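
The segmentation variant of the attack differs only in the loss: the gradient is taken through a per-pixel cross-entropy averaged over the label map. A minimal FGSM sketch under the same assumptions as the classification case:

import torch
import torch.nn.functional as F

def fgsm_seg(model, images, masks, epsilon=8.0 / 255.0, ignore_index=255):
    # images: (N, 3, H, W) in [0, 1]; masks: (N, H, W) integer class maps.
    images = images.clone().detach().requires_grad_(True)
    logits = model(images)                               # (N, C, H, W) scores
    loss = F.cross_entropy(logits, masks, ignore_index=ignore_index)
    loss.backward()
    adv = images + epsilon * images.grad.sign()
    return adv.clamp(0.0, 1.0).detach()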

Performance evaluation on the UAE-RS dataset

  • Scene classification:
CUDA_VISIBLE_DEVICES=0 python test_cls_uae_rs.py --target_model 'inception' \
                                                 --dataID 1 \
                                                 --root_dir <THE-ROOT-PATH-OF-DATA>

Scene classification results of different deep neural networks on the clean and UAE-RS adversarial test sets (all values are overall accuracy, OA, in %; the OA gap is adversarial OA minus clean OA):

Model            UCM Clean  UCM Adv.  OA Gap   AID Clean  AID Adv.  OA Gap
AlexNet          90.28      30.86     -59.42   89.74      18.26     -71.48
VGG11            94.57      26.57     -68.00   91.22      12.62     -78.60
VGG16            93.04      19.52     -73.52   90.00      13.46     -76.54
VGG19            92.85      29.62     -63.23   88.30      15.44     -72.86
Inception-v3     96.28      24.86     -71.42   92.98      23.48     -69.50
ResNet18         95.90       2.95     -92.95   94.76       0.02     -94.74
ResNet50         96.76      25.52     -71.24   92.68       6.20     -86.48
ResNet101        95.80      28.10     -67.70   92.92       9.74     -83.18
ResNeXt50        97.33      26.76     -70.57   93.50      11.78     -81.72
ResNeXt101       97.33      33.52     -63.81   95.46      12.60     -82.86
DenseNet121      97.04      17.14     -79.90   95.50      10.16     -85.34
DenseNet169      97.42      25.90     -71.52   95.54       9.72     -85.82
DenseNet201      97.33      26.38     -70.95   96.30       9.60     -86.70
RegNetX-400MF    94.57      27.33     -67.24   94.38      19.18     -75.20
RegNetX-8GF      97.14      40.76     -56.38   96.22      19.24     -76.98
RegNetX-16GF     97.90      34.86     -63.04   95.84      13.34     -82.50
  • Semantic segmentation:
cd ./segmentation
CUDA_VISIBLE_DEVICES=0 python test_seg_uae_rs.py --target_model 'segnet' \
                                                 --dataID 1 \
                                                 --root_dir <THE-ROOT-PATH-OF-DATA>

Semantic segmentation results of different deep neural networks on the clean and UAE-RS adversarial test sets (all values are mean F1 scores, mF1, in %; the mF1 gap is adversarial mF1 minus clean mF1):

Model         Vaih. Clean  Vaih. Adv.  mF1 Gap   Zurich Clean  Zurich Adv.  mF1 Gap
FCN-32s       69.48        35.00       -34.48    66.26         32.31        -33.95
FCN-16s       69.70        27.02       -42.68    66.34         34.80        -31.54
FCN-8s        82.22        22.04       -60.18    79.90         40.52        -39.38
DeepLab-v2    77.04        34.12       -42.92    74.38         45.48        -28.90
DeepLab-v3+   84.36        14.56       -69.80    82.51         62.55        -19.96
SegNet        78.70        17.84       -60.86    75.59         35.58        -40.01
ICNet         80.89        41.00       -39.89    78.87         59.77        -19.10
ContextNet    81.17        47.80       -33.37    77.89         63.71        -14.18
SQNet         81.85        39.08       -42.77    76.32         55.29        -21.03
PSPNet        83.11        21.43       -61.68    77.55         65.39        -12.16
U-Net         83.61        16.09       -67.52    80.78         56.58        -24.20
LinkNet       82.30        24.36       -57.94    79.98         48.67        -31.31
FRRNetA       84.17        16.75       -67.42    80.50         58.20        -22.30
FRRNetB       84.27        28.03       -56.24    79.27         67.31        -11.96
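
The mF1 metric above is the mean of the per-class F1 scores. For reference, here is a naive NumPy sketch of that computation (no ignore-label handling, absent classes treated crudely; the repo's evaluation scripts are authoritative):

import numpy as np

def mean_f1(pred, gt, num_classes):
    # pred, gt: flat integer arrays of per-pixel class indices.
    cm = np.zeros((num_classes, num_classes), dtype=np.int64)
    np.add.at(cm, (gt, pred), 1)                         # row: truth, col: prediction
    tp = np.diag(cm).astype(np.float64)
    precision = tp / np.maximum(cm.sum(axis=0), 1)       # guard against empty columns
    recall = tp / np.maximum(cm.sum(axis=1), 1)          # guard against empty rows
    f1 = 2 * precision * recall / np.maximum(precision + recall, 1e-12)
    return 100.0 * f1.mean()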

Paper

Universal adversarial examples in remote sensing: Methodology and benchmark

Please cite the following paper if you use the data or the code:

@article{uaers,
  title={Universal adversarial examples in remote sensing: Methodology and benchmark}, 
  author={Xu, Yonghao and Ghamisi, Pedram},
  journal={arXiv preprint arXiv:2202.07054},
  year={2022},
}

Acknowledgement

The authors would like to thank Prof. Shawn Newsam for making the UCM dataset publicly available, Prof. Gui-Song Xia for providing the AID dataset, the International Society for Photogrammetry and Remote Sensing (ISPRS) and the German Society for Photogrammetry, Remote Sensing and Geoinformation (DGPF) for providing the Vaihingen dataset, and Dr. Michele Volpi for providing the Zurich Summer dataset.

This repo also builds on the following open-source projects:

  • Efficient-Segmentation-Networks
  • segmentation_models.pytorch
  • Adversarial-Attacks-PyTorch

License

This repo is distributed under the MIT License. The UAE-RS dataset can be used for academic purposes only.
