
Overview

Adversarial Training Against Location-Optimized Adversarial Patches

arXiv | Paper | Code | Video | Slides

Code for the paper:

Sukrut Rao, David Stutz, Bernt Schiele. (2020) Adversarial Training Against Location-Optimized Adversarial Patches. In: Bartoli A., Fusiello A. (eds) Computer Vision – ECCV 2020 Workshops. ECCV 2020. Lecture Notes in Computer Science, vol 12539. Springer, Cham. https://doi.org/10.1007/978-3-030-68238-5_32

Setup

Requirements

  • Python 3.7 or above
  • PyTorch
  • scipy
  • h5py
  • scikit-image
  • scikit-learn

Optional requirements

To use the script for converting data to the HDF5 format

  • torchvision
  • Pillow
  • pandas

To use TensorBoard logging

  • tensorboard

With the exception of Python and PyTorch, all requirements can be installed directly using pip:

$ pip install -r requirements.txt

Setting the paths

In common/paths.py, set the following variables (an example sketch follows the list):

  • BASE_DATA: base path for datasets.
  • BASE_EXPERIMENTS: base path for trained models and perturbations after attacks.
  • BASE_LOGS: base path for tensorboard logs (if used).
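
For example, after setting these variables, common/paths.py could contain entries along the following lines; the directories are placeholders and the actual file may define additional helpers:

# common/paths.py (sketch with placeholder directories)

# Base path under which each dataset gets its own subdirectory.
BASE_DATA = '/path/to/data'

# Base path for trained models and the perturbations produced by attacks.
BASE_EXPERIMENTS = '/path/to/experiments'

# Base path for TensorBoard logs (only needed with --use_tensorboard).
BASE_LOGS = '/path/to/logs'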

Data

Data needs to be provided in HDF5 format. To use a dataset, follow these steps (a sketch for inspecting the resulting files is given after the list):

  • In common/paths.py, set BASE_DATA to the base path where data will be stored.
  • For each dataset, create a directory named <dataset-name> in BASE_DATA.
  • Place the following files in this directory:
    • train_images.h5: Training images
    • train_labels.h5: Training labels
    • test_images.h5: Test images
    • test_labels.h5: Test labels
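
Once these files are in place, they can be inspected with h5py; the snippet below is a minimal sketch (BASE_DATA and the dataset name are placeholders, and the key names inside the files depend on how they were created):

import os
import h5py

# Placeholder values; BASE_DATA should match the value set in common/paths.py.
BASE_DATA = '/path/to/data'
dataset_name = 'cifar10'

# List the keys, shapes, and dtypes stored in each of the four files.
for filename in ('train_images.h5', 'train_labels.h5', 'test_images.h5', 'test_labels.h5'):
    path = os.path.join(BASE_DATA, dataset_name, filename)
    with h5py.File(path, 'r') as f:
        for key in f.keys():
            print(filename, key, f[key].shape, f[key].dtype)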

The script scripts/create_dataset_h5.py is provided to convert data from CSV files, each listing full paths to images and their corresponding labels, into the HDF5 files described above. To use it, first set BASE_DATA in common/paths.py. If the files containing the training and test image paths and labels are train.csv and test.csv respectively, run:

$ python scripts/create_dataset_h5.py --train_csv /path/to/train.csv --test_csv /path/to/test.csv --dataset dataset_name

where dataset_name is the name for the dataset.
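
The exact CSV layout is determined by scripts/create_dataset_h5.py; as a hedged illustration, assuming each row contains the full path to an image followed by its integer label, such files could be assembled as follows (the directory layout is purely hypothetical):

import csv
import glob
import os

# Hypothetical layout: images/train/<class_index>/*.png, one subdirectory per class.
rows = []
for class_index in range(10):
    for image_path in sorted(glob.glob('images/train/%d/*.png' % class_index)):
        rows.append((os.path.abspath(image_path), class_index))

# Write one "path,label" row per image; check the script for whether a header row is expected.
with open('train.csv', 'w', newline='') as f:
    csv.writer(f).writerows(rows)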

Training and evaluating a model

Training

To train a model, use:

$ python scripts/train.py [options]

A list of available options and their descriptions can be found by using:

$ python scripts/train.py -h

Evaluation

To evaluate a trained model, use:

$ python scripts/evaluate.py [options]

A list of available options and their descriptions can be found by using:

$ python scripts/evaluate.py -h

Using models and attacks from the paper

The following lists the arguments to use with the training and evaluation scripts to train the models and run the attacks described in the paper. The commands below assume that the dataset is named cifar10 and has 10 classes.

Models

Normal

$ python scripts/train.py --cuda --dataset cifar10 --n_classes 10 --mode normal --log_dir logs --snapshot_frequency 5 --models_dir models --use_tensorboard --use_flip

Occlusion

$ python scripts/train.py --cuda --dataset cifar10 --n_classes 10 --mask_dims 8 8 --mode adversarial --location random --exclude_box 11 11 10 10 --epsilon 0.1 --signed_grad --max_iterations 1 --log_dir logs --snapshot_frequency 5 --models_dir models --use_tensorboard --use_flip

AT-Fixed

$ python scripts/train.py --cuda --dataset cifar10 --n_classes 10 --mask_pos 3 3 --mask_dims 8 8 --mode adversarial --location fixed --exclude_box 11 11 10 10 --epsilon 0.1 --signed_grad --max_iterations 25 --log_dir logs --snapshot_frequency 5 --models_dir models --use_tensorboard --use_flip

AT-Rand

$ python scripts/train.py --cuda --dataset cifar10 --n_classes 10 --mask_dims 8 8 --mode adversarial --location random --exclude_box 11 11 10 10 --epsilon 0.1 --signed_grad --max_iterations 25 --log_dir logs --snapshot_frequency 5 --models_dir models --use_tensorboard --use_flip

AT-RandLO

$ python scripts/train.py --cuda --dataset cifar10 --n_classes 10 --mask_dims 8 8 --mode adversarial --location random --exclude_box 11 11 10 10 --epsilon 0.1 --signed_grad --max_iterations 25 --optimize_location --opt_type random --stride 2 --log_dir logs --snapshot_frequency 5 --models_dir models --use_tensorboard --use_flip

AT-FullLO

$ python scripts/train.py --cuda --dataset cifar10 --n_classes 10 --mask_dims 8 8 --mode adversarial --location random --exclude_box 11 11 10 10 --epsilon 0.1 --signed_grad --max_iterations 25 --optimize_location --opt_type full --stride 2 --log_dir logs --snapshot_frequency 5 --models_dir models --use_tensorboard --use_flip

Attacks

The arguments below use 100 attack iterations and 30 attempts; these can be changed by setting --iterations and --attempts respectively.

AP-Fixed

$ python scripts/evaluate.py --cuda --dataset cifar10 --n_classes 10 --mask_pos 3 3 --mask_dims 8 8 --mode adversarial --log_dir logs --models_dir models --saved_model_file model_complete_200 --attempts 30 --location fixed --epsilon 0.05 --iterations 100 --signed_grad --perturbations_file perturbations --use_tensorboard

AP-Rand

$ python scripts/evaluate.py --cuda --dataset cifar10 --n_classes 10 --mask_dims 8 8 --mode adversarial --log_dir logs --models_dir models --saved_model_file model_complete_200 --attempts 30 --location random --epsilon 0.05 --iterations 100 --exclude_box 11 11 10 10 --signed_grad --perturbations_file perturbations --use_tensorboard

AP-RandLO

$ python scripts/evaluate.py --cuda --dataset cifar10 --n_classes 10 --mask_dims 8 8 --mode adversarial --log_dir logs --models_dir models --saved_model_file model_complete_200 --attempts 30 --location random --epsilon 0.05 --iterations 100 --exclude_box 11 11 10 10 --optimize_location --opt_type random --stride 2 --signed_grad --perturbations_file perturbations --use_tensorboard

AP-FullLO

$ python scripts/evaluate.py --cuda --dataset cifar10 --n_classes 10 --mask_dims 8 8 --mode adversarial --log_dir logs --models_dir models --saved_model_file model_complete_200 --attempts 30 --location random --epsilon 0.05 --iterations 100 --exclude_box 11 11 10 10 --optimize_location --opt_type full --stride 2 --signed_grad --perturbations_file perturbations --use_tensorboard

Citation

Please cite the paper as follows:

@InProceedings{Rao2020Adversarial,
  author    = {Sukrut Rao and David Stutz and Bernt Schiele},
  title     = {Adversarial Training Against Location-Optimized Adversarial Patches},
  booktitle = {Computer Vision -- ECCV 2020 Workshops},
  year      = {2020},
  editor    = {Adrien Bartoli and Andrea Fusiello},
  publisher = {Springer International Publishing},
  address   = {Cham},
  pages     = {429--448},
  isbn      = {978-3-030-68238-5}
}

Acknowledgement

This repository uses code from davidstutz/confidence-calibrated-adversarial-training.

License

Copyright (c) 2020 Sukrut Rao, David Stutz, Max-Planck-Gesellschaft

Please read carefully the following terms and conditions and any accompanying documentation before you download and/or use this software and associated documentation files (the "Software").

The authors hereby grant you a non-exclusive, non-transferable, free of charge right to copy, modify, merge, publish, distribute, and sublicense the Software for the sole purpose of performing non-commercial scientific research, non-commercial education, or non-commercial artistic projects.

Any other use, in particular any use for commercial purposes, is prohibited. This includes, without limitation, incorporation in a commercial product, use in a commercial service, or production of other artefacts for commercial purposes.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

You understand and agree that the authors are under no obligation to provide either maintenance services, update services, notices of latent defects, or corrections of defects with regard to the Software. The authors nevertheless reserve the right to update, modify, or discontinue the Software at any time.

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. You agree to cite the corresponding papers (see above) in documents and papers that report on research using the Software.
