Geometry-aware Instance-reweighted Adversarial Training

Overview

This repository provides code for Geometry-aware Instance-reweighted Adversarial Training (https://openreview.net/forum?id=iAX0l6Cz8ub) (ICLR 2021 oral),
by Jingfeng Zhang, Jianing Zhu, Gang Niu, Bo Han, Masashi Sugiyama, and Mohan Kankanhalli.

What is the nature of adversarial training?

Adversarial training employs adversarial data for updating the models. For more details on the nature of adversarial training, refer to the FAT GitHub repository for preliminaries.
In this repo, you will learn:

FACT 1: Model Capacity is NOT enough for adversarial training.

We plot the standard training error (Natural) and the adversarial training error (PGD-10) over the training epochs of standard AT (Madry's) on the CIFAR-10 dataset. *Left panel*: AT on different sizes of networks (blue lines) and standard training (ST, the red line) on ResNet-18. *Right panel*: AT on ResNet-18 under different perturbation bounds eps_train.

Refer to the FAT GitHub repository for the standard AT by setting

python FAT.py --epsilon 0.031 --net 'resnet18' --tau 10 --dynamictau False

or use the code in this repo by setting

python GAIRAT.py --epsilon 0.031 --net 'resnet18' --Lambda 'inf'

to recover the standard AT (Madry's).

The over-parameterized models that fit natural data entirely in standard training (ST) are still far from enough for fitting adversarial data in adversarial training (AT). Compared with ST fitting the natural data points, AT smooths the neighborhoods of the natural data, so that adversarial data consume significantly more model capacity than natural data.

The volume of this neighborhood is exponentially large w.r.t. the input dimension d, even if the perturbation bound eps is small.

Under the computational budget of 100 epochs, the networks hardly reach zero error on the adversarial training data.

FACT 2: Data points are inherently different.

More attackable data are closer to the decision boundary.

More guarded data are farther away from the decision boundary.

More attackable data (lighter red and blue) are closer to the decision boundary; more guarded data (darker red and blue) are farther away from the decision boundary. *Left panel*: Two toy examples. *Right panel*: The model’s output distribution of two randomly selected classes from the CIFAR-10 dataset. The degree of robustness (denoted by the color gradient) of a data point is calculated based on the least number of iterations κ that PGD needs to find its misclassified adversarial variant.

Therefore, given the limited model capacity, we should treat data differently for updating the model in adversarial training.

IDEA: Geometrically speaking, a natural data point closer to/farther from the class boundary is less/more robust, and the corresponding adversarial data point should be assigned a larger/smaller weight for updating the model.
To implement this idea, we propose geometry-aware instance-reweighted adversarial training (GAIRAT), where the weights are based on how difficult it is to attack a natural data point.
"How difficult it is to attack a natural data point" is approximated by the least number of PGD steps that the PGD method requires to generate its misclassified adversarial variant; a sketch of this measurement follows below.

Illustration of GAIRAT. GAIRAT explicitly gives larger weights to the losses of adversarial data (larger red) whose natural counterparts are closer to the decision boundary (lighter blue), and smaller weights to the losses of adversarial data (smaller red) whose natural counterparts are farther away from the decision boundary (darker blue).

GAIRAT's Implementation

For updating the model, GAIRAT assigns an instance-dependent weight (reweight) to the loss of the adversarial data (see GAIR.py).
The instance-dependent weight depends on num_steps, the least number of PGD steps needed to generate the misclassified adversarial variant; a hedged sketch of such a weighting function is given below.
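For concreteness, here is a minimal sketch of a tanh-type weight assignment in the spirit of GAIR.py; here `kappa` is the least number of PGD steps (num_steps in the text above) and `K` the total PGD steps. The exact function and constants in the repo may differ.

```python
import torch

def gair_weights(kappa, K, lam):
    """Map kappa (least PGD steps, shape [batch]) to loss weights:
    small kappa (attackable, near the boundary)  -> larger weight,
    large kappa (guarded, far from the boundary) -> smaller weight.
    lam = float('inf') recovers standard AT (uniform weights)."""
    if lam == float("inf"):
        return torch.ones_like(kappa)
    w = (1.0 + torch.tanh(lam + 5.0 * (1.0 - 2.0 * kappa / K))) / 2.0
    return w * len(w) / w.sum()  # normalize so weights average to 1
```

The per-example adversarial losses are then multiplied by these weights before averaging, e.g. `loss = (gair_weights(kappa, K, lam) * F.cross_entropy(logits_adv, y, reduction='none')).mean()`.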

Preferred Prerequisites

  • Python (3.6)
  • PyTorch (1.2.0)
  • CUDA
  • numpy
  • foolbox

Running GAIRAT, GAIR-FAT on benchmark datasets (CIFAR-10 and SVHN)

Here are examples:

  • Train GAIRAT and GAIR-FAT with the WRN-32-10 model on CIFAR-10 and compare our results with AT and FAT:
CUDA_VISIBLE_DEVICES='0' python GAIRAT.py 
CUDA_VISIBLE_DEVICES='0' python GAIR_FAT.py 
  • How to recover the original FAT and AT using our code?
CUDA_VISIBLE_DEVICES='0' python GAIRAT.py --Lambda 'inf' --output_dir './AT_results' 
CUDA_VISIBLE_DEVICES='0' python GAIR_FAT.py --Lambda 'inf' --output_dir './FAT_results' 
  • Evaluations. After running, you can find ./GAIRAT_result/log_results.txt and ./GAIR_FAT_result/log_results.txt for checking Natural Acc. and PGD-20 test Acc.
    We also evaluate our models using PGD+. PGD+ is the same as PG_ours in the RST repo: PGD with 5 random starts, where each start has 40 steps with step size 0.01 (40 × 5 = 200 iterations per test example). Since PGD+ is computationally expensive, we only evaluate the best checkpoint bestpoint.pth.tar and the last checkpoint checkpoint.pth.tar in the folders GAIRAT_result and GAIR_FAT_result, respectively. A minimal sketch of this evaluation loop follows the commands below.
CUDA_VISIBLE_DEVICES='0' python eval_PGD_plus.py --model_path './GAIRAT_result/bestpoint.pth.tar' --output_suffix='./GAIRAT_PGD_plus'
CUDA_VISIBLE_DEVICES='0' python eval_PGD_plus.py --model_path './GAIR_FAT_result/bestpoint.pth.tar' --output_suffix='./GAIR_FAT_PGD_plus'
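A minimal sketch of the PGD+ loop described above, assuming a model with inputs in [0, 1]. This is an illustration, not the exact eval_PGD_plus.py code.

```python
import torch
import torch.nn.functional as F

def pgd_plus_robust(model, x, y, eps=0.031, step_size=0.01,
                    num_steps=40, num_restarts=5):
    """Return a bool mask: True where an example resists all restarts."""
    robust = torch.ones_like(y, dtype=torch.bool)
    for _ in range(num_restarts):
        # each restart begins from a fresh random point in the eps-ball
        x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0.0, 1.0)
        for _ in range(num_steps):
            x_adv.requires_grad_(True)
            loss = F.cross_entropy(model(x_adv), y)
            grad = torch.autograd.grad(loss, x_adv)[0]
            x_adv = x_adv.detach() + step_size * grad.sign()
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0.0, 1.0)
        robust &= model(x_adv).argmax(dim=1) == y  # survived this restart?
    return robust
```

Robust accuracy is then `robust.float().mean()` over the test set; an example counts as robust only if it survives all 5 restarts.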

White-box evaluations on WRN-32-10

| Defense (best checkpoint) | Natural Acc. | PGD-20 Acc. | PGD+ Acc. |
|---|---|---|---|
| AT (Madry) | 86.92% ± 0.24% | 51.96% ± 0.21% | 51.28% ± 0.23% |
| FAT | 89.16% ± 0.15% | 51.24% ± 0.14% | 46.14% ± 0.19% |
| GAIRAT | 85.75% ± 0.23% | 57.81% ± 0.54% | 55.61% ± 0.61% |
| GAIR-FAT | 88.59% ± 0.12% | 56.21% ± 0.52% | 53.50% ± 0.60% |

| Defense (last checkpoint) | Natural Acc. | PGD-20 Acc. | PGD+ Acc. |
|---|---|---|---|
| AT (Madry) | 86.62% ± 0.22% | 46.73% ± 0.08% | 46.08% ± 0.07% |
| FAT | 88.18% ± 0.19% | 46.79% ± 0.34% | 45.80% ± 0.16% |
| GAIRAT | 85.49% ± 0.25% | 53.76% ± 0.49% | 50.32% ± 0.48% |
| GAIR-FAT | 88.44% ± 0.10% | 50.64% ± 0.56% | 47.51% ± 0.51% |

For more details, refer to Table 1 and Appendix C.8 in the paper.

Benchmarking robustness with additional 500K unlabeled data on the CIFAR-10 dataset.

In this repo, we unleash the full power of our geometry-aware instance-reweighted methods by incorporating 500K unlabeled data (i.e., GAIR-RST). In terms of both evaluation metrics, i.e., generalization and robustness, we obtain the best WRN-28-10 model among all publicly available robust models.

  • How to create such a superior model from scratch?
  1. Download ti_500K_pseudo_labeled.pickle, containing our 500K pseudo-labeled TinyImages, from this link (auxiliary data provided by Carmon et al., 2019). Store ti_500K_pseudo_labeled.pickle in the folder ./data.
  2. You may need multiple GPUs for running this.
chmod +x ./GAIR_RST/run_training.sh
./GAIR_RST/run_training.sh
  3. We evaluate the robust model using natural test accuracy on natural test data and robust test accuracy under AutoAttack, an ensemble of parameter-free white-box and black-box attacks. A short usage sketch of the AutoAttack API follows below.
chmod +x ./GAIR_RST/autoattack/examples/run_eval.sh
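Besides the shell script, the autoattack package can also be called directly. A short usage sketch, assuming `model`, `x_test`, and `y_test` are already defined, with inputs in [0, 1]:

```python
import torch
from autoattack import AutoAttack

model.eval()  # trained WRN-28-10; expects inputs in [0, 1]
adversary = AutoAttack(model, norm='Linf', eps=0.031, version='standard')
# x_test, y_test: CIFAR-10 test images and labels as tensors
x_adv = adversary.run_standard_evaluation(x_test, y_test, bs=128)
```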

White-box evaluations on WRN-28-10

We evaluate the robustness on the CIFAR-10 dataset under AutoAttack (Croce & Hein, 2020).

Here we list the results using WRN-28-10 on the leaderboard together with our results. In particular, we use test eps = 0.031, which is the same as the training eps of our GAIR-RST.

CIFAR-10 - Linf

The robust accuracy is evaluated at eps = 8/255, except for those marked with * for which eps = 0.031, where eps is the maximal Linf-norm allowed for the adversarial perturbations. The eps used matches the setting in the original papers.
Note: ‡ indicates models which exploit additional data for training (e.g. unlabeled data, pre-training).

| # | Method / paper | Model | Architecture | Clean acc. | Reported rob. acc. | AA rob. acc. |
|---|---|---|---|---|---|---|
| 1 | (Gowal et al., 2020) | authors | WRN-28-10 | 89.48 | 62.76 | 62.80 |
| 2 | (Wu et al., 2020b) | available | WRN-28-10 | 88.25 | 60.04 | 60.04 |
| - | GAIR-RST (Ours)* | available | WRN-28-10 | 89.36 | 59.64 | 59.64 |
| 3 | (Carmon et al., 2019) | available | WRN-28-10 | 89.69 | 62.5 | 59.53 |
| 4 | (Sehwag et al., 2020) | available | WRN-28-10 | 88.98 | - | 57.14 |
| 5 | (Wang et al., 2020) | available | WRN-28-10 | 87.50 | 65.04 | 56.29 |
| 6 | (Hendrycks et al., 2019) | available | WRN-28-10 | 87.11 | 57.4 | 54.92 |
| 7 | (Moosavi-Dezfooli et al., 2019) | authors | WRN-28-10 | 83.11 | 41.4 | 38.50 |
| 8 | (Zhang & Wang, 2019) | available | WRN-28-10 | 89.98 | 60.6 | 36.64 |
| 9 | (Zhang & Xu, 2020) | available | WRN-28-10 | 90.25 | 68.7 | 36.45 |

The results show that our GAIR-RST method can produce a competitive model by utilizing additional unlabeled data.

Wanna download our superior model for other purposes? Sure!

We welcome various attack methods to attack our defense models. For the CIFAR-10 dataset, we normalize all images into [0, 1].

Download our pretrained model checkpoint-epoch200.pt into the folder ./GAIR_RST/GAIR_RST_results through this Google Drive link.

You can evaluate this pretrained model through ./GAIR_RST/autoattack/examples/run_eval.sh, or load it directly as sketched below.
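A hedged sketch of loading the released checkpoint for your own attacks; the model constructor, module path, and state-dict layout are assumptions (check the repo's evaluation scripts if they differ):

```python
import torch
from GAIR_RST.models.wideresnet import WideResNet  # assumed module path

model = WideResNet(depth=28, widen_factor=10, num_classes=10)
ckpt = torch.load('./GAIR_RST/GAIR_RST_results/checkpoint-epoch200.pt',
                  map_location='cpu')
# the file may store either a raw state dict or a {'state_dict': ...} dict
model.load_state_dict(ckpt.get('state_dict', ckpt))
model.eval()  # remember: inputs are expected to be normalized into [0, 1]
```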

Reference

@inproceedings{zhang2021_GAIRAT,
    title={Geometry-aware Instance-reweighted Adversarial Training},
    author={Jingfeng Zhang and Jianing Zhu and Gang Niu and Bo Han and Masashi Sugiyama and Mohan Kankanhalli},
    booktitle={International Conference on Learning Representations},
    year={2021},
    url={https://openreview.net/forum?id=iAX0l6Cz8ub}
}

Contact

Please contact [email protected] or [email protected] and [email protected] if you have any questions about the code.
