Source code for our paper "Do Not Trust Prediction Scores for Membership Inference Attacks"

Overview

[Figure: False-Positive Examples]

Abstract: Membership inference attacks (MIAs) aim to determine whether a specific sample was used to train a predictive model. Knowing this may indeed lead to a privacy breach. Arguably, most MIAs, however, make use of the model's prediction scores (the probability of each output given some input), following the intuition that the trained model tends to behave differently on its training data. We argue that this is a fallacy for many modern deep network architectures; e.g., ReLU-type neural networks almost always produce high prediction scores far away from the training data. Consequently, MIAs will fail miserably, since this behavior leads to high false-positive rates not only on known domains but also on out-of-distribution data and implicitly acts as a defense against MIAs. Specifically, using generative adversarial networks, we are able to produce a potentially infinite number of samples falsely classified as part of the training data. In other words, the threat of MIAs is overestimated and less information is leaked than previously assumed. Moreover, there is actually a trade-off between the overconfidence of classifiers and their susceptibility to MIAs: the more classifiers know when they do not know, making low-confidence predictions far away from the training data, the more they reveal the training data.
arXiv Preprint (PDF)
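The overconfidence of ReLU networks far away from the training data is easy to observe. The following minimal PyTorch sketch (a hypothetical, randomly initialized architecture, not one of our target models) scales an input away from the origin and prints the maximum softmax score, which saturates toward 1:

import torch
import torch.nn as nn

# A small ReLU classifier; the architecture is hypothetical and the
# weights are random, yet the saturation effect still appears.
torch.manual_seed(0)
model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))

x = torch.randn(1, 32)
with torch.no_grad():
    for scale in [1, 10, 100, 1000]:
        probs = torch.softmax(model(scale * x), dim=1)
        # Far from the origin, the logits grow linearly with the scale,
        # so the maximum softmax score approaches 1.
        print(f"scale={scale:5d}  max score={probs.max().item():.4f}")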

Membership Inference Attacks

[Figure 1: Membership Inference Attacks]

[Figure 2: Membership Inference Attack Preparation Process]

In a general MIA setting, as usually assumed in the literature, an adversary is given an input x following distribution D and a target model that was trained on a training set S_train consisting of samples drawn from D. The adversary then faces the problem of identifying whether a given x following D was part of the training set S_train. To predict the membership of x, the adversary creates an inference model h. In score-based MIAs, the input to h is the prediction score vector produced by the target model on sample x (see Figure 1 above). Since MIAs are binary classification problems, precision, recall, and the false-positive rate (FPR) are used as attack evaluation metrics.
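As a toy illustration of a score-based attack and its evaluation metrics, here is a minimal NumPy sketch; the threshold-based inference model and its threshold value are hypothetical simplifications, not the attacks evaluated in the paper:

import numpy as np

def score_based_mia(prediction_scores, threshold=0.9):
    """Toy inference model h: flag a sample as a training member if
    the maximum prediction score exceeds a (hypothetical) threshold."""
    return prediction_scores.max(axis=1) >= threshold

def attack_metrics(pred_members, true_members):
    """Return precision, recall, and false-positive rate of the attack."""
    tp = np.sum(pred_members & true_members)
    fp = np.sum(pred_members & ~true_members)
    fn = np.sum(~pred_members & true_members)
    tn = np.sum(~pred_members & ~true_members)
    return tp / (tp + fp), tp / (tp + fn), fp / (fp + tn)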

All MIAs exploit a difference in the behavior of the target model on seen and unseen data. Most attacks in the literature follow Shokri et al. and train so-called shadow models on a disjoint dataset S_shadow drawn from the same distribution D as S_train. The shadow models are used to mimic the behavior of the target model and to adjust the parameters of h, such as threshold values or model weights. Note that the membership status of inputs to the shadow models is known to the adversary (see Figure 2 above).
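A minimal sketch of the shadow-model idea, assuming scikit-learn and purely synthetic score vectors as stand-ins for real shadow-model outputs:

import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic stand-ins for prediction score vectors produced by a shadow
# model on its own training data (members) and on held-out data
# (non-members); shapes are (n_samples, n_classes).
rng = np.random.default_rng(42)
member_scores = rng.dirichlet(np.ones(10) * 0.1, size=500)  # peaked scores
nonmember_scores = rng.dirichlet(np.ones(10), size=500)     # flatter scores

# The adversary knows the membership status of all shadow inputs.
X = np.vstack([member_scores, nonmember_scores])
y = np.concatenate([np.ones(500), np.zeros(500)])

# Train the inference model h on the shadow data; it is then applied to
# prediction score vectors obtained from the target model.
h = LogisticRegression(max_iter=1000).fit(X, y)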

Setup and Run Experiments

Setup StyleGAN2-ADA

To recreate our Fake datasets containing synthetic CIFAR-10 and Stanford Dogs images, you need to clone the official StyleGAN2-ADA PyTorch repo into the folder datasets.

cd datasets
git clone https://github.com/NVlabs/stylegan2-ada-pytorch.git
rm -r --force stylegan2-ada-pytorch/.git/

You can also safely remove all folders in /datasets/stylegan2-ada-pytorch except dnnlib and torch_utils.

Setup Docker Container

To build the Docker container, run the following script:

./docker_build.sh -n confidence_mi

To start the Docker container, run the following command from the project's root:

docker run --rm --shm-size 16G --name my_confidence_mi --gpus '"device=0"' -v $(pwd):/workspace/confidences -it confidence_mi bash

Download Trained Models

We provide our trained models on which we performed our experiments. To automatically download and extract the files use the following command:

bash download_pretrained_models.sh

To manually download single models, please visit https://hessenbox.tu-darmstadt.de/getlink/fiBg5znMtAagRe58sCrrLtyg/pretrained_models.

Reproduce Results from the Paper

All our experiments based on CIFAR-10 and Stanford Dogs can be reproduced using the pre-trained models by running the following scripts:

python experiments/cifar10_experiments.py
python experiments/stanford_dogs_experiments.py

If you want to train the models from scratch, the following commands can be used:

python experiments/cifar10_experiments.py --train
python experiments/stanford_dogs_experiments.py --train --pretrained

We use command-line arguments to specify the hyperparameters of the training and attack processes. The default values correspond to the parameters used for training the target models, as stated in the paper; the same applies to the membership inference attacks. To train models with label smoothing, L2 regularization, or LLLA, run the experiments with --label_smoothing, --weight_decay, or --llla, respectively. We set the seed to 42 (the default value) for all experiments. For further command-line arguments and details, please refer to the Python files.
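As an illustration, the flags documented above might map onto an argparse parser roughly as follows; the types, defaults, and help strings are our assumptions, and the actual definitions live in the experiment scripts:

import argparse

# Hypothetical sketch of the experiment CLI, mirroring the documented flags.
parser = argparse.ArgumentParser()
parser.add_argument('--train', action='store_true', help='train models from scratch')
parser.add_argument('--pretrained', action='store_true', help='start from pretrained weights')
parser.add_argument('--label_smoothing', action='store_true', help='train with label smoothing')
parser.add_argument('--weight_decay', action='store_true', help='train with L2 regularization')
parser.add_argument('--llla', action='store_true', help='apply LLLA')
parser.add_argument('--seed', type=int, default=42, help='random seed')
args = parser.parse_args()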

Attack results will be stored in CSV files at /experiments/results/{MODEL_ARCH}_{DATASET_NAME}_{MODIFIERS}_attack_results.csv and state the precision, recall, FPR, and MMPS values for the various input datasets and membership inference attacks. Results for training the target and shadow models will be stored in the first column of /experiments/results/{MODEL_ARCH}_{DATASET_NAME}_{MODIFIERS}_performance_results.csv. They state the training and test accuracy, as well as the expected calibration error (ECE).
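For example, the attack results can be inspected with pandas; the concrete file name below is a hypothetical instance of the pattern above:

import pandas as pd

# Hypothetical instance of the attack results file name pattern.
results = pd.read_csv('experiments/results/resnet18_cifar10_attack_results.csv')
print(results.head())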

Datasets

All data is required to be located in /data/. To recreate the Fake datasets using StyleGAN2-ADA to generate CIFAR-10 and dog samples, use /datasets/fake_cifar10.py and /datasets/fake_dogs.py. For example, the Fake Dogs samples are located at /data/fake_afhq_dogs/Images after generation. If files are missing or corrupted (checked by MD5 checksum), the images will be regenerated to restore the identical datasets used in the paper. This process is called automatically when running one of the experiments. We use various datasets in our experiments; the following figure gives a short overview of their content and visual styles.

[Figure 3: Overview of the datasets used in our experiments]
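The MD5 check mentioned above can be sketched in a few lines; the file path in the commented usage is a placeholder, not the repo's actual logic:

import hashlib

def md5_of_file(path, chunk_size=8192):
    """Compute the MD5 checksum of a file by reading it in chunks."""
    md5 = hashlib.md5()
    with open(path, 'rb') as f:
        for chunk in iter(lambda: f.read(chunk_size), b''):
            md5.update(chunk)
    return md5.hexdigest()

# Placeholder usage: regenerate the dataset if the hash deviates from a
# stored reference value, e.g.:
# if md5_of_file('data/fake_afhq_dogs/Images/...') != expected_md5:
#     regenerate_dataset()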

Citation

If you build upon our work, please don't forget to cite us.

@misc{hintersdorf2021trust,
      title={Do Not Trust Prediction Scores for Membership Inference Attacks}, 
      author={Dominik Hintersdorf and Lukas Struppek and Kristian Kersting},
      year={2021},
      eprint={2111.09076},
      archivePrefix={arXiv},
      primaryClass={cs.LG}
}

Implementation Credits

Some of our implementations rely on other repos. We want to thank the authors for making their code publicly available. For license details refer to the corresponding files in our repo. For more details on the specific functionality, please visit the corresponding repos.

Owner
Machine Learning Group at TU Darmstadt