GradAttack is a Python library for easy evaluation of privacy risks of public gradients in Federated Learning

Overview

GradAttack is a Python library for easy evaluation of privacy risks of public gradients in Federated Learning, as well as of corresponding mitigation strategies. The current version focuses on the gradient inversion attack for image classification, which recovers private images from public gradients.
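
For intuition, the core idea behind a gradient inversion attack can be sketched in a few lines of PyTorch. The snippet below is a minimal, hypothetical illustration only, not GradAttack's implementation; the function and argument names are made up for this example. Starting from a random dummy image, it optimizes the image so that the gradients it induces on the model match the gradients observed from the client.

import torch
import torch.nn.functional as F

def invert_gradients(model, observed_grads, label, input_shape, steps=200, lr=0.1):
    # Start from random noise and optimize it to match the observed gradients.
    dummy = torch.randn(1, *input_shape, requires_grad=True)
    optimizer = torch.optim.Adam([dummy], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        loss = F.cross_entropy(model(dummy), label)
        grads = torch.autograd.grad(loss, model.parameters(), create_graph=True)
        # L2 distance between the dummy image's gradients and the observed ones
        grad_dist = sum(((g - o) ** 2).sum() for g, o in zip(grads, observed_grads))
        grad_dist.backward()
        optimizer.step()
    return dummy.detach()

GradAttack's GradientReconstructor (used in the Getting started section below) implements a far more capable version of this optimization.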

Motivation

Recent research shows that sending gradients instead of data in Federated Learning can leak private information (see this growing list of attack papers). These attacks demonstrate that an adversary eavesdropping on a client's communications (i.e., observing the global model weights and the client's update) can accurately reconstruct the client's private data using a class of techniques known as "gradient inversion attacks", which raises serious concerns about such privacy leakage.

To counter these attacks, researchers have proposed defense mechanisms (see this growing list of defense papers). We are developing this framework to evaluate different defense mechanisms against state-of-the-art attacks.

Why GradAttack?

There are lots of reasons to use GradAttack:

  • 😈   Evaluate the privacy risk of your Federated Learning pipeline by running the various attacks supported by GradAttack against it

  • 💊   Enhance the privacy of your Federated Learning pipeline by applying defenses supported by GradAttack in a plug-and-play fashion

  • 🔧   Research and develop new gradient attacks and defenses by reusing the simple and extensible APIs in GradAttack

Slack Channel

For help and real-time updates related to GradAttack, please join the GradAttack Slack!

Installation

You may install GradAttack directly from PyPI using pip:

pip install gradattack

You can also install from source to get the latest features:

git clone https://github.com/Princeton-SysML/GradAttack
cd GradAttack
pip install -e .

Getting started

To evaluate your model's privacy leakage against the gradient inversion attack, all you need to do is:

  1. Define your deep learning pipeline
datamodule = CIFAR10DataModule()
model = create_lightning_module(
    'ResNet18',
    training_loss_metric=loss,
    **hparams,
)
trainer = pl.Trainer(
    gpus=devices,
    check_val_every_n_epoch=1,
    logger=logger,
    max_epochs=args.n_epoch,
    callbacks=[early_stop_callback],
)
pipeline = TrainingPipeline(model, datamodule, trainer)
  2. (Optional) Apply defenses to the pipeline
defense_pack = DefensePack(args, logger)
defense_pack.apply_defense(pipeline)
  3. Run training with the pipeline (see the detailed example Python and bash scripts in examples)
pipeline.run()
pipeline.test()

You may use the TensorBoard logs to track your training and to compare results of different runs:

tensorboard --logdir PATH_TO_TRAIN_LOGS

Example of training logs

  4. Run the attack on the pipeline (see the detailed example Python and bash scripts in examples)
# Fetch a victim batch and define an attack instance
example_batch = pipeline.get_datamodule_batch()
batch_gradients, step_results = pipeline.model.get_batch_gradients(example_batch, 0)
batch_inputs_transform, batch_targets_transform = step_results["transformed_batch"]
attack_instance = GradientReconstructor(
    pipeline,
    ground_truth_inputs=batch_inputs_transform,
    ground_truth_gradients=batch_gradients,
    ground_truth_labels=batch_targets_transform,
)

# Launch the attack with a PyTorch Lightning trainer
attack_trainer = pl.Trainer(max_epochs=10000)
attack_trainer.fit(attack_instance)

You may use the TensorBoard logs to track your attack and to compare results of different runs:

tensorboard --logdir PATH_TO_ATTACK_LOGS

Example of attack logs

  5. Evaluate the attack results (see examples)
python examples/calc_metric.py --dir PATH_TO_ATTACK_RESULTS
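
The evaluation script compares the reconstructed images against the originals. As a rough, hypothetical illustration of one common reconstruction-quality metric (not necessarily what calc_metric.py computes), peak signal-to-noise ratio (PSNR) between an original and a reconstructed image can be calculated as follows:

import torch

def psnr(original: torch.Tensor, reconstruction: torch.Tensor, max_val: float = 1.0) -> float:
    # Peak signal-to-noise ratio in dB; higher values indicate a closer reconstruction.
    mse = torch.mean((original - reconstruction) ** 2)
    if mse == 0:
        return float("inf")
    return (10 * torch.log10(max_val ** 2 / mse)).item()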

Contributing to GradAttack

GradAttack is currently in an "alpha" stage; we are actively working to improve its capabilities and design.

Contributions are welcome! See the contributing guide for detailed instructions on how to contribute to our project.

Citing GradAttack

If you want to use GradAttack for your research (much appreciated!), you can cite it as follows:

@inproceedings{huang2021evaluating,
  title={Evaluating Gradient Inversion Attacks and Defenses in Federated Learning},
  author={Huang, Yangsibo and Gupta, Samyak and Song, Zhao and Li, Kai and Arora, Sanjeev},
  booktitle={NeurIPS},
  year={2021}
}

Acknowledgement

This project is supported in part by Ma Huateng Foundation, Schmidt Foundation, NSF, Simons Foundation, ONR and DARPA/SRC. Yangsibo Huang and Samyak Gupta are supported in part by the Princeton Graduate Fellowship. We would like to thank Quanzheng Li, Xiaoxiao Li, Hongxu Yin and Aoxiao Zhong for helpful discussions, and members of Kai Li’s and Sanjeev Arora’s research groups for comments on early versions of this library.
