Overview

private-transformers

This codebase facilitates fast experimentation with differentially private training of Hugging Face transformers.


What is this? Why an extra codebase?

  • This codebase provides a privacy engine that builds on Opacus, but integrates much more smoothly with Hugging Face's transformers library.
  • Additionally, we support the ghost clipping technique (see Section 4 of this preprint for how it works), which allows privately training large transformers with considerably reduced memory cost -- in many cases almost as light as non-private training -- at a modest run-time overhead.
  • With this codebase, we have fine-tuned very large pretrained models, yielding some of the best-performing differentially private NLP models to date. Some of these models match strong non-private baselines in performance. We see strong empirical evidence that highly performant DP NLP models can be built on modestly sized datasets.

Installation

Make sure you have python>=3.8; run the following command:

pip install git+https://github.com/lxuechen/private-transformers.git

To check that the package is installed properly, run the test suite (requires pytest and a GPU):

pytest -s tests

Usage

Basic usage

Privately training Hugging Face transformers with our codebase simply consists of 4 steps:

  1. Create your favourite transformer model and optimizer; attach this optimizer to a PrivacyEngine
  2. Compute a per-example loss (1-D tensor) for a mini-batch of data
  3. Pass the loss to optimizer.step or optimizer.virtual_step as a keyword argument
  4. Repeat from step 2

Below is a quick example:

import transformers, torch
from private_transformers import PrivacyEngine
import torch.nn.functional as F

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = transformers.GPT2LMHeadModel.from_pretrained('distilgpt2').to(device)
optimizer = torch.optim.Adam(params=model.parameters(), lr=1e-4)
privacy_engine = PrivacyEngine(
    model,
    batch_size=10,
    sample_size=50000,
    epochs=3,
    max_grad_norm=0.1,
    target_epsilon=3,
)
privacy_engine.attach(optimizer)

batch_size, seq_len = 10, 20
# Inputs are in batch-first format, i.e., the first dimension of tensors must be the batch dimension.
input_ids = torch.randint(size=[batch_size, seq_len], low=0, high=100, device=device)
# Calling `.train()` is very important; otherwise underlying forward and backward hooks don't run.
model.train()
outputs = model(input_ids=input_ids, return_dict=True)
labels = input_ids[:, 1:]
logits = outputs.logits[:, :-1, :].permute(0, 2, 1)
# `loss` is a 1-D tensor of shape (batch_size,).
loss = F.cross_entropy(logits, labels, reduction="none").mean(dim=1)
# This step is different from existing workflows: 
#   Don't call `loss.backward`; leave it to `optimizer.step` to handle backward.
optimizer.step(loss=loss)
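
The same recipe applies to other tasks, as long as the loss handed to the engine is per-example. Below is a minimal sequence-classification sketch; the roberta-base checkpoint and the random labels are illustrative assumptions, not part of the example above.

# Sketch: per-example loss for sequence classification.
# The model choice ('roberta-base') and random labels are illustrative.
clf = transformers.AutoModelForSequenceClassification.from_pretrained(
    'roberta-base', num_labels=2
).to(device)
clf.train()
clf_input_ids = torch.randint(size=[batch_size, seq_len], low=0, high=100, device=device)
clf_labels = torch.randint(size=[batch_size], low=0, high=2, device=device)
clf_logits = clf(input_ids=clf_input_ids, return_dict=True).logits    # (batch_size, num_labels)
clf_loss = F.cross_entropy(clf_logits, clf_labels, reduction="none")  # (batch_size,)
# After attaching a PrivacyEngine to this model's optimizer, pass the 1-D loss on:
# optimizer.step(loss=clf_loss)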

The biggest differences compared to Opacus are:

  • We require the per-example loss (a 1-D tensor) to be passed into optimizer.step (or optimizer.virtual_step; a sketch of gradient accumulation with virtual_step follows this list).
  • The per-example loss must be passed in as a keyword argument.
  • loss.backward() shouldn't be called on the user end; it's called internally in optimizer.step (or optimizer.virtual_step).
  • Inputs should be in batch-first format; there isn't a toggle to switch between formats in the engine.
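
Here is a sketch of gradient accumulation with optimizer.virtual_step, reusing the model, optimizer, and device from the example above. The accumulation count and micro-batch size are illustrative, and virtual_step is assumed to accumulate a micro-batch's contribution without performing the parameter update.

# Sketch: gradient accumulation with `optimizer.virtual_step` (illustrative numbers).
accumulation_steps = 5
micro_batch_size, seq_len = 2, 20

for i in range(accumulation_steps):
    input_ids = torch.randint(size=[micro_batch_size, seq_len], low=0, high=100, device=device)
    outputs = model(input_ids=input_ids, return_dict=True)
    labels = input_ids[:, 1:]
    logits = outputs.logits[:, :-1, :].permute(0, 2, 1)
    loss = F.cross_entropy(logits, labels, reduction="none").mean(dim=1)  # (micro_batch_size,)
    if i + 1 < accumulation_steps:
        optimizer.virtual_step(loss=loss)  # accumulate this micro-batch; no parameter update
    else:
        optimizer.step(loss=loss)          # last micro-batch: perform the private update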

Ghost clipping: memory saving differentially private learning

Turning on ghost clipping requires changing only one line. You should notice a drastic reduction in peak GPU memory usage once it's turned on, at the potential cost of slower training. This is especially useful when you're restricted to older GPUs with limited VRAM or when fitting very large models.

import transformers, torch
from private_transformers import PrivacyEngine

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = transformers.GPT2LMHeadModel.from_pretrained('distilgpt2').to(device)
optimizer = torch.optim.Adam(params=model.parameters(), lr=1e-4)
privacy_engine = PrivacyEngine(
    model,
    batch_size=10,
    sample_size=50000,
    epochs=3,
    max_grad_norm=0.1,
    target_epsilon=3,
    ghost_clipping=True,  # The only change you need to make!
)
privacy_engine.attach(optimizer)

We ran stringent numerical tests to ensure the double-backward implementation is correct. Check out files in the tests folder for more on this.

Examples

Code in the examples folder roughly reproduces our results for the table-to-text and classification tasks. There may be some minor discrepancies, since the hyperparameters there aren't exactly those used in the paper. Nevertheless, it should be sufficient to get things started. Detailed instructions are in the readme file of each subfolder.

Currently supported Hugging Face models

Not all models in the Hugging Face library are supported. The main additional work here is to

  1. support per-example gradients for bespoke modules (e.g., T5LayerNorm; see the sketch after this list), and
  2. ensure position_ids are repeated.
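
To make the first point concrete, here is a naive reference computation of per-example gradients for a tiny RMSNorm-style module (T5LayerNorm is essentially an RMS norm). This only illustrates the quantity involved; it is not how the privacy engine computes these gradients internally.

# Naive per-example gradients for a small RMSNorm-style module (illustrative only).
import torch

class SimpleRMSNorm(torch.nn.Module):
    def __init__(self, dim, eps=1e-6):
        super().__init__()
        self.weight = torch.nn.Parameter(torch.ones(dim))
        self.eps = eps

    def forward(self, x):
        variance = x.pow(2).mean(dim=-1, keepdim=True)
        return self.weight * x * torch.rsqrt(variance + self.eps)

norm = SimpleRMSNorm(dim=8)
x = torch.randn(4, 8)  # a batch of 4 examples

per_example_grads = []
for i in range(x.size(0)):            # one backward pass per example
    norm.zero_grad()
    norm(x[i:i + 1]).sum().backward()
    per_example_grads.append(norm.weight.grad.clone())
per_example_grads = torch.stack(per_example_grads)  # shape: (4, 8); one gradient per example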

We plan to support more models in the future if there's such a need. Feel free to open an issue if you want to try out specific models that aren't on the current list.

FAQ

I wrote some answers to potential questions here.

Acknowledgements

It would have been impossible to develop this codebase without cool past works and existing codebases. We roughly follow the PrivacyEngine design in Opacus==0.13.0. We directly use an off-the-shelf package for tightly tracking tradeoff functions while composing multiple private mechanisms.

Disclaimer

  • This codebase is not yet production-grade, e.g., cryptographically secure PRNGs are required for sampling noise -- our codebase currently does not use these strong PRNGs.
  • This codebase is born out of the need to experiment with various aspects of differentially private NLP in rapid succession. I've tried my best to write clean code, though parts of this codebase may be less tidy than I had hoped given the extremely tight timeline.

Citation

If you found this codebase useful in your research, please consider citing:

@misc{li2021large,
      title={Large Language Models Can Be Strong Differentially Private Learners}, 
      author={Xuechen Li and Florian Tramèr and Percy Liang and Tatsunori Hashimoto},
      year={2021},
      eprint={2110.05679},
      archivePrefix={arXiv},
      primaryClass={cs.LG}
}
Comments
  • Support BART model

    Hi, I'm trying to apply your code to the BART model, but I got an error like the one below:

    ValueError: Ghost clipping does not support parameter sharing. Parameter sharing may be due to default parameter sharing between lm_head and embedding.Please use a model without parameter sharing for ghost clipping.
    

    Does it not support BART model yet??

    opened by SeolhwaLee 7
  • Set another seed won't change the result

    Hi Xuechen,

    I have another issue, with the training seed. I would like to vary the random seed so that I can get some statistical results. I tried many different ways, but even after commenting out the set_seed() function, the eval acc is the same down to the last digit. May I ask how to vary the random seed? I'm doing experiments on examples/classification.

    Thanks!

    opened by JunyiZhu-AI 6
  • Customize loss function / adding regularizer under privacy setting?

    Hi, thanks again for the great work and the codebase!

    I have a question -- how would I customize the loss function in this codebase? I've been trying to do that, e.g., adding a per-example L1 regularization term to vector_loss in the trainer, but I didn't manage to get it running after several attempts.

    There's a related discussion/PR in Opacus codebase https://github.com/pytorch/opacus/issues/249.

    However, there are a few tricky things I can see:

      • In private-transformers, backward() behavior is not managed on the user end.
      • Also, a 1-D vector_loss is required for the private gradient update (optimizer.step or optimizer.virtual_step).

    My intuition is that I can add to vector_loss (the per-example loss) at this line, before the loss gets passed to the privacy engine.
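
    Something like the sketch below, where the logits-based L1 term and l1_coef are purely illustrative, and vector_loss stands for the usual 1-D per-example loss in the trainer:

    # Sketch: fold a per-example regularizer into the 1-D loss before it reaches the engine.
    # The penalty is computed per example from the logits, so it flows through the model's
    # forward pass; `l1_coef` is an illustrative coefficient.
    l1_coef = 1e-4
    l1_per_example = logits.reshape(logits.size(0), -1).abs().mean(dim=1)  # (batch_size,)
    vector_loss = vector_loss + l1_coef * l1_per_example
    optimizer.step(loss=vector_loss)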

    However, I am afraid privacy is also a concern. I am aware that private-transformers overrides compute_loss() in the HF trainer to exclude regularization terms that might mess up the privacy accounting.

    Sorry my question is not super detailed, but I hope this makes sense; I'd really appreciate any comments.

    Thank you!

    opened by shi-kejian 4
  • Using dataloader with fixed batch size

    Hi, thanks for providing this codebase!

    So for a while I've been using Opacus to experiment with DP-SGD and RoBERTa, but I wanted to check out your PrivacyEngine, mainly because of the training speed and memory optimizations. With Opacus, I always trained with their UniformWithReplacementSampler for accurate RDP accounting, and as far as I can tell, you're training with fixed-size batches in your examples. I'm wondering if there's a reason the UniformWithReplacementSampler isn't needed in your codebase anymore, and whether the uniform sampler is compatible with your modified PrivacyEngine, given that the optimizer needs to be able to deal with variations in batch size.
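
    For context, the variable-size batches that sampler produces can be sketched in plain PyTorch, independently of either codebase's samplers; the numbers below are illustrative:

    # Sketch: Poisson-style subsampling, which yields variable-size batches.
    import torch

    sample_size, expected_batch_size = 50000, 10
    q = expected_batch_size / sample_size           # sampling rate

    mask = torch.rand(sample_size) < q              # include each example independently w.p. q
    batch_indices = mask.nonzero(as_tuple=True)[0]  # this round's batch; its size varies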

    opened by xplip 4
  • How to set max_compositions

    Hi Chen, do you know how to set the max_compositions/steps param? The default at https://github.com/lxuechen/private-transformers/blob/684e27fcd9978539fbabc357c7ea506c0353c771/private_transformers/privacy_utils/privacy_engine.py#L148 is 0, but that raises an error:

    private-transformers/private_transformers/privacy_utils/accounting/gdp_accounting.py:33: RuntimeWarning: invalid value encountered in double_scalars
        return norm.cdf(-eps / mu + mu / 2) - np.exp(eps) * norm.cdf(-eps / mu - mu / 2)
    rv_accountant/accountant.py:55: RuntimeWarning: divide by zero encountered in double_scalars
        mesh_size = 2*eps_error / np.sqrt(2*max_compositions*np.log(2/eta0))
    
    opened by hlzhang109 3
  • Setting small target epsilon like 0.1 fails training

    Hi @lxuechen, I tried to set epsilon to 0.1 on SST-2, but it results in a large noise_multiplier (20853.95) and the training fails, with accuracy near 0.5. However, setting epsilon to 1 works well. Any idea about this?

    opened by LinkToPast1900 3
  • Private gradient seemingly has been overwritten by non-private gradient.

    Hi Xuechen, thanks for providing this codebase!

    I tried tweaking the code in examples/classification, but the network does not perform as expected. In particular, I tried zeroing out all gradients with these lines in the _step() and _ghost_step() functions in privacy_engine.py:

    param.grad /= self.batch_size
    param.grad.mul_(0)
    

    After adding this multiplication, the network still trains as normal. And with the same seed, it even trains to the same eval acc. Could you reproduce this result in your private repo? If it behaves like this, then I suppose the private gradient has been overwritten by the non-private one.

    opened by JunyiZhu-AI 3
  • Questions about sigma search and epsilon from composed tradeoff functions

    (Making a new issue for this because you probably weren't notified of my comment in the closed original issue)

    Sorry for having to reopen this, but I do have two more (perhaps related) questions after all and would really appreciate if you could help clarify them.

    1. When using the automated sigma search (based on a specified target epsilon and N epochs), the final epsilon computed by the PrivacyEngine after training for N epochs is always much higher than the target epsilon, so it seems that the sigma chosen by get_sigma_from_rdp is too high. This also happens when I run the sentence classification and the table2text examples in the repo. E.g., instead of my target epsilon 8, I will end up with something like epsilon 10-11. How did you get your final epsilon to match the target epsilon in the experiments in your paper?

    2. How do you compute the converted epsilon from composed tradeoff functions when, say, training SST-2 with the default hyperparameters from the examples? Do you reduce num_compositions=1000 in _eps_from_glw to something way lower than 1000, because the script only runs for ~400 optimization steps and would otherwise always throw the "Numerical composition of tradeoff functions failed! Double check privacy parameters." error?

    Originally posted by @xplip in https://github.com/lxuechen/private-transformers/issues/7#issuecomment-987020758

    opened by xplip 3
  • What is the best way to handle large models?

    Hi all, I was trying to fine-tune GPT-J 6B, but I encounter out-of-memory errors on a single GPU. For non-private training I managed to solve them by using DeepSpeed, but it seems that I cannot use that with Opacus or with this codebase. Do you know how I could solve this problem? Thank you in advance :)

    opened by Pier297 2
  • No such file or directory

    I want to fine-tune on QQP, and I get this error:

    File "/private-transformers-main/examples/classification/run_classification.py", line 545, in main train_dataset = FewShotDataset(data_args, tokenizer=tokenizer, mode="train", use_demo=use_demo) File "/private-transformers-main/examples/classification/src/dataset.py", line 377, in init with FileLock(lock_path): File "/home/anaconda3/envs/fuck/lib/python3.8/site-packages/filelock/_api.py", line 214, in enter self.acquire() File "/home/anaconda3/envs/fuck/lib/python3.8/site-packages/filelock/_api.py", line 170, in acquire self._acquire() File "/home/anaconda3/envs/fuck/lib/python3.8/site-packages/filelock/_unix.py", line 35, in _acquire fd = os.open(self._lock_file, open_mode) FileNotFoundError: [Errno 2] No such file or directory: 'classification/data/original/QQP/cached_train_RobertaTokenizer_256_qqp_few_shot.lock'

    How can I get this file? Thanks.

    opened by trestad 2
  • [DistilBERT] RuntimeError: stack expects each tensor to be equal size

    Hi, @lxuechen, thanks for your repo.

    I ran into the following problem when I tried to fine-tune DistilBERT. Both BERT and RoBERTa work well. Any idea about this? Thanks!

    Traceback (most recent call last):
      ...
      File "/opt/conda/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
        return func(*args, **kwargs)
      File "/opt/conda/lib/python3.8/site-packages/private_transformers/privacy_utils/privacy_engine.py", line 360, in step
        self._ghost_step(loss=kwargs.pop("loss"))
      File "/opt/conda/lib/python3.8/site-packages/private_transformers/privacy_utils/privacy_engine.py", line 261, in _ghost_step
        self._ghost_helper(loss)
      File "/opt/conda/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
        return func(*args, **kwargs)
      File "/opt/conda/lib/python3.8/site-packages/private_transformers/privacy_utils/privacy_engine.py", line 334, in _ghost_helper
        coef_sample = self.get_coef_sample()
      File "/opt/conda/lib/python3.8/site-packages/private_transformers/privacy_utils/privacy_engine.py", line 348, in get_coef_sample
        norm_sample = self.get_norm_sample()
      File "/opt/conda/lib/python3.8/site-packages/private_transformers/privacy_utils/privacy_engine.py", line 343, in get_norm_sample
        norm_sample = torch.stack([param.norm_sample for name, param in self.named_params], dim=0).norm(2, dim=0)
    RuntimeError: stack expects each tensor to be equal size, but got [50] at entry 0 and [1] at entry 1

    (50 is my batch size)

    opened by LinkToPast1900 1
  • v0.3.0 fixes

    Non-structural fixes.

    • [ ] Convert to make_private style to avoid bad syntax highlighting during static analysis
    • [ ] Improve the cleanliness of examples
    • [ ] Refactor test file and use functorch to simplify ground truth gradients' logic
    • [ ] Don't compute per-sample gradients for weights which don't require gradients
    • [ ] Use the new smart resizer for tokenizer and model
    • [ ] Refactor decoding to use new left padding based construction
    opened by lxuechen 0
Releases(v0.2.3)
Owner
Xuechen Li
learning to learn
A framework for cleaning Chinese dialog data

A framework for cleaning Chinese dialog data

Yida 136 Dec 20, 2022
KakaoBrain KoGPT (Korean Generative Pre-trained Transformer)

KoGPT KoGPT (Korean Generative Pre-trained Transformer) https://github.com/kakaobrain/kogpt https://huggingface.co/kakaobrain/kogpt Model Descriptions

Kakao Brain 797 Dec 26, 2022
Reformer, the efficient Transformer, in Pytorch

Reformer, the Efficient Transformer, in Pytorch This is a Pytorch implementation of Reformer https://openreview.net/pdf?id=rkgNKkHtvB It includes LSH

Phil Wang 1.8k Dec 30, 2022
:id: A python library for accurate and scalable fuzzy matching, record deduplication and entity-resolution.

Dedupe Python Library dedupe is a python library that uses machine learning to perform fuzzy matching, deduplication and entity resolution quickly on

Dedupe.io 3.6k Jan 02, 2023
Python Implementation of ``Modeling the Influence of Verb Aspect on the Activation of Typical Event Locations with BERT'' (Findings of ACL: ACL 2021)

BERT-for-Surprisal Python Implementation of ``Modeling the Influence of Verb Aspect on the Activation of Typical Event Locations with BERT'' (Findings

7 Dec 05, 2022
A Python module made to simplify the usage of Text To Speech and Speech Recognition.

Nav Module The solution for voice related stuff in Python Nav is a Python module which simplifies voice related stuff in Python. Just import the Modul

Snm Logic 1 Dec 20, 2021
A retro text-to-speech bot for Discord

hawking A retro text-to-speech bot for Discord, designed to work with all of the stuff you might've seen in Moonbase Alpha, using the existing command

Nick Schorr 23 Dec 25, 2022
Fixes mojibake and other glitches in Unicode text, after the fact.

ftfy: fixes text for you print(fix_encoding("(ง'⌣')ง")) (ง'⌣')ง Full documentation: https://ftfy.readthedocs.org Testimonials “My life is li

Luminoso Technologies, Inc. 3.4k Dec 29, 2022
Japanese synonym library

chikkarpy chikkarpy is the Python version of chikkar. chikkarpy is a library developed to add synonym expansion to SudachiPy's output using the Sudachi synonym dictionary.

Works Applications 48 Dec 14, 2022
Open source annotation tool for machine learning practitioners.

doccano doccano is an open source text annotation tool for humans. It provides annotation features for text classification, sequence labeling and sequ

7.1k Jan 01, 2023
An Explainable Leaderboard for NLP

ExplainaBoard: An Explainable Leaderboard for NLP Introduction | Website | Download | Backend | Paper | Video | Bib Introduction ExplainaBoard is an i

NeuLab 319 Dec 20, 2022
Train 🤗transformers with DeepSpeed: ZeRO-2, ZeRO-3

Fork from https://github.com/huggingface/transformers/tree/86d5fb0b360e68de46d40265e7c707fe68c8015b/examples/pytorch/language-modeling at 2021.05.17.

Junbum Lee 12 Oct 26, 2022
Natural Language Processing Best Practices & Examples

NLP Best Practices In recent years, natural language processing (NLP) has seen quick growth in quality and usability, and this has helped to drive bus

Microsoft 6.1k Dec 31, 2022
An extensive UI tool built using new data scraped from BBC News

BBC-News-Analyzer An extensive UI tool built using new data scraped from BBC New

Antoreep Jana 1 Dec 31, 2021
A website which allows you to play with the GPT-2 transformer

transformers A website which allows you to play with the GPT-2 model Built with ❤️ by raphtlw Table of contents Model Setup About Contributors Model T

raphtlw 2 Jan 27, 2022
Chinese Named Entity Recognization (BiLSTM with PyTorch)

BiLSTM-CRF for Named Entity Recognition (PyTorch version) A PyTorch implementation of the Bi-LSTM-CRF model for Chinese Named Entity Recognition. Implemented using PyTorch

5 Jun 01, 2022
Milaan Parmar / Милан пармар / _米兰 帕尔马 170 Dec 13, 2022
Calibre recipe to convert latest issue of Analyse & Kritik into an ebook

Calibre Recipe for "Analyse & Kritik" This is a "recipe" for converting the current issue of the newspaper Analyse & Kritik into an ebook. It

Henning 3 Jan 04, 2022
Natural language Understanding Toolkit

Natural language Understanding Toolkit TOC Requirements Installation Documentation CLSCL NER References Requirements To install nut you need: Python 2

Peter Prettenhofer 119 Oct 08, 2022
Tool to add main subject to items on Wikidata using a WMFs CirrusSearch for named entity recognition or a manually supplied list of QIDs

ItemSubjector Tool made to add main subject statements to items based on the title using a home-brewed CirrusSearch-based Named Entity Recognition alg

Dennis Priskorn 9 Nov 17, 2022