EMNLP 2021: Single-dataset Experts for Multi-dataset Question Answering

Overview

MADE (Multi-Adapter Dataset Experts)

This repository contains the implementation of MADE (Multi-Adapter Dataset Experts), which is described in the paper Single-dataset Experts for Multi-dataset Question Answering.

MADE combines a shared Transformer with a collection of adapters that are specialized to different reading comprehension datasets. See our paper for details.

Quick links

- Requirements
- Download the data
- Download the trained models
- Run the model (train, evaluate, transfer)
- Bugs or questions?
- Citation

Requirements

The code uses Python 3.8, PyTorch, and the adapter-transformers library. Install the requirements with:

pip install -r requirements.txt
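
For example, one way to set this up in an isolated environment (a sketch only; the environment name is arbitrary and a python3.8 interpreter is assumed to be on your PATH):

# Hypothetical environment name; any venv or conda environment works.
python3.8 -m venv made-env
source made-env/bin/activate
pip install --upgrade pip
pip install -r requirements.txt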

Download the data

You can download the datasets used in the paper from the repository for the MRQA 2019 shared task.

The datasets should be stored in directories ending with train or dev. For example, download the in-domain training datasets to a directory called data/train/ and download the in-domain development datasets to data/dev/.
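
As a sketch, the in-domain SQuAD split could be fetched with commands like the following. The URL pattern is an assumption based on the MRQA shared task release; check the MRQA repository for the authoritative links and file names.

mkdir -p data/train data/dev
# Assumed MRQA release URLs; verify against the MRQA 2019 shared task repository.
wget https://s3.us-east-2.amazonaws.com/mrqa/release/v2/train/SQuAD.jsonl.gz -O data/train/SQuAD.jsonl.gz
wget https://s3.us-east-2.amazonaws.com/mrqa/release/v2/dev/SQuAD.jsonl.gz -O data/dev/SQuAD.jsonl.gz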

For zero-shot and few-shot experiments, download the MRQA out-of-domain development datasets to a separate directory and split them into training and development splits using scripts/split_datasets.py. For example, download the datasets to data/transfer/ and run

ls data/transfer/* -1 | xargs -l python scripts/split_datasets.py

Use the default random seed (13) to replicate the splits used in the paper.
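
Equivalently, a single file can be split by passing its path directly, which is what the xargs invocation above does for each dataset (the file name below is an assumption about how the downloaded MRQA dump is named):

# Hypothetical file name; substitute the actual downloaded file.
python scripts/split_datasets.py data/transfer/BioASQ.jsonl.gz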

Download the trained models

The trained models are stored on the Hugging Face model hub at https://huggingface.co/princeton-nlp/MADE. All of the models are based on RoBERTa-base. They include:

- made_transformer: the shared MADE Transformer
- made_tuned_adapters: the MADE adapters after separate tuning on each of the six MRQA training datasets (SQuAD, HotpotQA, TriviaQA, SearchQA, NewsQA, NaturalQuestions)
- single_dataset_adapters: adapters trained on the individual datasets
- multi_dataset_ft: a single Transformer fine-tuned on all of the training datasets

To download just the MADE Transformer and adapters:

mkdir made_transformer
wget https://huggingface.co/princeton-nlp/MADE/resolve/main/made_transformer/model.pt -O made_transformer/model.pt

mkdir made_tuned_adapters
for d in SQuAD HotpotQA TriviaQA SearchQA NewsQA NaturalQuestions; do
  mkdir "made_tuned_adapters/${d}"
  wget "https://huggingface.co/princeton-nlp/MADE/resolve/main/made_tuned_adapters/${d}/model.pt" -O "made_tuned_adapters/${d}/model.pt"
done;
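
The evaluation commands below also reference multi_dataset_ft and single_dataset_adapters. Assuming the hub repository uses the same path layout as above (an assumption; verify on the model hub), those checkpoints can be fetched the same way:

# Assumed paths: these mirror the model.pt layout used above.
mkdir multi_dataset_ft
wget https://huggingface.co/princeton-nlp/MADE/resolve/main/multi_dataset_ft/model.pt -O multi_dataset_ft/model.pt

for d in SQuAD HotpotQA TriviaQA SearchQA NewsQA NaturalQuestions; do
  mkdir -p "single_dataset_adapters/${d}"
  wget "https://huggingface.co/princeton-nlp/MADE/resolve/main/single_dataset_adapters/${d}/model.pt" -O "single_dataset_adapters/${d}/model.pt"
done;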

You can download all of the models at once by cloning the repository (first installing Git LFS):

git lfs install
git clone https://huggingface.co/princeton-nlp/MADE
mv MADE models
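
The commands in the rest of this README load each checkpoint from a model.pt file inside a per-model subdirectory. After cloning, the layout should look roughly like this (inferred from the flags used below; the hub repository may contain additional files):

models/
  made_transformer/model.pt
  made_tuned_adapters/<dataset>/model.pt
  single_dataset_adapters/<dataset>/model.pt
  multi_dataset_ft/model.pt

where <dataset> is one of SQuAD, HotpotQA, TriviaQA, SearchQA, NewsQA, and NaturalQuestions.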

Run the model

The scripts in scripts/train/ and scripts/transfer/ provide examples of how to run the code. For more details, see the descriptions of the command line flags in run.py.

Train

You can use the scripts in scripts/train/ to train models on the MRQA datasets. For example, to train MADE:

./scripts/train/made_training.sh

And to tune the MADE adapters separately on individual datasets:

for d in SQuAD HotpotQA TriviaQA SearchQA NewsQA NaturalQuestions; do
  ./scripts/train/made_adapter_tuning.sh $d
done;

See run.py for details about the command line arguments.

Evaluate

A single fine-tuned model:

python run.py \
    --eval_on BioASQ DROP DuoRC RACE RelationExtraction TextbookQA \
    --load_from multi_dataset_ft \
    --output_dir output/zero_shot/multi_dataset_ft

An individual MADE adapter (e.g. SQuAD):

python run.py \
    --eval_on BioASQ DROP DuoRC RACE RelationExtraction TextbookQA \
    --load_from made_transformer \
    --load_adapters_from made_tuned_adapters \
    --adapter \
    --adapter_name SQuAD \
    --output_dir output/zero_shot/made_tuned_adapters/SQuAD

An individual single-dataset adapter (e.g. SQuAD):

python run.py \
    --eval_on BioASQ DROP DuoRC RACE RelationExtraction TextbookQA \
    --load_adapters_from single_dataset_adapters/ \
    --adapter \
    --adapter_name SQuAD \
    --output_dir output/zero_shot/single_dataset_adapters/SQuAD

An ensemble of MADE adapters (this runs a forward pass through every adapter in parallel):

python run.py \
    --eval_on BioASQ DROP DuoRC RACE RelationExtraction TextbookQA \
    --load_from made_transformer \
    --load_adapters_from made_tuned_adapters \
    --adapter_names SQuAD HotpotQA TriviaQA SearchQA NewsQA NaturalQuestions \
    --made \
    --parallel_adapters  \
    --output_dir output/zero_shot/made_ensemble

Averaging the parameters of the MADE adapters:

python run.py \
    --eval_on BioASQ DROP DuoRC RACE RelationExtraction TextbookQA \
    --load_from made_transformer \
    --load_adapters_from made_tuned_adapters \
    --adapter_names SQuAD HotpotQA TriviaQA SearchQA NewsQA NaturalQuestions \
    --adapter \
    --average_adapters  \
    --output_dir output/zero_shot/made_avg

Running UnifiedQA:

python run.py \
    --eval_on BioASQ DROP DuoRC RACE RelationExtraction TextbookQA \
    --seq2seq \
    --model_name_or_path allenai/unifiedqa-t5-base \
    --output_dir output/zero_shot/unifiedqa

Transfer

The scripts in scripts/transfer/ provide examples of how to run the few-shot transfer learning experiments described in the paper. For example, for each of three random seeds, the following command will:

1. Sample 64 training examples from BioASQ.
2. Calculate the zero-shot loss of each MADE adapter on those training examples.
3. Average the adapter parameters, weighted according to the zero-shot losses.
4. Hold out 32 of the training examples as validation data.
5. Train the adapter until performance stops improving on the 32 validation examples.
6. Evaluate the adapter on the full development set.

python run.py \
    --train_on BioASQ \
    --adapter_names SQuAD HotpotQA TriviaQA NewsQA SearchQA NaturalQuestions \
    --made \
    --parallel_made \
    --weighted_average_before_training \
    --adapter_learning_rate 1e-5 \
    --steps 200 \
    --patience 10 \
    --eval_before_training \
    --full_eval_after_training \
    --max_train_examples 64 \
    --few_shot \
    --criterion "loss" \
    --negative_examples \
    --save \
    --seeds 7 19 29 \
    --load_from "made_transformer" \
    --load_adapters_from "made_tuned_adapters" \
    --name "transfer/made_preaverage/BioASQ/64"

Bugs or questions?

If you have any questions related to the code or the paper, feel free to email Dan Friedman ([email protected]). If you encounter any problems when using the code, or want to report a bug, you can open an issue. Please describe the problem in as much detail as possible so that we can help you quickly and effectively!

Citation

@inproceedings{friedman2021single,
   title={Single-dataset Experts for Multi-dataset Question Answering},
   author={Friedman, Dan and Dodge, Ben and Chen, Danqi},
   booktitle={Empirical Methods in Natural Language Processing (EMNLP)},
   year={2021}
}