Code for ACL 2021: Generating Query Focused Summaries from Query-Free Resources

Overview


This repository releases the code for Generating Query Focused Summaries from Query-Free Resources.

Please cite the following paper [bib] if you use this code:

Xu, Yumo, and Mirella Lapata. "Generating Query Focused Summaries from Query-Free Resources." In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 6096–6109. 2021.

The availability of large-scale datasets has driven the development of neural models that create generic summaries from single or multiple documents. In this work we consider query focused summarization (QFS), a task for which training data in the form of queries, documents, and summaries is not readily available. We propose to decompose QFS into (1) query modeling (i.e., finding supportive evidence within a set of documents for a query) and (2) conditional language modeling (i.e., summary generation). We introduce MaRGE, a Masked ROUGE Regression framework for evidence estimation and ranking which relies on a unified representation for summaries and queries, so that summaries in generic data can be converted into proxy queries for learning a query model. Experiments across QFS benchmarks and query types show that our model achieves state-of-the-art performance despite learning from weak supervision.

Should you have any questions, please contact me at [email protected].

Preliminary setup

Project structure

marge
└───requirements.txt
└───README.md
└───log        # logging files
└───run        # scripts for MaRGE training
└───src        # source files
└───data       # generic data for training; qfs data for test/dev
└───graph      # graph components for query expansion
└───model      # MaRGE models for inference
└───rank       # ranking results
└───text       # summarization results
└───unilm_in   # input files to UniLM
└───unilm_out  # output files from UniLM

After cloning this project, use the following command to initialize the structure:

mkdir log data graph model rank text unilm_in unilm_out

Creating environment

cd ..
virtualenv -p python3.6 marge
cd marge
. bin/activate
pip install -r requirements.txt

You need to install apex:

cd ..
git clone https://www.github.com/nvidia/apex
cd apex
python3 setup.py install

Also, you need to set up ROUGE evaluation if you have not yet done so. Please refer to this repository. After finishing the setup, specify the ROUGE path in frame/utils/config_loader.py as an attribute of PathParser:

self.rouge_dir = '~/ROUGE-1.5.5/data'  # specify your ROUGE dir

Preparing benchmark data

Since we are not allowed to distribute DUC clusters and summaries, you can request DUC 2005-2007 from NIST. After acquiring the data, gather each year's clusters and summaries under data/duc_cluster and data/duc_summary, respectively. For instance, DUC 2006's clusters and summaries should be found under data/duc_cluster/2006/ and data/duc_summary/2006/, respectively. For DUC queries: you don't have to prepare queries by yourself; we have put 3 json files for DUC 2005-2007 under data/masked_query, which contain a raw query and a masked query for each cluster. Queries will be fetched from these files at test time.

TD-QFS data can be downloaded from here. You can also use the processed version here.

After data preparation, you should have the following directory structure with the right files under each folder:

marge
└───data
│   └───duc_cluster    # DUC clusters
│   └───duc_summary    # DUC reference summaries
│   └───masked_query   # DUC queries (raw and masked)
│   └───tdqfs          # TD-QFS clusters, queries and reference summaries

MaRGE: query modeling

Preparing training data

Source files for building training data are under src/sripts. For each dataset (Multi-News or CNN/DM), there are three steps to create MaRGE training data.

A training sample for MaRGE can be represented as {sentence, masked summary} -> ROUGE(sentence, summary). So we need to compute ROUGE scores for all sentences (step 1), create masked summaries (step 2), and then put them together (step 3); a sketch of the resulting sample format follows the steps below.

  1. Calculate ROUGE scores for all sentences:
python src/sripts/dump_sentence_rouge_mp.py
  2. Build masked summaries:
python src/sripts/mask_summary_with_ratio.py
  3. Build train/val/test datasets:
python src/sripts/build_marge_dataset_mn.py
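
For reference, here is a minimal sketch of what one training sample might look like. It is illustrative only: the repo computes real ROUGE scores with ROUGE-1.5.5, and the field names and toy overlap function below are assumptions, not the scripts' actual output format.

# Illustrative stand-in for a MaRGE training sample; the real pipeline
# uses ROUGE-1.5.5 scores as regression targets, not this toy overlap.
def unigram_recall(sentence, summary):
    sent_tokens = set(sentence.lower().split())
    summ_tokens = summary.lower().split()
    return sum(t in sent_tokens for t in summ_tokens) / max(len(summ_tokens), 1)

sentence = "The storm caused widespread flooding across the region."
summary = "A severe storm led to flooding in several towns."
masked_summary = "A severe [MASK] led to [MASK] in several towns."  # proxy query

sample = {
    "sentence": sentence,              # candidate evidence sentence
    "masked_summary": masked_summary,  # unified query representation
    "target": unigram_recall(sentence, summary),  # regression label
}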

In our experiments, MaRGE trained on data from Multi-News yielded the best performance in query modeling. If you want to build training data from CNN/DM:

  1. Use the function gathered_mp_dump_sentence_cnndm() in the first step (otherwise, use the function gathered_mp_dump_sentence_mn())
  2. Set dataset='cnndm' in the second step (otherwise, dataset='mn')
  3. Use build_marge_dataset_cnndm.py instead for the last step

Model training

Depending on which training data you have built, you can run either one of the following two scripts:

. ./run/run_rr_cnndm.sh   # train MaRGE with data from CNN/DM
. ./run/run_rr_mn.sh  # train MaRGE with data from Multi-News

Configs specified in these two files are used in our experiments, but feel free to change them for further experimentation.

Inference and evaluation

Use src/frame/rr/main.py for DUC evaluation and src/frame/rr/main_tdqfs.py for TD-QFS evaluation. We take DUC evaluation as an example.

In src/frame/rr/main.py, run the following methods in order (or at once):

init()
dump_rel_scores()  # inference with MaRGE
rel_scores2rank()  # turn sentence scores to sentence rank
rr_rank2records()  # take top sentences
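
For instance, a minimal driver that runs them all at once might look like this (a sketch: the import path assumes you run from the project root, so adjust it to your setup):

# Hypothetical driver chaining the MaRGE inference steps in order.
from src.frame.rr.main import init, dump_rel_scores, rel_scores2rank, rr_rank2records

if __name__ == '__main__':
    init()
    dump_rel_scores()  # inference with MaRGE
    rel_scores2rank()  # turn sentence scores into sentence ranks
    rr_rank2records()  # take top sentences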

To evaluate evidence rank, in src/frame/rr/main.py, run:

select_e2e()

MaRGESum: summary generation

Prepare training data from Multi-News

To train a controllable generator, we make the following three changes to the input from Multi-News (and CNN/DM):

  1. Re-order input sentences according to their ROUGE scores, so that the model is biased toward the top-scoring ones:
python scripts/selector_for_train.py
  2. Prepend a summary-length token
  3. Prepend a masked summary (UMR-S); a sketch of the assembled input follows this list
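
As a rough illustration of how such an input could be assembled, here is a minimal sketch; the token formats (<len_250>, [SEP]) are placeholders, not necessarily the vocabulary the repo actually uses:

# Hypothetical assembly of one controllable-generation input.
def build_generator_input(ranked_sentences, masked_summary, target_len=250):
    # 1. sentences already re-ordered by ROUGE score (step 1 above)
    source = " ".join(ranked_sentences)
    # 2. summary-length control token (format is an assumption)
    length_token = "<len_%d>" % target_len
    # 3. masked summary (UMR-S) prepended as a proxy query
    return "%s %s [SEP] %s" % (length_token, masked_summary, source)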

Prepare training data from CNN/DM

Our best generation result is obtained with CNN/DM data. To train MaRGESum on CNN/DM data, apart from the three customizations above, we need an extra step: building a multi-document version of CNN/DM.

This is mainly because the summaries in the original CNN/DM are fairly short, while QFS evaluation requires 250-word outputs. To fix this, we concatenate the summaries of several relevant samples to obtain a sufficiently long summary; the input then becomes the cluster of documents from those relevant samples.

This step uses DrQA to index all summaries in CNN/DM. After indexing, you can use the following script to cluster samples by retrieving similar summaries:

python scripts/build_cnndm_clusters.py
  • TODO: upload the training data, so this multi-document CNN/DM can be used without building it from scratch.
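
For intuition, here is a hedged sketch of what the clustering step does: retrieve samples with similar summaries and concatenate them until the target length is reached. The retrieve_similar callable and the field names are hypothetical stand-ins, not the repo's actual API:

TARGET_LEN = 250  # words needed for QFS-style output

def build_cluster(seed, retrieve_similar):
    """seed: {'summary': str, 'docs': list}; retrieve_similar: callable
    returning neighbours ranked by summary similarity (e.g., from a
    DrQA-style TF-IDF index)."""
    summary_parts = [seed["summary"]]
    docs = list(seed["docs"])
    for neighbour in retrieve_similar(seed["summary"]):
        # stop once the concatenated summary is long enough
        if len(" ".join(summary_parts).split()) >= TARGET_LEN:
            break
        summary_parts.append(neighbour["summary"])
        docs.extend(neighbour["docs"])
    return {"summary": " ".join(summary_parts), "docs": docs}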

Inference and evaluation

Setting up UniLM environment

To evaluate abstractive summarization, you need to set up a UniLM environment following the instructions here.

After setting up UniLM, in src/frame/rr/main.py, run:

build_unilm_input(src='rank')

This turns ranked evidence from MaRGE into MaRGESum input files.

Now you can evaluate the trained UniLM model for development and testing. Go to the UniLM project root, set the correct input directory, and decode the summaries.

  • TODO: add detailed documentation for setting up UniLM.
  • TODO: add detailed documentation for decoding.

To evaluate the output, use the following function in src/frame/rr/main.py:

eval_unilm_out()

You can specify inference configs in src/frame/rr/rr_config.py.

Owner

Yumo Xu, PhD student @EdinburghNLP.