Codebase for the Summary Loop paper at ACL2020

Overview

Summary Loop

This repository contains the code for the ACL 2020 paper: The Summary Loop: Learning to Write Abstractive Summaries Without Examples.

Training Procedure

We provide pre-trained models for each component needed by the Summary Loop in the release:

  • keyword_extractor.joblib: An sklearn pipeline that can be used to compute tf-idf scores of words according to the BERT vocabulary, which is used by the Masking Procedure (a loading sketch follows this list),
  • bert_coverage.bin: A bert-base-uncased model finetuned on the task of Coverage for the news domain,
  • fluency_news_bs32.bin: A GPT2 (base) model finetuned on a large corpus of news articles, used as the Fluency model,
  • gpt2_copier23.bin: A GPT2 (base) model that can be used as an initial point for the Summarizer model.
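
A minimal sketch of loading the keyword extractor is shown below. It assumes the pipeline exposes the usual sklearn transform() interface over raw documents, which is not confirmed here; model_coverage.py shows how the repository actually consumes it.

# Hedged sketch: load the released tf-idf keyword extractor with joblib.
# ASSUMPTION: the pipeline exposes a TfidfVectorizer-style transform() over raw text.
import joblib

keyword_extractor = joblib.load("models/keyword_extractor.joblib")
document = "The central bank raised interest rates for the third time this year."
tfidf_scores = keyword_extractor.transform([document])  # sparse row over the BERT vocabulary
print(tfidf_scores.shape)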

In the release, we also provide:

  • pretrain_coverage.py script to train a coverage model from scratch,
  • train_generator.py to train a Fluency model from scratch (we recommend training the Fluency model on the target domain of the summaries, such as news, legal, etc.).

Once all the pretrained models are ready, a summarizer can be trained using train_summary_loop.py:

python train_summary_loop.py --experiment wikinews_test --dataset_file data/wikinews.db

Scorer Models

The Coverage, Fluency, and Guardrail scores can be used separately for analysis, evaluation, etc. The Coverage model lives in model_coverage.py and the Guardrails in model_guardrails.py; each model is implemented as a class with a score(document, summary) function. The Fluency model is a language model, which is also the generator (in model_generator.py). Examples of how to run each model are included at the bottom of the class files.
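
As a rough illustration, a coverage score for one document/summary pair could be computed as sketched below. The KeywordCoverage class and its constructor arguments follow the usage quoted in the issue threads further down, and the score(document, summary) signature follows the description above, but both should be checked against the example at the bottom of model_coverage.py.

# Hedged sketch: score one (document, summary) pair with the Coverage model.
# ASSUMPTIONS: class/argument names mirror usage quoted in the issues below;
# the exact arguments and return value of score() are not confirmed here.
from model_coverage import KeywordCoverage

coverage = KeywordCoverage("cuda",
                           keyword_model_file="models/keyword_extractor.joblib",
                           model_file="models/bert_coverage.bin")

document = "Full text of a news article ..."
summary = "A candidate summary of the article."
print(coverage.score(document, summary))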

Bringing in your own data

Want to test out the Summary Loop on a different language or type of text? A Jupyter Notebook can help you bring your own data into the SQLite format we use in the pre-training scripts. Otherwise, you can modify the scripts' data loading (DataLoader) and collate function (collate_fn).
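
If you prefer to skip the notebook, an SQLite file along these lines can be built directly with sqlite3; note that the table and column names below ("articles", "body") are illustrative placeholders, not the schema the scripts expect, so check the notebook or the DataLoader code for the actual format.

# Hedged sketch: pack raw documents into an SQLite file.
# ASSUMPTION: the "articles"/"body" schema is illustrative only; the schema
# expected by the training scripts should be taken from the provided notebook.
import sqlite3

documents = ["First document text ...", "Second document text ..."]

conn = sqlite3.connect("data/my_dataset.db")
conn.execute("CREATE TABLE IF NOT EXISTS articles (id INTEGER PRIMARY KEY, body TEXT)")
conn.executemany("INSERT INTO articles (body) VALUES (?)", [(d,) for d in documents])
conn.commit()
conn.close()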

Cite the work

If you make use of the code, models, or algorithm, please cite our paper:

@inproceedings{laban2020summary,
  title={The Summary Loop: Learning to Write Abstractive Summaries Without Examples},
  author={Laban, Philippe and Hsi, Andrew and Canny, John and Hearst, Marti A},
  booktitle={Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics},
  volume={1},
  year={2020}
}

Contributing

If you'd like to contribute, or have questions or suggestions, you can contact us at [email protected]. All contributions are welcome, for example if you have a type of text data on which you want to apply the Summary Loop.

Comments
  • Error Loading Model  RuntimeError: Error(s) in loading state_dict for GPT2LMHeadModel:

    Traceback (most recent call last):
      File "train_summary_loop.py", line 59, in <module>
        summarizer = GeneTransformer(max_output_length=args.max_output_length, device=args.device, tokenizer_type='gpt2', starter_model=summarizer_model_start)
      File "/home/tait-dev-0/summary_loop/summary_loop/model_generator.py", line 30, in __init__
        self.reload(starter_model)
      File "/home/tait-dev-0/summary_loop/summary_loop/model_generator.py", line 39, in reload
        print(self.model.load_state_dict(torch.load(from_file)))
      File "/home/tait-dev-0/anaconda2/envs/summary_loop/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1045, in load_state_dict
        self.__class__.__name__, "\n\t".join(error_msgs)))
    RuntimeError: Error(s) in loading state_dict for GPT2LMHeadModel:
    	Missing key(s) in state_dict: "transformer.h.0.attn.masked_bias", "transformer.h.1.attn.masked_bias", "transformer.h.2.attn.masked_bias", "transformer.h.3.attn.masked_bias", "transformer.h.4.attn.masked_bias", "transformer.h.5.attn.masked_bias", "transformer.h.6.attn.masked_bias", "transformer.h.7.attn.masked_bias", "transformer.h.8.attn.masked_bias", "transformer.h.9.attn.masked_bias", "transformer.h.10.attn.masked_bias", "transformer.h.11.attn.masked_bias". 
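
    The masked_bias entries appear to be constant attention buffers added in newer transformers releases rather than learned weights. Release 0.2 below re-exports the initial models for transformers 3.1.0; as a hedged workaround on other versions (an assumption, not the repository's confirmed fix), the checkpoint can be loaded non-strictly:

    # Hedged workaround sketch (assumption, not the repository's official fix):
    # a non-strict load keeps the default masked_bias buffers and drops the error.
    import torch
    from transformers import GPT2LMHeadModel

    model = GPT2LMHeadModel.from_pretrained("gpt2")
    state_dict = torch.load("models/gpt2_copier23.bin", map_location="cpu")
    print(model.load_state_dict(state_dict, strict=False))  # missing masked_bias keys are ignored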
    
    
    opened by raviolli 6
  • Missing models for training

    Dear author, I tried to load the fluency_news_model_file models but failed. It seems that the "news_gpt2_bs32.bin" is not provided in the release.

    I tried to replace it with "fluency_news_bs32.bin", but it does not seem to match the GeneTransformer. I.e. when I tried to load the fluency model using modelf=GeneTransformer(max_output_length=args.max_output_length, device=args.device, starter_model=fluency_news_model_file) it shows "IncompatibleKeys(missing_keys=['transformer.h.0.attn.masked_bias', 'transformer.h.1.attn.masked_bias', 'transformer.h.2.attn.masked_bias', 'transformer.h.3.attn.masked_bias', 'transformer.h.4.attn.masked_bias', 'transformer.h.5.attn.masked_bias', 'transformer.h.6.attn.masked_bias', 'transformer.h.7.attn.masked_bias', 'transformer.h.8.attn.masked_bias', 'transformer.h.9.attn.masked_bias', 'transformer.h.10.attn.masked_bias', 'transformer.h.11.attn.masked_bias'], unexpected_keys=[]) "

    Is this fine?

    In addition, when I tried to load the key word coverage model, the keys do not match either I.e. When running modelc = KeywordCoverage(args.device, keyword_model_file=coverage_keyword_model_file, model_file=coverage_model_file)} It shows IncompatibleKeys(missing_keys=['bert.embeddings.position_ids', 'cls.predictions.decoder.bias'], unexpected_keys=[])

    Wondering how I could deal with this situation

    opened by pengshancai 2
  • IndexError when decode with beam_size > 1

    Followed the instruction from here and changed the beam_size to more than 1. IndexError occur:

    ~/summary_loop/model_generator.py in decode(self, bodies, max_output_length, max_batch_size, beam_size, return_scores, sample, progress)
        232             with torch.no_grad():
        233                 if beam_size > 1:
    --> 234                     batch_outputs = self.decode_beam_batch(batch_bodies, beam_size=beam_size, max_output_length=max_output_length, sample=sample)
        235                 else:
        236                     batch_outputs = self.decode_batch(batch_bodies, max_output_length=max_output_length, sample=sample, return_scores=return_scores)
    
    ~/summary_loop/model_generator.py in decode_beam_batch(self, bodies, beam_size, max_output_length, sample)
        200             if build_up is not None:
        201                 build_up = build_up[tracks, :]
    --> 202             past = [p[:, tracks, :] for p in past]
        203 
        204             # Update the latest scores, and the current_build
    
    ~/summary_loop/model_generator.py in <listcomp>(.0)
        200             if build_up is not None:
        201                 build_up = build_up[tracks, :]
    --> 202             past = [p[:, tracks, :] for p in past]
        203 
        204             # Update the latest scores, and the current_build
    
    IndexError: tensors used as indices must be long, byte or bool tensors
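
    One hedged reading of this error (an assumption, not verified against the repository): the tracks tensor used to reorder past has an integer dtype that PyTorch indexing rejects, and casting it to long before indexing avoids the error. A self-contained illustration:

    # Illustration of the failing pattern: advanced indexing expects an int64
    # (long), uint8, or bool index tensor on the PyTorch versions where this
    # issue appears; casting the index to long works on all versions.
    import torch

    past_layer = torch.randn(2, 4, 8)                 # stand-in for one element of `past`
    tracks = torch.tensor([1, 3], dtype=torch.int32)  # hypothetical non-long index tensor
    reordered = past_layer[:, tracks.long(), :]       # .long() makes the index valid
    print(reordered.shape)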
    
    opened by s103321048 2
  • cannot reshape tensor of 0 elements into shape [-1, 0]

    I followed the instructions for training a model with the provided example wikinews.db: python train_summary_loop.py --experiment wikinews_test --dataset_file data/wikinews.db

    It did start training but later stopped due to a RuntimeError:

    Traceback (most recent call last):
      File "train_summary_loop.py", line 138, in <module>
        sampled_summaries, sampled_logprobs, sampled_tokens, input_past, sampled_end_idxs = summarizer.decode_batch(bodies, max_output_length=args.max_output_length, return_logprobs=True, sample=True)
      File "/home/robin/TrySomethingNew/summary_loop/model_generator.py", line 100, in decode_batch
        _, input_past = self.model(input_ids=inputs, past_key_values=None)
      File "/home/robin/virtual-env/summary-loop/lib/python3.6/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
        result = self.forward(*input, **kwargs)
      File "/home/robin/virtual-env/summary-loop/lib/python3.6/site-packages/transformers/modeling_gpt2.py", line 731, in forward
        return_dict=return_dict,
      File "/home/robin/virtual-env/summary-loop/lib/python3.6/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
        result = self.forward(*input, **kwargs)
      File "/home/robin/virtual-env/summary-loop/lib/python3.6/site-packages/transformers/modeling_gpt2.py", line 533, in forward
        input_ids = input_ids.view(-1, input_shape[-1])
    RuntimeError: cannot reshape tensor of 0 elements into shape [-1, 0] because the unspecified
    dimension size -1 can be any value and is ambiguous
    
    opened by s103321048 2
  • Code for summary generation from the given model is not provided

    You mentioned "Releasing the 11,490 summaries generated by the Summary Loop model (summary_loop_length46.bin) on the CNN/DM test set." and provided the json file "cnndm_test_summary_loop.json". Is there any code to get the json file(summaries) from the given model(.bin). If you have such code, then please share.
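
    A hedged sketch of what such generation code could look like, reusing the GeneTransformer constructor and decode() call quoted elsewhere on this page (the target length value and the exact return format of decode() are assumptions):

    # Hedged sketch (not the authors' released generation script): load a trained
    # summarizer checkpoint and decode summaries for a list of article bodies.
    from model_generator import GeneTransformer

    summarizer = GeneTransformer(max_output_length=46, device="cuda",
                                 tokenizer_type="gpt2",
                                 starter_model="models/summary_loop_length46.bin")

    articles = ["Full text of a CNN/DM test article ..."]
    summaries = summarizer.decode(articles, max_output_length=46, beam_size=1)
    print(summaries[0])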

    opened by tarunyadav 1
  • Resuming training

    Is resuming training simply starting from the checkpoint instead of the gpt2 copier bin?

    For example:

    #summarizer_model_start = os.path.join(models_folder, "gpt2_copier23.bin")
    summarizer_model_start = os.path.join(models_folder, "summarizer_wikinews_test_0_ckpt.bin")
    
    opened by RevanthRameshkumar 1
  • Encoding error in bin file

    (dlenv) D:\summary loop\summary_loop-0.1>python summary_loop_length10.bin --experiment wikinews_test --dataset_file data/wikinews.db
      File "summary_loop_length10.bin", line 1
    SyntaxError: Non-UTF-8 code starting with '\x80' in file summary_loop_length10.bin on line 1, but no encoding declared; see http://python.org/dev/peps/pep-0263/ for details

    Anyone else get this issue? Currently debugging

    opened by RevanthRameshkumar 1
  • a sample of data in hdf5 format

    Hi,

    I'm trying to train the models from scratch since I'd like to use them on a different language. It seems that one needs a dataset in hdf5 format instead of SQL to do that. Can you please release a sample of data in hdf5 format?

    Thanks

    opened by azagsam 1
  • Missing Model to run example

    I'm trying to run the example:

    python train_summary_loop.py --experiment wikinews_test --dataset_file ../data/wikinews.db --root_folder ../ --device cuda

    but it seems I'm missing the ../models/fluency_news_bs32.bin

    it doesn't seem to be in the list of downloadable models. Mistake??

    opened by raviolli 1
  • Error running training_summary example

    python train_summary_loop.py --experiment wikinews_test --dataset_file ../data/wikinews.db

    Traceback (most recent call last):
      File "train_summary_loop.py", line 56, in <module>
        bert_tokenizer = utils_tokenizer.BERTCacheTokenizer()
      File "/home/tait-dev-0/summary_loop/summary_loop/utils_tokenizer.py", line 88, in __init__
        self.tokenizer.max_len = 10000
    AttributeError: can't set attribute
    

    transformers 3.0.2 py_0 conda-forge

    I created a separate conda environment. Is this a transformer version issue?

    opened by raviolli 1
  • updated torch.load params

    Updated occurrences of torch.load to include map_location parameter. When attempting to train with --device set to cpu, torch.load may attempt to load a file with GPU tensors, which will lead to loading to GPU by default (see: https://pytorch.org/docs/stable/generated/torch.load.html). If --device is set to cpu, this will error on a cpu-only machine. Otherwise, it will go against desired functionality. This pull request resolves this issue.
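
    For reference, the pattern this pull request describes is the standard torch.load call with an explicit map_location (the file path below is illustrative):

    # Pass map_location so checkpoints saved from GPU can be loaded onto the
    # device requested by --device, including cpu-only machines.
    import torch

    checkpoint = torch.load("models/gpt2_copier23.bin", map_location=torch.device("cpu"))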

    opened by bsh98 0
Releases(0.3)
  • 0.3(Jun 11, 2021)

    Releasing the 11,490 summaries generated by the Summary Loop model (summary_loop_length46.bin) on the CNN/DM test set. Each summary is released together with its CNN/DM id. The following code snippet can be used to compute ROUGE scores:

    from datasets import load_dataset, load_metric
    import json

    # Load the released Summary Loop generations, keyed by CNN/DM article id.
    with open("/home/phillab/data/cnndm_test_summary_loop.json", "r") as f:
        summary_loop_gens = json.load(f)

    rouge = load_metric("rouge")
    dataset_test = load_dataset("cnn_dailymail", "3.0.0")["test"]

    # Align each generated summary with the reference highlights of its article.
    id2summary_loop = {d["id"]: d["summary_loop_gen"] for d in summary_loop_gens}
    candidates, references = [], []
    for d in dataset_test:
        references.append(d["highlights"])
        candidates.append(id2summary_loop[d["id"]])

    print(len(references), len(candidates))
    print(rouge.compute(predictions=candidates, references=references))
    

    Notes: (1) this relies on HuggingFace's datasets library (https://github.com/huggingface/datasets) to load the CNN/DM dataset and the ROUGE metric. (2) The ROUGE implementation used in the example above is not the original PERL-based implementation used for the official numbers in the paper; it is included for demonstration purposes, to show how to use the file.

    Source code(tar.gz)
    Source code(zip)
    cnndm_test_summary_loop.json(3.40 MB)
  • 0.2(Sep 8, 2020)

    We release an upgraded set of initial models for the training script that are compatible with transformers==3.1.0, to make it easier to get started. The original release (0.1) used version 2.8.0 of transformers, and some breaking changes have been introduced since, which cause some model loading to fail. The requirements.txt in the latest release has been updated with compatible library versions to simplify installation.

    Initial Models

    This set of models works with Python 3.6.10, Transformers 3.1.0, and Sklearn 0.22.1:

    • keyword_extractor.joblib: An sklearn pipeline that can be used to compute tf-idf scores of words according to the BERT vocabulary, which is used by the Masking Procedure,
    • bert_coverage.bin: A bert-base-uncased model finetuned on the task of Coverage for the news domain,
    • fluency_news_bs32.bin: A GPT2 (base) model finetuned on a large corpus of news articles, used as the Fluency model,
    • gpt2_copier23.bin: A GPT2 (base) model that can be used as an initial point for the Summarizer model.

    Final Models

    Unfortunately, the three final models (trained summarizers) released in v0.1 do not work anymore in the latest transformers library, and only work in versions 2.8.0 and before. Once we retrain these models, we will reupload them. If this is of interest, feel free to add an issue or contact us.

    Source code(tar.gz)
    Source code(zip)
    bert_coverage.bin(420.06 MB)
    fluency_news_bs32.bin(486.73 MB)
    gpt2_copier23.bin(633.97 MB)
    keyword_extractor.joblib(667.33 KB)
  • v0.1(Jun 25, 2020)

    We release models and data needed to run the Summary Loop and use the models we trained.

    Initial models

    Here are the models needed to run train_summary_loop.py:

    • keyword_extractor.joblib: An sklearn pipeline that can be used to compute tf-idf scores of words according to the BERT vocabulary, which is used by the Masking Procedure,
    • bert_coverage.bin: A bert-base-uncased model finetuned on the task of Coverage for the news domain,
    • fluency_news_bs32.bin: A GPT2 (base) model finetuned on a large corpus of news articles, used as the Fluency model,
    • gpt2_copier23.bin: A GPT2 (base) model that can be used as an initial point for the Summarizer model.

    Sample dataset

    We release a sample dataset of Wikinews news articles to get researchers started using the Summary Loop: wikinews.db. We cannot release the full dataset we used for copyright reasons. We note that we do not expect this to be enough to train to best performance, and recommend finding larger datasets (such as Newsroom or CNN/DM) for full-fledged training.

    Final models

    We release 3 Summarizer models obtained through the Summary Loop procedure for 3 target lengths: summary_loop_length10.bin, summary_loop_length24.bin, summary_loop_length46.bin.

    Source code(tar.gz)
    Source code(zip)
    bert_coverage.bin(420.06 MB)
    fluency_news_bs32.bin(522.73 MB)
    gpt2_copier23.bin(633.97 MB)
    keyword_extractor.joblib(667.33 KB)
    summary_loop_length10.bin(522.73 MB)
    summary_loop_length24.bin(522.73 MB)
    summary_loop_length46.bin(522.73 MB)
    wikinews.db(91.20 MB)
Owner
Canny Lab @ The University of California, Berkeley