An implementation of `Text2Event: Controllable Sequence-to-Structure Generation for End-to-end Event Extraction`

Overview

[Figure: overview of the Text2Event sequence-to-structure framework]

Update

  • [2021-08-03] Update pre-trained models


Requirements

General

  • Python (verified on 3.8)
  • CUDA (verified on 11.1)

Python Packages

  • see requirements.txt

conda create -n text2event python=3.8
conda activate text2event
pip install -r requirements.txt

Quick Start

Data Format

The data folder contains four files:

data/text2tree/one_ie_ace2005_subtype
├── event.schema
├── test.json
├── train.json
└── val.json

train.json, val.json, and test.json are the data files; each line is a JSON instance. Each instance contains a text field and an event field, where text is the plain input text and event is the linearized event representation (see the sketch after the examples below). If you want to use other key names, it is easy to change the input format in run_seq2seq.py.

{"text": "He also owns a television and a radio station and a newspaper .", "event": "<extra_id_0>  <extra_id_1>"}
{"text": "' ' For us the United Natgions is the key authority '' in resolving the Iraq crisis , Fischer told reporters opn arrival at the EU meeting .", "event": "<extra_id_0> <extra_id_0> Meet meeting <extra_id_0> Entity EU <extra_id_1> <extra_id_1> <extra_id_1>"}

Note:

  • Use T5's extra tokens, such as <extra_id_0> and <extra_id_1>, as structure indicators.

  • event.schema is the event schema file used to build the trie for constrained decoding. It contains three lines: the first line is the event type name list, the second is the event role name list, and the third is the type-to-role dictionary (a parsing sketch follows the example below).

    ["Declare-Bankruptcy", "Convict", ...]
    ["Plaintiff", "Target", ...]
    {"End-Position": ["Place", "Person", "Entity"], ...}
    

Model Training

The training scripts are as follows:

  • run_seq2seq.py: the Python entry point, modified from transformers/examples/seq2seq/run_seq2seq.py.
  • run_seq2seq.bash: model training script that writes output to a log file.
  • run_seq2seq_verbose.bash: the same training script as run_seq2seq.bash, but prints output directly to the screen.
  • run_seq2seq_with_pretrain.bash: model training script for curriculum learning, which combines substructure learning and full structure learning.

The command for the training is as follows (see bash scripts and Python files for the corresponding command-line arguments):

bash run_seq2seq_verbose.bash -d 0 -f tree -m t5-base --label_smoothing 0 -l 1e-4 --lr_scheduler linear --warmup_steps 2000 -b 16
  • -d refers to the GPU device id.
  • -m t5-base refers to using T5-base.
  • Currently, the constrained decoding algorithm supports neither use_fast_tokenizer=True nor beam search.

Trained models are saved in the models/ folder.
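
After training, a saved checkpoint can be loaded for plain (unconstrained, greedy) inference with the standard transformers API. The checkpoint path below is a hypothetical placeholder; the slow T5Tokenizer is used because constrained decoding does not support fast tokenizers:

from transformers import T5Tokenizer, T5ForConditionalGeneration

model_dir = "models/my_text2event_checkpoint"  # hypothetical path for illustration
tokenizer = T5Tokenizer.from_pretrained(model_dir)
model = T5ForConditionalGeneration.from_pretrained(model_dir)

text = "He also owns a television and a radio station and a newspaper ."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=128)
# Keep special tokens: <extra_id_0>/<extra_id_1> carry the tree structure.
print(tokenizer.decode(outputs[0], skip_special_tokens=False))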

Model Evaluation

Offset-level Evaluation

python evaluation.py -g <data-folder-path> -r <offset-folder-path> -p <model-folder-path> -f <data-format>
  • This evaluation script converts the eval_preds_seq2seq.txt and test_preds_seq2seq.txt in the model folder <model-folder-path> into the corresponding offset prediction results for model evaluation.
  • -f <data-format> refers to dyiepp or oneie

Record-level Evaluation (approximate, used in training)

bash run_eval.bash -d 0 -m <model-folder-path> -i <data-folder-path> -c -b 8
  • -d refers to the GPU device id.
  • -c enables constrained decoding; if omitted, decoding is unconstrained.
  • -b 8 represents batch_size=8
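
For intuition, record-level trigger scores can be approximated by set matching over (event type, trigger) pairs. The following is a simplified sketch of the idea, not the repository's scorer:

def trigger_f1(pred_records, gold_records):
    # Each record set contains (event_type, trigger_string) pairs; matching
    # on strings rather than offsets is why this evaluation is approximate.
    pred, gold = set(pred_records), set(gold_records)
    tp = len(pred & gold)
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1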

How to expand to other tasks

  1. Prepare the corresponding data format.
  2. Write the code for reading the corresponding data format: elif data_args.task.startswith("event") in seq2seq.py.
  3. Write the code for evaluating the corresponding task result: def compute_metrics(eval_preds) in seq2seq.py.

Completing the above steps is enough for simple Seq2Seq training and inference.

If you need constrained decoding, you also need to implement the corresponding decoding mode (decoding_format); refer to extraction.extract_constraint.get_constraint_decoder.
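
Conceptually, the constrained decoder walks a trie built from event.schema and, at each generation step, only permits tokens that keep the output a valid tree. The sketch below illustrates the idea with transformers' prefix_allowed_tokens_fn hook; the single hand-written rule is a toy stand-in for the schema trie, not the repository's get_constraint_decoder:

from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")

open_id = tokenizer.convert_tokens_to_ids("<extra_id_0>")
# Toy rule: right after "<extra_id_0>", allow only "<extra_id_1>" or an
# event type name ("Meet"/"Attack" here are illustrative placeholders).
allowed_after_open = tokenizer("<extra_id_1> Meet Attack", add_special_tokens=False).input_ids

def prefix_allowed_tokens_fn(batch_id, input_ids):
    # A real constrained decoder consults the schema trie at every step;
    # this sketch branches only on whether a structure was just opened.
    if input_ids[-1].item() == open_id:
        return allowed_after_open
    return list(range(len(tokenizer)))  # otherwise leave decoding unconstrained

inputs = tokenizer("They met in Baghdad .", return_tensors="pt")
outputs = model.generate(**inputs, prefix_allowed_tokens_fn=prefix_allowed_tokens_fn)
print(tokenizer.decode(outputs[0], skip_special_tokens=False))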

Pre-trained Model

You can find the pre-trained models at the following Google Drive links, or download them with the gdown command (pip install gdown).

dyiepp_ace2005_en_t5_base.zip

gdown --id 1_fOmnSatNfceL9DZPxpof5AT9Oo7vTrC && unzip dyiepp_ace2005_en_t5_base.zip

dyiepp_ace2005_en_t5_large.zip

gdown --id 10iY1obkbgJtTKwfoOFevqL5AwG-hLvhU && unzip dyiepp_ace2005_en_t5_large.zip

oneie_ace2005_en_t5_large.zip

gdown --id 1zwnptRbdZntPT4ucqSANeaJ3vvwKliUe && unzip oneie_ace2005_en_t5_large.zip

oneie_ere_en_t5_large.zip

gdown --id 1WG7-pTZ3K49VMbQIONaDq_0pUXAcoXrZ && unzip oneie_ere_en_t5_large.zip

Event Datasets Preprocessing

We first use the code and environments of [dygiepp] and [oneie v0.4.7] for data preprocessing. Thanks to them!

After data preprocessing, we get the following data files:

 $ tree data/raw_data/
data/raw_data/
├── ace05-EN
│   ├── dev.oneie.json
│   ├── test.oneie.json
│   └── train.oneie.json
├── dyiepp_ace2005
│   ├── dev.json
│   ├── test.json
│   └── train.json
└── ERE-EN
    ├── dev.oneie.json
    ├── test.oneie.json
    └── train.oneie.json

We then convert the above data files to tree format. The following scripts generate the corresponding data folder in data/text2tree. The conversion will automatically generate train/dev/test JSON files and event.schema file.

bash scripts/processing_data.bash
data/text2tree
├── dyiepp_ace2005_subtype
│   ├── event.schema
│   ├── test.json
│   ├── train.json
│   └── val.json
├── dyiepp_ace2005_subtype_span
│   ├── event.schema
│   ├── test.json
│   ├── train.json
│   └── val.json
├── one_ie_ace2005_subtype
│   ├── event.schema
│   ├── test.json
│   ├── train.json
│   └── val.json
├── one_ie_ace2005_subtype_span
│   ├── event.schema
│   ├── test.json
│   ├── train.json
│   └── val.json
├── one_ie_ere_en_subtype
│   ├── event.schema
│   ├── test.json
│   ├── train.json
│   └── val.json
└── one_ie_ere_en_subtype_span
    ├── event.schema
    ├── test.json
    ├── train.json
    └── val.json
  • dyiepp_ace2005_subtype is used for Full Structure Learning, and dyiepp_ace2005_subtype_span is used for Substructure Learning.

Citation

If this repository helps you, please cite this paper:

Yaojie Lu, Hongyu Lin, Jin Xu, Xianpei Han, Jialong Tang, Annan Li, Le Sun, Meng Liao, Shaoyi Chen. Text2Event: Controllable Sequence-to-Structure Generation for End-to-end Event Extraction. The Joint Conference of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (ACL-IJCNLP 2021).

@inproceedings{lu-etal-2021-text2event,
    title = "{T}ext2{E}vent: Controllable Sequence-to-Structure Generation for End-to-end Event Extraction",
    author = "Lu, Yaojie  and
      Lin, Hongyu  and
      Xu, Jin  and
      Han, Xianpei  and
      Tang, Jialong  and
      Li, Annan  and
      Sun, Le  and
      Liao, Meng  and
      Chen, Shaoyi",
    booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)",
    month = aug,
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.acl-long.217",
    pages = "2795--2806",
    abstract = "Event extraction is challenging due to the complex structure of event records and the semantic gap between text and event. Traditional methods usually extract event records by decomposing the complex structure prediction task into multiple subtasks. In this paper, we propose Text2Event, a sequence-to-structure generation paradigm that can directly extract events from the text in an end-to-end manner. Specifically, we design a sequence-to-structure network for unified event extraction, a constrained decoding algorithm for event knowledge injection during inference, and a curriculum learning algorithm for efficient model learning. Experimental results show that, by uniformly modeling all tasks in a single model and universally predicting different labels, our method can achieve competitive performance using only record-level annotations in both supervised learning and transfer learning settings.",
}