Code and data for the paper BERT Got a Date: Introducing Transformers to Temporal Tagging

Overview

BERT Got a Date: Introducing Transformers to Temporal Tagging

Satya Almasian*, Dennis Aumiller*, and Michael Gertz
Heidelberg University
Contact us via: <lastname>@informatik.uni-heidelberg.de

Code and data for the paper BERT Got a Date: Introducing Transformers to Temporal Tagging. Temporal tagging is the task of identifying temporal mentions in text; these expressions can be further divided into different type categories, which is what we refer to as expression (type) classification. This repository describes two different types of transformer-based temporal taggers, both of which are additionally capable of expression classification. We follow the TIMEX3 schema definitions in their styling and expression classes (notably, the latter are one of TIME, DATE, SET, DURATION). The available data sources for temporal tagging are in the TimeML format, which is essentially a form of XML with tags encapsulating temporal expressions.
An example can be seen below:

Due to lockdown restrictions, 2020 might go down as the worst economic year in over <TIMEX3 tid="t2" type="DURATION" value="P1DE">a decade</TIMEX3>.

For more data instances, look at the content of data.zip. Refer to the README file in the respective unzipped folder for more information.
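To illustrate the annotation format only, the following is a minimal sketch of how TIMEX3 spans and their attributes could be pulled out of a TimeML-style string. The regex-based extraction is our own illustration and is not part of the repository's data_preparation code:

import re

# Illustrative extraction of TIMEX3 annotations from a TimeML-style string.
TIMEX3_PATTERN = re.compile(r'<TIMEX3([^>]*)>(.*?)</TIMEX3>', re.DOTALL)
ATTR_PATTERN = re.compile(r'(\w+)="([^"]*)"')

def extract_timex3(annotated_text):
    """Return a list of (surface_text, attribute_dict) pairs, one per TIMEX3 tag."""
    mentions = []
    for attrs, surface in TIMEX3_PATTERN.findall(annotated_text):
        mentions.append((surface, dict(ATTR_PATTERN.findall(attrs))))
    return mentions

example = ('Due to lockdown restrictions, 2020 might go down as the worst economic '
           'year in over <TIMEX3 tid="t2" type="DURATION" value="P1DE">a decade</TIMEX3>.')
print(extract_timex3(example))
# [('a decade', {'tid': 't2', 'type': 'DURATION', 'value': 'P1DE'})]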
This repository contains code for data preparation and training of a seq2seq model (an encoder-decoder architecture initialized from encoder-only checkpoints, specifically BERT or RoBERTa), as well as three token classification encoders (BERT-based).
The output of the models discussed in the paper is in the results folder. Refer to the README file in the folder for more information.

Data Preparation

The scripts to generate training data are in the subfolder data_preparation. For more usage information, refer to the README file in the subfolder. The data used for training and evaluation are provided in zipped form in data.zip.

Evaluation

For evaluation, we use a slightly modified version of the TempEval-3 evaluation toolkit (original source here). We refactored the code to be compatible with Python 3 and incorporated additional evaluation metrics, such as a confusion matrix for type classification. We cross-referenced results to ensure full backward compatibility; both versions produce exactly the same results on all runs. Our adjusted code, as well as scripts to convert the output of the transformer-based tagging models, are in the evaluation subfolder. For more usage information, refer to the README file in the respective subfolder.

Temporal models

We train and evaluate two types of setups for joint temporal tagging and classification:

  • Token Classification: We define three variants of simple token classifiers; all of them are based on Huggingface's BertForTokenClassification. We adapt their "token classification for named entity recognition script" to train these models. All the models are trained using bert-base-uncased as their pre-trained checkpoint.
  • Text-to-Text Generation (Seq2Seq): These models are encoder-decoder architectures using BERT or RoBERTa for their initial weights. We use Huggingface's EncoderDecoder class to initialize the weights, starting from bert-base-uncased and roberta-base, respectively (see the initialization sketch below).
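
For illustration, the snippet below sketches how the two setups could be initialized with standard Huggingface classes. The checkpoint names match the paper; the label count and any training details are placeholders, not the repository's exact configuration:

from transformers import BertForTokenClassification, EncoderDecoderModel

# Token classification setup: BERT with a token-level classification head.
# num_labels is a placeholder; the actual label set comes from labels.txt (see Data Preparation).
token_model = BertForTokenClassification.from_pretrained("bert-base-uncased", num_labels=9)

# Seq2seq setup: warm-start an encoder-decoder model from two encoder-only checkpoints.
seq2seq_model = EncoderDecoderModel.from_encoder_decoder_pretrained(
    "bert-base-uncased", "bert-base-uncased"  # or "roberta-base", "roberta-base"
)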

Seq2seq

To train the seq2seq models, use run_seq2seq_bert_roberta.py. Example usage is as follows:

python3 run_seq2seq_bert_roberta.py --model_name roberta-base --pre_train True \
--model_dir ./test --train_data ./data/seq2seq/train/tempeval_train.json \
--eval_data ./data/seq2seq/test/tempeval_test.json --num_gpu 2 --num_train_epochs 1 \
--warmup_steps 100 --seed 0 --eval_steps 200

This trains a roberta2roberta model defined by model_name for num_train_epochs epochs on the GPU with ID num_gpu. The random seed is set by seed and the number of warmup steps by warmup_steps. Training data is specified via train_data, and model_dir defines where the model is saved. Set eval_data if you want intermediate evaluation at intervals defined by eval_steps. If the pre_train flag is set to true, the script loads the checkpoints from the Huggingface hub and fine-tunes on the given dataset. If pre_train is false, we are in fine-tuning mode, and you can provide the path to a pre-trained model with pretrain_path. We used the pre_train mode to train on weakly labeled data provided by the rule-based system HeidelTime, and set pre_train to false for fine-tuning on the benchmark datasets. If you simply wish to fine-tune on the benchmark datasets using the Huggingface checkpoints, you can set pre_train to true, as shown in the example above. For additional arguments such as the length penalty, the number of beams, early stopping, and other model specifications, please refer to the script.
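
As an illustration of the fine-tuning mode described above, an invocation could look like the following; the flags mirror the description above, while the paths, GPU ID, and epoch count are placeholders:

python3 run_seq2seq_bert_roberta.py --model_name roberta-base --pre_train False \
--pretrain_path ./weakly_supervised_model --model_dir ./fine_tuned \
--train_data ./data/seq2seq/train/tempeval_train.json \
--eval_data ./data/seq2seq/test/tempeval_test.json --num_gpu 2 --num_train_epochs 3 \
--warmup_steps 100 --seed 0 --eval_steps 200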

Token Classifiers

As mentioned above, all token classifiers are trained using an adaptation of the NER script from Huggingface. To train these models, use
run_token_classifier.py as in the following example:

python3 run_token_classifier.py --data_dir /data/temporal/BIO/wikiwars \
--labels ./data/temporal/BIO/train_staging/labels.txt \
--model_name_or_path bert-base-uncased \
--output_dir ./fine_tune_wikiwars/bert_tagging_with_date_no_pretrain_8epochs/bert_tagging_with_date_layer_seed_19 --max_seq_length 512 \
--num_train_epochs 8 --per_device_train_batch_size 34 --save_steps 3000 --logging_steps 300 --eval_steps 3000 \
--do_train --do_eval --overwrite_output_dir --seed 19 --model_date_extra_layer

We used bert-base-uncased as the base of all our token classification models for pre-training, as defined by model_name_or_path. For fine-tuning on the datasets, model_name_or_path should point to the path of the pre-trained model. The labels file is created during data preparation; for more information, refer to that subfolder. data_dir points to a folder that contains train.txt, test.txt, and dev.txt, and output_dir points to the saving location. You can define the number of epochs with num_train_epochs, set the seed with seed, and set the batch size per GPU with per_device_train_batch_size. For more information on the parameters, refer to the Huggingface script. In our paper, we introduce three variants of token classification, which are selected by flags in the script. If no flag is set, the script trains the vanilla BERT for token classification. The flag model_date_extra_layer trains the model with an extra date layer, and model_crf adds the extra CRF layer. To train the extra date embedding, you need to download the vocabulary file and specify its path in the date_vocab argument. The description and model definitions of the BERT variants are in the folder temporal_models; please refer to its README file for further information. When training different model types on the same data, make sure to remove the cached dataset, since feature generation differs between model types.
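
For illustration, the CRF variant could be trained with an invocation like the following; only the --model_crf flag differs from the vanilla setup, and the paths and hyperparameters shown are placeholders:

python3 run_token_classifier.py --data_dir ./data/temporal/BIO/tempeval \
--labels ./data/temporal/BIO/train_staging/labels.txt \
--model_name_or_path bert-base-uncased \
--output_dir ./fine_tune_tempeval/bert_crf_seed_19 --max_seq_length 512 \
--num_train_epochs 8 --per_device_train_batch_size 34 --save_steps 3000 --logging_steps 300 \
--eval_steps 3000 --do_train --do_eval --overwrite_output_dir --seed 19 --model_crf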

Load directly from the Huggingface Model Hub

We uploaded our best-performing version of each architecture to the Huggingface Model Hub. The weights for the other four seeding runs are available upon request. We uploaded the variants that were fine-tuned on the concatenation of all three evaluation sets for better generalization to various domains. The token classification models are variants without pre-training. Both seq2seq models are pre-trained on the weakly labeled corpus and fine-tuned on the mixed data.

Overall, we upload the following five models. For other model configurations and checkpoints, please get in contact with us:

  • satyaalmasian/temporal_tagger_roberta2roberta: Our best-performing model from the paper, an encoder-decoder architecture using RoBERTa. The model is pre-trained on weakly labeled news articles, tagged with HeidelTime, and fine-tuned on the train sets of TempEval-3, Tweets, and Wikiwars.
  • satyaalmasian/temporal_tagger_bert2bert: Our second seq2seq model, an encoder-decoder architecture using BERT. The model is pre-trained on weakly labeled news articles, tagged with HeidelTime, and fine-tuned on the train sets of TempEval-3, Tweets, and Wikiwars.
  • satyaalmasian/temporal_tagger_BERT_tokenclassifier: BERT for token classification, the vanilla BERT model from the paper. This model is only trained on the train sets of TempEval-3, Tweets, and Wikiwars.
  • satyaalmasian/temporal_tagger_DATEBERT_tokenclassifier: BERT for token classification with an extra date embedding that encodes the reference date of the document. If the document does not have a reference date, it is best to avoid this model. Moreover, since the architecture is a modification of a default Huggingface model, the usage is not as straightforward and requires the classes defined in the temporal_models module. This model is only trained on the train sets of TempEval-3, Tweets, and Wikiwars.
  • satyaalmasian/temporal_tagger_BERTCRF_tokenclassifier: BERT for token classification with a CRF layer on the output. Moreover, since the architecture is a modification of a default Huggingface model, the usage is not as straightforward and requires the classes defined in the temporal_models module. This model is only trained on the train sets of TempEval-3, Tweets, and Wikiwars.

In the examples module, you will find two scripts, model_hub_seq2seq_examples.py and model_hub_tokenclassifiers_examples.py, for seq2seq and token classification examples using the Huggingface model hub. The examples load the models and use them to tag example sentences. The seq2seq example uses the pre-defined post-processing from the TempEval evaluation and contains rules for the cases we came across in the benchmark datasets. If you plan to use these models on new data, it is best to inspect the raw output of the first few samples to detect possible format problems that are easily fixable. Further fine-tuning of the models is also possible. For seq2seq models, you can simply load the models with

from transformers import AutoTokenizer, EncoderDecoderModel

tokenizer = AutoTokenizer.from_pretrained("satyaalmasian/temporal_tagger_roberta2roberta")
model = EncoderDecoderModel.from_pretrained("satyaalmasian/temporal_tagger_roberta2roberta")
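
With the model loaded, tagging a sentence can be sketched as follows. This is our illustration rather than the repository's exact inference code, and it assumes the uploaded checkpoint's generation configuration is already set; see the post-processing note above for cleaning up the raw output:

input_text = "Due to lockdown restrictions, 2020 might go down as the worst economic year in over a decade."
model_inputs = tokenizer(input_text, truncation=True, return_tensors="pt")
generated = model.generate(**model_inputs)
# The raw decoded output should contain the input text with TIMEX3 tags inserted.
print(tokenizer.batch_decode(generated, skip_special_tokens=True))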

To fine-tune these models, use the DataProcessor from temporal_models.seq2seq_utils to preprocess the JSON dataset. The model can then be fine-tuned using Seq2SeqTrainer (the same as in run_seq2seq_bert_roberta.py). For token classifiers, the model and the tokenizer are loaded as follows:

from transformers import AutoTokenizer, BertForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("satyaalmasian/temporal_tagger_BERT_tokenclassifier", use_fast=False)
model = BertForTokenClassification.from_pretrained("satyaalmasian/temporal_tagger_BERT_tokenclassifier")

Token classifiers need a BIO-tagged file that can be loaded using TokenClassificationDataset and fine-tuned with the Huggingface Trainer. For more information on the usage of these models, refer to their model hub pages.
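
As a minimal inference sketch (our illustration; the example sentence is arbitrary, and the label names come from the model's config, following the BIO scheme defined during data preparation):

import torch

sentence = "The meeting was postponed until next Tuesday."
inputs = tokenizer(sentence, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
# Pick the highest-scoring label per token and map label IDs back to label names.
predicted_ids = logits.argmax(dim=-1)[0].tolist()
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for token, label_id in zip(tokens, predicted_ids):
    print(token, model.config.id2label[label_id])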

Citation

If you use our models in your work, we would appreciate attribution with the following citation:

@article{almasian2021bert,
  title={{BERT got a Date: Introducing Transformers to Temporal Tagging}},
  author={Almasian, Satya and Aumiller, Dennis and Gertz, Michael},
  journal={arXiv},
  year={2021}
}