Learning Logic Rules for Document-Level Relation Extraction

Overview

We propose to introduce logic rules to tackle the challenges of document-level relation extraction (doc-level RE).

Equipped with logic rules, our LogiRE framework not only explicitly captures long-range semantic dependencies, but also offers better interpretability.

We combine logic rules with the outputs of neural networks for relation extraction.

[Figure: an example document in which a relation is derived from other relations via a logic rule]

As shown in the example, the relation between Kate and Britain can be identified from the other relations using the listed logic rule.
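
A rule of this kind composes local relations into a long-range one. The rule below is a hypothetical example of that shape, not necessarily the exact rule from the figure:

    citizen_of(X, Z) ← spouse(X, Y) ∧ citizen_of(Y, Z)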

An overview of the LogiRE framework is shown below.

[Figure: overview of the LogiRE framework]

Data

  • Download the preprocessing script and metadata

    DWIE
    ├── data
    │   ├── annos
    │   └── annos_with_content
    ├── en_core_web_sm-2.3.1
    │   ├── build
    │   ├── dist
    │   ├── en_core_web_sm
    │   ├── en_core_web_sm.egg-info
    │   ├── MANIFEST.in
    │   ├── meta.json
    │   ├── PKG-INFO
    │   ├── setup.cfg
    │   └── setup.py
    ├── glove.6B.100d.txt
    ├── md5sum.txt
    └── read_docred_style.py
    
  • Install the spaCy model (en_core_web_sm-2.3.1)

    cd en_core_web_sm-2.3.1
    pip install .
  • Download the original data from DWIE

  • Generate docred-style data

    python3 read_docred_style.py

    The docred-style doc-RE data will be generated at DWIE/data/docred-style. Compare the MD5 checksums of the generated files with the records in md5sum.txt to make sure the data was generated correctly; a small verification script is sketched after this list.
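
A minimal checksum-verification sketch in Python (an assumption: md5sum.txt lists `<hash>  <filename>` pairs as produced by the md5sum utility, with paths relative to the DWIE directory; adjust ROOT if your layout differs):

    import hashlib
    from pathlib import Path

    ROOT = Path("DWIE")  # assumed location of md5sum.txt and the generated data

    def md5_of(path: Path) -> str:
        """Compute the MD5 hex digest of a file, reading in 1 MiB chunks."""
        h = hashlib.md5()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    for line in (ROOT / "md5sum.txt").read_text().splitlines():
        expected, name = line.split()
        status = "OK" if md5_of(ROOT / name) == expected else "MISMATCH"
        print(f"{name}: {status}")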

Train & Eval

Requirements

  • pytorch >= 1.7.1
  • tqdm >= 4.62.3
  • transformers >= 4.4.2

Backbone Preparation

The LogiRE framework requires a backbone NN model that provides an initial probabilistic assessment of each triple.

The probabilistic assessments of the backbone model and other related metadata should be organized in the following format. In other words, first train any doc-RE model on the docred-style RE data, then dump its outputs as shown below.

{
    'train': [
        {
            'N': <int>,
            'logits': <torch.FloatTensor of size (N, N, R)>,
            'labels': <torch.BoolTensor of size (N, N, R)>,
            'in_train': <torch.BoolTensor of size (N, N, R)>,
        },
        ...
    ],
    'dev': [
        ...
    ],
    'test': [
        ...
    ]
}

Each example contains four items:

  • N: the number of entities in this example.
  • logits: the logits of all triples as a tensor of size (N, N, R). R is the number of relation types (Na excluded).
  • labels: the labels of all triples as a tensor of size (N, N, R).
  • in_train: the in_train masks of all triples as a tensor of size (N, N, R), used for the Ign F1 evaluation. True indicates that the triple appears in the training split.

For convenience, we provide the dump of ATLOP as an example. Feel free to download and try it directly.
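
As a rough sketch of how such a dump could be assembled (the fake_doc helper is a hypothetical stand-in for real backbone outputs, and serializing with torch.save is an assumption, so check the repo's loading code for the exact format):

    import torch

    R = 4  # hypothetical number of relation types (Na excluded)

    def make_example(logits, labels, in_train):
        """Package one document's backbone outputs in the format shown above."""
        N = logits.size(0)  # number of entities in the document
        assert logits.shape == labels.shape == in_train.shape  # all (N, N, R)
        return {
            'N': N,
            'logits': logits.float(),
            'labels': labels.bool(),
            'in_train': in_train.bool(),
        }

    def fake_doc(n_entities=3):
        """Hypothetical stand-in for one document's real backbone outputs."""
        logits = torch.randn(n_entities, n_entities, R)
        labels = torch.zeros(n_entities, n_entities, R, dtype=torch.bool)
        in_train = torch.zeros(n_entities, n_entities, R, dtype=torch.bool)
        return logits, labels, in_train

    dump = {split: [make_example(*fake_doc())] for split in ('train', 'dev', 'test')}
    torch.save(dump, 'backbone_dump.pt')  # pass this path as --backbone_path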

Train

python3 main.py --mode train \
    --save_dir <the directory for saving logs and checkpoints> \
    --rel_num <the number of relation types (Na excluded)> \
    --ent_num <the number of entity types> \
    --n_iters <the number of iterations for optimization> \
    --max_depth <max depths of the logic rules> \
    --data_dir <the directory of the docred-style data> \
    --backbone_path <the path of the backbone model dump>
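
For example, a hypothetical invocation (every value below is a placeholder; substitute the numbers and paths that match your dataset and backbone dump):

    python3 main.py --mode train \
        --save_dir ./save/dwie \
        --rel_num 65 \
        --ent_num 10 \
        --n_iters 10 \
        --max_depth 3 \
        --data_dir ./DWIE/data/docred-style \
        --backbone_path ./backbone_dump.pt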

Evaluation

python3 main.py --mode test \
    --save_dir <the directory for saving logs and checkpoints> \
    --rel_num <the number of relation types (Na excluded)> \
    --ent_num <the number of entity types> \
    --n_iters <the number of iterations for optimization> \
    --max_depth <max depths of the logic rules> \
    --data_dir <the directory of the docred-style data> \
    --backbone_path <the path of the backbone model dump>

Results

  • The LogiRE framework outperforms strong baselines in both relation extraction performance and logical consistency.

    [Figure: main results compared with strong baselines]
  • Injecting logic rules improves the modeling of long-range dependencies. We report relation extraction performance over intervals of entity-pair distance: LogiRE outperforms the baseline, and the gap widens as the entity-pair distance grows. Logic rules effectively serve as shortcuts for capturing long-range semantics at the concept level rather than the token level.

    [Figure: performance across entity-pair distance intervals]

Acknowledgements

We sincerely thank the authors of RNNLogic, which largely inspired this work, and of DWIE & DocRED for providing the benchmarks.

Reference

@inproceedings{ru-etal-2021-learning,
    title = "Learning Logic Rules for Document-Level Relation Extraction",
    author = "Ru, Dongyu  and
      Sun, Changzhi  and
      Feng, Jiangtao  and
      Qiu, Lin  and
      Zhou, Hao  and
      Zhang, Weinan  and
      Yu, Yong  and
      Li, Lei",
    booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
    month = nov,
    year = "2021",
    address = "Online and Punta Cana, Dominican Republic",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.emnlp-main.95",
    pages = "1239--1250",
}