KGEval

A framework for evaluating Knowledge Graph Embedding Models in a fine-grained manner.

The framework and experimental results are described in Ben Rim et al. 2021 (Outstanding Paper Award, AKBC 2021).

Instructions

Create a virtual environment

virtualenv -p python3.6 eval_env
source eval_env/bin/activate
pip install -r requirements.txt

Download data

In the main folder, run:

source data/download.sh

Download model

If you want to test the framework immediately, you can download pre-trained Pykeen models by running:

source download_models.sh
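
If the downloaded models are standard Pykeen pipeline outputs, they can be inspected directly from Python with torch.load. Below is a minimal sketch, assuming a pickled model at models/rotate/trained_model.pkl (both the layout and the path are assumptions; check where download_models.sh actually stores the files):

# Minimal sketch: load a downloaded Pykeen model and print its architecture.
# The path below is an assumption; adjust it to wherever download_models.sh stores the files.
import torch

model = torch.load("models/rotate/trained_model.pkl", map_location="cpu")
model.eval()
print(model)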

Generate behavioral tests

Symmetry Tests

The --dataset option can be set to FB15K237, WN18RR, or YAGO310:

python tests/run.py --dataset FB15K237 --mode generate --capability symmetry

This should result in the following output, and the files for each test set will be added under behavioral_tests/dataset/symmetry:

2021-10-03 23:37:35,060 - [INFO] - Preparing test sets for the dataset FB15K237
2021-10-03 23:37:37,621 - [INFO] - ########################## <----TRAIN---> ############################
2021-10-03 23:37:37,621 - [INFO] - 0 repetitions removed
2021-10-03 23:37:37,621 - [INFO] - 272115 triples remaining in train set
2021-10-03 23:37:37,621 - [INFO] - 6778 symmetric triples found in train set
2021-10-03 23:37:37,786 - [INFO] - ########################## <----TEST---> ############################
2021-10-03 23:37:37,786 - [INFO] - 0 repetitions removed
2021-10-03 23:37:37,786 - [INFO] - 20466 triples remaining in test set
2021-10-03 23:37:37,786 - [INFO] - 113 symmetric triples found in test set
2021-10-03 23:37:37,806 - [INFO] - ########################## <----VALID---> ############################
2021-10-03 23:37:37,806 - [INFO] - 0 repetitions removed
2021-10-03 23:37:37,806 - [INFO] - 17535 triples remaining in valid set
2021-10-03 23:37:37,806 - [INFO] - 113 symmetric triples found in valid set
2021-10-03 23:37:39,106 - [INFO] - #################### <---TEST SET 1: MEMORIZATION ---> ##########################
2021-10-03 23:37:39,106 - [INFO] - There are 5470 entries in the memorization set (occur in both directions)
2021-10-03 23:37:39,106 - [INFO] - #################### <---TEST SET 2: ONE DIRECTION SEEN ---> ##########################
2021-10-03 23:37:39,106 - [INFO] - There are 1308 entries not shown in both directions (to be reversed for testing)
2021-10-03 23:37:39,836 - [INFO] - #################### <--- SYMMETRIC RELATIONS ---> ##########################
2021-10-03 23:37:39,836 - [INFO] - TRAIN SET contains 6778 symmetric entries
2021-10-03 23:37:39,836 - [INFO] - TEST SET contains  113 symmetric entries with 113 not in training
2021-10-03 23:37:39,836 - [INFO] - VALID SET contains 113 symmetric entries with 113 not in training
2021-10-03 23:37:39,839 - [INFO] - #################### <---TEST SET 3: UNSEEN INSTANCES ---> ##########################
2021-10-03 23:37:39,840 - [INFO] - There are 226 entries that are not seen in any direction in training
2021-10-03 23:37:40,267 - [INFO] - #################### <---TEST SET 4: ASYMMETRY ---> ##########################
2021-10-03 23:37:40,267 - [INFO] - There are 3000 asymmetric entries in test set added to test 4
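
For intuition, a triple (h, r, t) counts as symmetric when its reverse (t, r, h) also occurs in the same split; the memorization, one-direction and unseen test sets above are built from how such pairs are distributed across train and test. The following is a minimal illustrative sketch of that check, not the framework's own code, and it assumes the splits are tab-separated head/relation/tail files (the path is an assumption):

# Minimal sketch (not the framework's code): count symmetric triples in one split.
# Assumes a tab-separated file with lines "head<TAB>relation<TAB>tail"; the path is an assumption.
def load_triples(path):
    with open(path, encoding="utf-8") as f:
        return {tuple(line.rstrip("\n").split("\t")) for line in f if line.strip()}

train = load_triples("data/FB15K237/train.txt")
symmetric = {(h, r, t) for (h, r, t) in train if (t, r, h) in train and h != t}
print(len(symmetric), "symmetric triples found in train set")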

Hierarchy Tests

Only available for the FB15K237 dataset.

python tests/run.py --dataset FB15K237 --mode generate --capability hierarchy

The expected output is shown below, and the files will be available under behavioral_tests/dataset/hierarchy/. Each file is named after a level of the entity type hierarchy and contains the triples whose tail entity has a type at that level; for example, 1.txt contains triples where the tail has a type of level 1:

2021-10-04 01:38:13,517 - [INFO] - Results of Hierarchy Behavioral Tests for FB15K237
2021-10-04 01:38:20,367 - [INFO] - <--------------- Entity Hiararchy statistics ----------------->
2021-10-04 01:38:20,568 - [INFO] - Level 0 contains 1 types and 3415 triples
2021-10-04 01:38:20,887 - [INFO] - Level 1 contains 66 types and 2006 triples
2021-10-04 01:38:20,900 - [INFO] - Level 2 contains 136 types and 4273 triples
2021-10-04 01:38:20,913 - [INFO] - Level 3 contains 213 types and 3560 triples
2021-10-04 01:38:20,923 - [INFO] - Level 4 contains 262 types and 3369 triples
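
The generated level files can be inspected directly. A short sketch, assuming the files land in behavioral_tests/FB15K237/hierarchy/ as suggested by the description above (the directory name is an assumption):

# Minimal sketch: count triples per hierarchy level file (0.txt, 1.txt, ...).
# The directory below is an assumption based on the description above.
import os

test_dir = "behavioral_tests/FB15K237/hierarchy"
for name in sorted(os.listdir(test_dir)):
    with open(os.path.join(test_dir, name), encoding="utf-8") as f:
        count = sum(1 for line in f if line.strip())
    print(f"{name}: {count} triples")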

Run Tests (pykeen models)

Symmetry behavioral tests on distmult or rotate:

python tests/run.py --dataset FB15K237 --mode test --model_name rotate

The output will be printed as shown below, and will also be available in the results folder under dataset/symmetry:

2021-10-04 14:00:57,100 - [INFO] - Starting test1 with rotate model
2021-10-04 14:03:23,249 - [INFO] - On test1, MR: 1.2407678244972578, MRR: 0.9400152688974949, hits@1: 0.9014624953269958, hits@3: 0.988482654094696, hits@10: 0.9965264797210693
2021-10-04 14:03:23,249 - [INFO] - Starting test2 with rotate model
2021-10-04 14:04:15,614 - [INFO] - On test2, MR: 23.446483180428135, MRR: 0.4409348919640765, hits@1: 0.30351680517196655, hits@3: 0.5894495248794556, hits@10: 0.7025994062423706
2021-10-04 14:04:15,614 - [INFO] - Starting test3 with rotate model
2021-10-04 14:04:25,364 - [INFO] - On test3, MR: 1018.9469026548672, MRR: 0.04786047740344238, hits@1: 0.008849557489156723, hits@3: 0.06194690242409706, hits@10: 0.12389380484819412
2021-10-04 14:04:25,365 - [INFO] - Starting test4 with rotate model
2021-10-04 14:05:38,900 - [INFO] - On test4, MR: 4901.459, MRR: 0.07606098649786266, hits@1: 0.9496666789054871, hits@3: 0.893666684627533, hits@10: 0.8823333382606506
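
For reference, in standard link-prediction evaluation MR is the mean rank assigned to the correct completion, MRR is the mean reciprocal rank, and hits@k is the fraction of test triples ranked at or below k; how each behavioral test set uses these numbers is described in the paper cited above. A minimal illustrative sketch, not the framework's evaluation code:

# Minimal sketch: compute MR, MRR and hits@k from a list of ranks (1 = best).
def summarize(ranks, ks=(1, 3, 10)):
    n = len(ranks)
    mr = sum(ranks) / n
    mrr = sum(1.0 / r for r in ranks) / n
    hits = {k: sum(r <= k for r in ranks) / n for k in ks}
    return mr, mrr, hits

mr, mrr, hits = summarize([1, 2, 1, 5, 12])
print(f"MR: {mr:.2f}, MRR: {mrr:.3f}, hits: {hits}")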

Hierarchy behavioral tests on distmult or rotate:

python tests/run.py --dataset FB15K237 --mode test --capability hierarchy --model_name rotate

Run Tests on other models and other frameworks

(To be added)

Owner
NEC Laboratories Europe
Research software developed at NEC Laboratories Europe