TPLinker for NER: Chinese/English Named Entity Recognition

Overview

TPLinker-NER

If you like this project, please click the star button in the upper right corner. Thanks to everyone who stars it.

Introduction

This project borrows the HandshakingTagging idea from TPLinker and adapts TPLinker from its original relation extraction (RE) model into a named entity recognition (NER) model.

[Note] In fact, the base model used in this project is TPLinker_plus, because a strict implementation of the original TPLinker design is almost unusable for NER. The reasons are discussed in the Q&A section.

Compared with earlier NER approaches such as sequence labeling and the half-pointer half-tagging scheme, TPLinker-NER handles nested entities much more effectively. Since TPLinker has already achieved excellent results on RE, and TPLinker-NER is a sub-capability extracted from it, its performance should in theory not be bad either. With the limited compute available to me I could not experiment on large-scale corpora, so experiments were run only on the CLUENER dataset.

F1 on the CLUENER dev set

Best F1 on dev: 0.9111

Usage

Environment

The experiments were run with Python 3.6; the other main third-party libraries are:

  • pytorch==1.8.1
  • wandb==0.10.26  # for logging results
  • glove-python-binary==0.1.0
  • transformers==4.1.1
  • tqdm==4.54.1

NOTE:

  1. wandb is an excellent visualization library for machine learning. It is disabled by default in this project; to manage logs with wandb, adjust the relevant settings in tplinker_plus_ner/config.py.
  2. If you are on Windows and have not installed the Glove library, or if you only want to use BERT as the encoder, use train_only_bert.py as the main script.

Data Preparation

Format Requirements

TPLinker-NER expects datasets in the following format:

  • Training set train_data.json and validation set valid_data.json
[
    {
        "id":"",
        "text":"原始语句",
        "entity_list":[{"text":"实体","type":"实体类型","char_span":"实体char级别的span","token_span":"实体token级别的span"}]
    },
    ...
]
  • Test set test_data.json
[
    {
        "id":"",
        "text":"原始语句"
    },
    ...
]

Data Conversion

To convert a dataset from another format to the TPLinker-NER format, refer to the conversion logic in raw_data/convert_dataset.py.
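
As a reference point, here is a minimal sketch (not the actual convert_dataset.py) that maps CLUENER's raw format onto the format above, assuming CLUENER's one-JSON-object-per-line layout with inclusive end offsets and char-level tokenization:

import json

def convert_cluener_line(line, idx):
    # a CLUENER line looks like:
    # {"text": "...", "label": {"type": {"entity text": [[start, end]], ...}, ...}}
    # where end is inclusive; the spans written below are end-exclusive
    raw = json.loads(line)
    entity_list = []
    for ent_type, mentions in raw.get("label", {}).items():
        for ent_text, spans in mentions.items():
            for start, end in spans:
                entity_list.append({
                    "text": ent_text,
                    "type": ent_type,
                    "char_span": [start, end + 1],
                    # with char-level tokenization of Chinese text,
                    # token spans coincide with char spans
                    "token_span": [start, end + 1],
                })
    return {"id": str(idx), "text": raw["text"], "entity_list": entity_list}

For example, {"text": "小明在北京工作", "label": {"address": {"北京": [[3, 4]]}}} would become an entity_list entry for 北京 with char_span [3, 5].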

Data Location

Place the prepared data under data4bert/{exp_name} (or data4bilstm/{exp_name} for the BiLSTM encoder), where exp_name is the experiment name configured in tplinker_plus_ner/config.py.

Pretrained Model and Word Embeddings

Download the Chinese pretrained BERT model bert-base-chinese into pretrained_models/, and set bert_path accordingly in tplinker_plus_ner/config.py.

To use BiLSTM instead, prepare pretrained word embeddings under pretrained_emb/; see preprocess/Pretrain_Word_Embedding.ipynb for how to pretrain them.
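
As a quick sanity check that the BERT weights are in place, they can be loaded with the transformers API (the path is assumed to match bert_path in tplinker_plus_ner/config.py):

from transformers import BertModel, BertTokenizerFast

# assumes the downloaded files were unpacked to pretrained_models/bert-base-chinese
tokenizer = BertTokenizerFast.from_pretrained("pretrained_models/bert-base-chinese")
encoder = BertModel.from_pretrained("pretrained_models/bert-base-chinese")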

Train

Read through tplinker_plus_ner/config.py and adjust the configuration and hyperparameters to your needs.

Then start training:

cd tplinker_plus_ner
python train.py

Evaluation

Evaluation also requires the relevant parameters in tplinker_plus_ner/config.py. In particular, make sure the model_state_dict_dir value in eval_config matches the logging module you used.

Then run the evaluation:

cd tplinker_plus_ner
python evaluate.py

Q&A

The following answers reflect my own thinking while rewriting the project and are for reference only; corrections are welcome.

  1. Why is TPLinker unsuitable for direct use on NER, making TPLinker_plus necessary?

    My understanding: to answer this, one first has to look at the original TPLinker design. Besides HandShaking, the authors predefine three tag groups, ent, head_rel and tail_rel, each with its own sub-labels: ent: {"O":0, "ENT-H2T":1}, head_rel: {"O":0, "REL-SH2OH":1, "REL-OH2SH":2}, tail_rel: {"O":0, "REL-ST2OT":1, "REL-OT2ST":2}. During classification the three groups are handled independently. Take head_rel as an example: the ground-truth matrix y_true has shape (batch_size, rel_size, shaking_seq_len), where rel_size is the number of relation types, and the prediction y_pred has shape (batch_size, rel_size, shaking_seq_len, 3). A y_true of this form is already very sparse, even with three possible labels (0, 1, 2). Switching to NER makes the corresponding (batch_size, ent_size, shaking_seq_len) matrix sparser still, since only the labels 0 and 1 exist: an (ent_size, shaking_seq_len) slice may contain just one or two cells equal to 1. The model is then driven to predict all zeros everywhere and learning fails (my experiments confirmed this); see the sketch below.

    How does TPLinker itself get around this problem? The authors side-step it with a small trick: entity types are not distinguished at all, every entity is treated as one DEFAULT type, which compresses y_true to (batch_size, shaking_seq_len) and lowers the sparsity. Their explanation is: "Because it is not necessary to recognize the type of entities for the relation extraction task since a predefined relation usually has fixed types for its subject and object." In other words, entity type information matters little for relation extraction, because each relation already more or less predefines the types of its subject and object. For NER the entity types are exactly what we need to predict, so this trick is unavailable, and applying TPLinker directly to NER is therefore not appropriate.
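
    A back-of-the-envelope sketch of the sparsity described above, with made-up but typical numbers:

    # hypothetical sizes: 10 entity types, a 100-token sentence
    ent_size, seq_len = 10, 100
    shaking_seq_len = seq_len * (seq_len + 1) // 2    # 5050 token-pair cells
    total_cells = ent_size * shaking_seq_len          # 50500 labels per sentence
    positives = 5                                     # a sentence rarely holds more entities
    print(positives / total_cells)                    # ~1e-4, so "all zeros" is a tempting optimum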

    TPLinker_plus changes this design. Instead of treating ent, head_rel and tail_rel as three independent tasks, it combines every relation with every tagging label to form one large tag set, and represents all relations in a sentence with a single HandShaking matrix. For example, given the three relations (or entity types) 主演 (starring), 出生于 (born in) and 作者 (author), combining them with the tagging labels EH-ET, SH-OH, OH-SH, ST-OT and OT-ST yields 15 tags, which greatly enlarges the tag set; a sketch follows. Correspondingly, the label tensor of TPLinker_plus becomes (batch_size, shaking_seq_len, tag_size). This change raises the share of non-zero entries in the matrix and thus lowers its sparsity. (This is only part of the story; see Question 2 for the more important reason.)
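
    A sketch of how such a combined tag set could be enumerated (the tag naming is illustrative, not necessarily the project's):

    from itertools import product

    relations = ["主演", "出生于", "作者"]  # starring, born in, author
    link_labels = ["EH-ET", "SH-OH", "OH-SH", "ST-OT", "OT-ST"]

    # each (relation, link label) pair becomes one tag: 3 * 5 = 15 tags
    tags = [f"{rel}+{label}" for rel, label in product(relations, link_labels)]
    tag2id = {tag: i for i, tag in enumerate(tags)}
    print(len(tags))  # 15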

  2. What else does TPLinker_plus optimize?

    • A change of task formulation: as the end of Question 1 suggests, while TPLinker_plus enlarges the tag set it also turns the task from multi-class classification into multi-label classification, i.e. the shaking sequence built from a sentence may carry several labels at once, and their number is not fixed. For example:
    # suppose the sentence's seq_len = 10, so shaking_seq_len = 10 * 11 / 2 = 55
    # with 8 tag combinations, tag_size = 8
    [
        [0,0,1,0,1,0,1,0],
        [1,0,1,0,0,0,0,1],
        ...
        # the remaining 53 rows
    ]
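
    With this shape, a multi-label objective is applied independently to every (cell, tag) pair. A generic PyTorch stand-in (a sketch; not necessarily the project's exact loss):

    import torch
    import torch.nn as nn

    batch_size, shaking_seq_len, tag_size = 2, 55, 8
    logits = torch.randn(batch_size, shaking_seq_len, tag_size)  # model output
    y_true = torch.randint(0, 2, logits.shape).float()           # 0/1 multi-label targets

    # binary cross-entropy over each (cell, tag) pair treats the
    # tags as independent multi-label decisions
    loss = nn.BCEWithLogitsLoss()(logits, y_true)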
  3. How should several key terms in TPLinker-NER be understood?

    For a text containing n tokens:

    • shaking_matrix: an n*n matrix; shaking_matrix[i][j] = 1 means the span from the i-th token to the j-th token is an entity. (Only the upper triangle is actually used, because an entity's start position can never come after its end position.)
    • matrix_index: the coordinates of the upper-triangular matrix, (0,0),(0,1),(0,2)...(0,n-1),(1,1),(1,2)...(1,n-1)...(n-1,n-1)
    • shaking_index: the flat indices of the upper-triangular matrix, of length $\frac{n(n+1)}{2}$, i.e. [0,1,2,...,n(n+1)/2 - 1]
    • shaking_ind2matrix_ind: maps each flat index to its matrix coordinate, i.e. [(0,0),(0,1),...,(n-1,n-1)]
    • matrix_ind2shaking_ind: maps each coordinate back to its flat index, i.e.
      [[0, 1,   2,   ...,  n-1         ],
       [0, n,   n+1, ...,  2n-2        ],
       ...
       [0, 0,   0,   ...,  n(n+1)/2 - 1]]

    • spot: the start/end span of an entity together with its type id. For example, if the entity 北京 (Beijing) starts at position 7 and ends at position 9 in the matrix, with type LOC (id: 3), its spot is (7, 9, 3).
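
    A minimal sketch tying these terms together (illustrative, not the project's exact code):

    n = 4  # number of tokens

    # flat shaking index -> upper-triangular coordinate
    shaking_ind2matrix_ind = [(i, j) for i in range(n) for j in range(i, n)]

    # coordinate -> flat index; the unused lower triangle stays 0
    matrix_ind2shaking_ind = [[0] * n for _ in range(n)]
    for shaking_ind, (i, j) in enumerate(shaking_ind2matrix_ind):
        matrix_ind2shaking_ind[i][j] = shaking_ind

    # a spot (start, end, type_id) marks one cell of the shaking sequence;
    # token spans are end-exclusive, so the matrix column is end - 1
    start, end, type_id = 1, 3, 0
    cell = matrix_ind2shaking_ind[start][end - 1]
    print(cell)  # 5: the cell for an entity covering tokens 1..2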

Acknowledgments

This project builds on TPLinker; thanks to its author for the original implementation and the HandshakingTagging idea.
