TPLinker for NER: Chinese/English Named Entity Recognition

Overview

TPLinker-NER

If you like this project, please click the star button in the upper-right corner. Thanks to everyone who stars it.

Project Introduction

This project borrows the HandshakingTagging idea from TPLinker and adapts TPLinker from its original relation extraction (RE) model into a named entity recognition (NER) model.

[Note] In fact, the base model used in this project is TPLinker_plus: if the original TPLinker design were followed strictly, it would be almost unusable for NER. The specific reasons are explained in the Q&A section.

Compared with earlier NER models based on sequence labeling or half-pointer half-tagging schemes, TPLinker-NER handles nested entities more effectively. Since TPLinker itself has already achieved excellent results in RE, TPLinker-NER, as a sub-capability extracted from it, should in theory not perform badly either. As my compute resources are limited, I could not experiment on large-scale corpora; experiments were only run on the CLUENER dataset.

F1 on the CLUENER dev set

Best F1 on dev: 0.9111

Usage

Environment

The experiments were run with Python 3.6; the main third-party packages include:

  • pytorch==1.8.1
  • wandb==0.10.26 # for logging results
  • glove-python-binary==0.1.0
  • transformers==4.1.1
  • tqdm==4.54.1

NOTE:

  1. wandb is an excellent visualization library for machine learning. It is disabled by default in this project; to manage logs with wandb, adjust the relevant settings in tplinker_plus_ner/config.py.
  2. If you are on Windows and have not installed the Glove library, or you only want to use BERT as the encoder, use train_only_bert.py as the main script.

Data Preparation

Format

TPLinker-NER expects datasets in the following format:

  • Training set train_data.json and validation set valid_data.json
[
    {
        "id":"",
        "text":"原始语句",
        "entity_list":[{"text":"实体","type":"实体类型","char_span":"实体char级别的span","token_span":"实体token级别的span"}]
    },
    ...
]
  • Test set test_data.json
[
    {
        "id":"",
        "text":"原始语句"
    },
    ...
]
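
For illustration, a concrete training entry might look like the following (my own example; the exact span convention, e.g. whether char_span is half-open, should match raw_data/convert_dataset.py):
[
    {
        "id": "0",
        "text": "我爱北京天安门",
        "entity_list": [{"text": "北京", "type": "LOC", "char_span": [2, 4], "token_span": [2, 4]}]
    }
]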

Data Conversion

To convert datasets of other formats to the TPLinker-NER format, refer to the conversion logic in raw_data/convert_dataset.py; a sketch follows below.
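
Below is a hedged sketch of such a conversion, assuming CLUENER-style input: one JSON object per line, whose label field maps entity type -> entity text -> a list of inclusive [start, end] char offsets. token_span is omitted here because it depends on the tokenizer; refer to the script above for the project's actual logic.

import json

def convert_line(line, idx):
    # Parse one CLUENER-style line and emit an entry in the format above.
    sample = json.loads(line)
    entity_list = []
    for ent_type, mentions in sample.get("label", {}).items():
        for ent_text, spans in mentions.items():
            for start, end in spans:
                entity_list.append({
                    "text": ent_text,
                    "type": ent_type,
                    # inclusive [start, end] -> half-open [start, end + 1) (assumption)
                    "char_span": [start, end + 1],
                })
    return {"id": str(idx), "text": sample["text"], "entity_list": entity_list}

with open("train.json", encoding="utf-8") as f:
    converted = [convert_line(line, i) for i, line in enumerate(f)]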

Data Location

Place the prepared data under data4bert/{exp_name} or data4bilstm/{exp_name}, where exp_name is the experiment name configured in tplinker_plus_ner/config.py.

Pretrained Models and Word Embeddings

Download the Chinese pretrained BERT model bert-base-chinese into pretrained_models/, and set bert_path correctly in tplinker_plus_ner/config.py.

If you want to use BiLSTM, prepare pretrained word embeddings in pretrained_emb/; see preprocess/Pretrain_Word_Embedding.ipynb for how to pretrain them.

Train

Read through tplinker_plus_ner/config.py and adjust the configuration and hyperparameters to your needs.

Then start training:

cd tplinker_plus_ner
python train.py

Evaluation

You still need to configure the evaluation-related parameters in tplinker_plus_ner/config.py. In particular, make sure the model_state_dict_dir value in eval_config is consistent with the logging module you used.

Then start evaluating:

cd tplinker_plus_ner
python evaluate.py

Q&A

The following answers reflect my own thoughts while adapting this project; they are for reference only, and corrections are welcome.

  1. Why can't TPLinker be used directly for NER, and why use TPLinker_plus instead?

    My understanding: to answer this we first need to look at the original TPLinker design. Besides HandShaking, the authors predefined three tag families, ent, head_rel, tail_rel, each with its own sub-tags: ent:{"O":0,"ENT-H2T":1}, head_rel:{"O":0, "REL-SH2OH":1, "REL-OH2SH":2}, tail_rel:{"O":0, "REL-ST2OT":1, "REL-OT2ST":2}. During classification, the three families are treated as independent tasks. Taking head_rel as an example, the ground-truth matrix y_true built from the raw data has shape (batch_size, rel_size, shaking_seq_len), where rel_size is the number of relation types, and the prediction y_pred has shape (batch_size, rel_size, shaking_seq_len, 3).

    Such a y_true matrix is already very sparse, even with the three labels 0, 1, 2. Switching to NER, the corresponding (batch_size, ent_size, shaking_seq_len) matrix would be even sparser (only labels 0 and 1): a single (ent_size, shaking_seq_len) matrix may contain only one or two 1s, which pushes the model to predict everything as 0 and fail to learn anything (experiments confirmed this).

    How does TPLinker itself avoid this problem? The authors sidestep it with a small trick: they do not distinguish entity types and treat every entity as a single DEFAULT type, compressing y_true to (batch_size, shaking_seq_len) and lowering the sparsity. Their justification is: "Because it is not necessary to recognize the type of entities for the relation extraction task since a predefined relation usually has fixed types for its subject and object." That is, entity type information matters little for relation extraction, because each relation largely predefines the types of its subject and object. In short, applying TPLinker directly to NER is not appropriate.
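
    To make the sparsity argument concrete, a quick back-of-the-envelope calculation (the numbers are hypothetical, chosen only to match the scenario above):

    seq_len = 50
    shaking_seq_len = seq_len * (seq_len + 1) // 2  # 1275 upper-triangle cells
    ent_size = 10                                   # number of entity types
    total_cells = ent_size * shaking_seq_len        # 12750 binary labels
    positives = 2                                   # one or two entity spans per sentence
    print(positives / total_cells)                  # ~0.00016, i.e. ~0.016% of cells are 1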

    TPLinker_plus changes this approach: it no longer treats ent, head_rel, and tail_rel as three independent tasks. Instead, it combines every relation with the tagging labels to form one large tag set, and uses a single HandShaking matrix to represent all relations in a sentence. For example, given three relations (or entity types), 主演 (stars in), 出生于 (born in), and 作者 (author), combining them with the link labels EH-ET, SH-OH, OH-SH, ST-OT, OT-ST produces 15 tags, which greatly enlarges the tag set. The label tensor of TPLinker_plus accordingly becomes (batch_size, shaking_seq_len, tag_size). This change increases the relative number of non-zero values in the matrix and reduces its sparsity. (This is only part of the reason; for the more important one, see Question 2.)
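
    As a small illustration of this combination (the tag strings and separator below are my own, not necessarily the project's exact format):

    # Combine each relation with each link label into one flat tag set.
    relations = ["主演", "出生于", "作者"]
    link_labels = ["EH-ET", "SH-OH", "OH-SH", "ST-OT", "OT-ST"]
    tags = [f"{rel}~{label}" for rel in relations for label in link_labels]
    print(len(tags))  # 15
    tag2id = {tag: idx for idx, tag in enumerate(tags)}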

  2. What other optimizations does TPLinker_plus make?

    • A shift in task formulation: as the end of Question 1 suggests, by enlarging the tag set TPLinker_plus also turns the task from multi-class classification into multi-label classification: the shaking_seq built from a sentence can carry multiple tags, and the number of tags per position is not fixed. For example:
    # Suppose a sentence with seq_len = 10, so shaking_seq = 55
    # There are 8 tag combinations, i.e. tag_size = 8
    [
        [0,0,1,0,1,0,1,0],
        [1,0,1,0,0,0,0,1],
        ...
        # the remaining 53 rows
    ]
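    A minimal sketch of what this multi-label formulation means for training, assuming a plain binary cross-entropy loss (TPLinker_plus uses its own multi-label loss, so this is illustrative only):

    import torch
    import torch.nn as nn

    batch_size, shaking_seq_len, tag_size = 2, 55, 8
    logits = torch.randn(batch_size, shaking_seq_len, tag_size)  # model scores
    y_true = torch.randint(0, 2, (batch_size, shaking_seq_len, tag_size)).float()

    # Every (position, tag) pair is an independent 0/1 decision, so one position
    # may carry zero, one, or several tags at once.
    loss = nn.BCEWithLogitsLoss()(logits, y_true)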
  3. How should several key terms in TPLinker-NER be understood?

    For a text containing n tokens:

    • shaking_matrix: an n*n matrix, where shaking_matrix[i][j]=1 means the span from the i-th token to the j-th token is an entity. (Only the upper triangle is actually used, because an entity's start position never comes after its end position.)
    • matrix_index: the coordinates of the upper-triangular matrix, i.e. (0,0),(0,1),(0,2)...(0,n-1),(1,1),(1,2)...(1,n-1)...(n-1,n-1)
    • shaking_index: the flattened indices of the upper-triangular matrix, of length $\frac{n(n+1)}{2}$, i.e. [0,1,2,...,n(n+1)/2 - 1]
    • shaking_ind2matrix_ind: maps a flattened index to matrix coordinates, i.e. [(0,0),(0,1),...,(n-1,n-1)]
    • matrix_ind2shaking_ind: maps matrix coordinates back to flattened indices (see the sketch after this list), i.e.
      [[0,  1,    2,   ...,  n-1 ],
       [0,  n,    n+1, ...,  2n-2],
       ...
       [0,  0,    0,   ...,  n(n+1)/2 - 1]]
      
    • spot: the start/end span of an entity together with its type id. For example, if the entity "北京" starts at position 7 and ends at position 9 in the matrix, and its type "LOC" has id 3, its spot is (7, 9, 3).
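
    A short sketch of these index mappings (it mirrors the definitions above; not copied from the project's source):

    # Build the flattened-index <-> matrix-coordinate mappings for n tokens.
    n = 10
    shaking_ind2matrix_ind = [(i, j) for i in range(n) for j in range(i, n)]
    matrix_ind2shaking_ind = [[0] * n for _ in range(n)]
    for shaking_ind, (i, j) in enumerate(shaking_ind2matrix_ind):
        matrix_ind2shaking_ind[i][j] = shaking_ind

    # The spot (7, 9, 3) from the example above lands at flattened position
    # matrix_ind2shaking_ind[7][9], where tag id 3 is set.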

Acknowledgements
