Code Implementation of "Learning Span-Level Interactions for Aspect Sentiment Triplet Extraction".

Overview

Span-ASTE: Learning Span-Level Interactions for Aspect Sentiment Triplet Extraction

***** New March 31st, 2022: Scikit-Style API for Easy Usage *****


This repository implements our ACL 2021 research paper Learning Span-Level Interactions for Aspect Sentiment Triplet Extraction. Our goal is to extract sentiment triplets of the form (aspect target, opinion expression, sentiment polarity).

Installation

Data Format

Our span-based model uses data files where each line contains one input sentence and a list of output triplets:

sentence#### #### ####[triplet_0, ..., triplet_n]

Each triplet is a tuple (target_span, opinion_span, polarity). Each span is a list of word indices: if the span covers a single word, the list contains only that word's index; if the span covers multiple words, the list contains the indices of the first and last words. For example:

It also has lots of other Korean dishes that are affordable and just as yummy .#### #### ####[([6, 7], [10], 'POS'), ([6, 7], [14], 'POS')]

For prediction, the data can contain the input sentence only, with an empty list for triplets:

sentence#### #### ####[]
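
For reference, here is a minimal Python sketch (not part of this repository; parse_line is a hypothetical helper) of how such a line can be parsed:

import ast

def parse_line(line):
    # The first "####"-separated field is the sentence;
    # the last field is a Python literal list of triplets.
    parts = line.rstrip("\n").split("####")
    return parts[0], ast.literal_eval(parts[-1])

sentence, triplets = parse_line(
    "It also has lots of other Korean dishes that are affordable "
    "and just as yummy .#### #### ####[([6, 7], [10], 'POS'), ([6, 7], [14], 'POS')]"
)
words = sentence.split()
for target, opinion, polarity in triplets:
    # A span list holds either one index, or the first and last indices.
    print(words[target[0]:target[-1] + 1], words[opinion[0]:opinion[-1] + 1], polarity)
# ['Korean', 'dishes'] ['affordable'] POS
# ['Korean', 'dishes'] ['yummy'] POS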

Predict Using Model Weights

  • First, download and extract the pre-trained weights to pretrained_dir.
  • The input data file path_in and the output data file path_out use the data format described above.
from wrapper import SpanModel

model = SpanModel(save_dir=pretrained_dir, random_seed=0)
model.predict(path_in, path_out)
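
For example, a hedged end-to-end sketch (the sentence, file names, and weights directory name below are placeholders, not fixed by the repository):

from wrapper import SpanModel

# Write each input sentence in the data format, with an empty triplet list.
sentences = ["The pizza was great but the service was slow ."]
with open("pred_in.txt", "w") as f:
    for s in sentences:
        f.write(s + "#### #### ####[]\n")

# "pretrained_14lap" is an assumed name for the extracted weights directory.
model = SpanModel(save_dir="pretrained_14lap", random_seed=0)
model.predict(path_in="pred_in.txt", path_out="pred_out.txt")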

Model Training

  • Configure the model with the save directory and random seed.
  • Start training with the training and validation data, which use the same data format.
model = SpanModel(save_dir=save_dir, random_seed=random_seed)
model.fit(path_train, path_dev)
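
For example, a minimal training sketch (the dataset paths are assumptions based on the repository's data layout):

from wrapper import SpanModel

path_train = "aste/data/triplet_data/14lap/train.txt"  # assumed path
path_dev = "aste/data/triplet_data/14lap/dev.txt"      # assumed path

model = SpanModel(save_dir="outputs/14lap/seed_0", random_seed=0)
model.fit(path_train, path_dev)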

Model Evaluation

  • Use the trained model to predict triplets for the test sentences, writing the output to path_pred.
  • The model includes a scoring function that provides F1 metric scores for triplet extraction (see the sketch after the code below).
model.predict(path_in=path_test, path_out=path_pred)
results = model.score(path_pred, path_test)
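
Triplet F1 is typically computed by exact matching of predicted and gold triplets. As an illustration only (not the repository's scorer), such a score could be computed from the two files, reusing the parse_line sketch from the Data Format section:

def triplet_f1(path_pred, path_gold):
    # Exact match on (target span, opinion span, polarity), summed over all sentences.
    n_pred = n_gold = n_match = 0
    with open(path_pred) as f_pred, open(path_gold) as f_gold:
        for line_pred, line_gold in zip(f_pred, f_gold):
            pred = {repr(t) for t in parse_line(line_pred)[1]}
            gold = {repr(t) for t in parse_line(line_gold)[1]}
            n_pred += len(pred)
            n_gold += len(gold)
            n_match += len(pred & gold)
    precision = n_match / max(n_pred, 1)
    recall = n_match / max(n_gold, 1)
    return 2 * precision * recall / max(precision + recall, 1e-9)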

Research Citation

If the code is useful for your research project, we would appreciate it if you cite the following paper:

@inproceedings{xu-etal-2021-learning,
    title = "Learning Span-Level Interactions for Aspect Sentiment Triplet Extraction",
    author = "Xu, Lu  and
      Chia, Yew Ken  and
      Bing, Lidong",
    booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)",
    month = aug,
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.acl-long.367",
    doi = "10.18653/v1/2021.acl-long.367",
    pages = "4755--4766",
    abstract = "Aspect Sentiment Triplet Extraction (ASTE) is the most recent subtask of ABSA which outputs triplets of an aspect target, its associated sentiment, and the corresponding opinion term. Recent models perform the triplet extraction in an end-to-end manner but heavily rely on the interactions between each target word and opinion word. Thereby, they cannot perform well on targets and opinions which contain multiple words. Our proposed span-level approach explicitly considers the interaction between the whole spans of targets and opinions when predicting their sentiment relation. Thus, it can make predictions with the semantics of whole spans, ensuring better sentiment consistency. To ease the high computational cost caused by span enumeration, we propose a dual-channel span pruning strategy by incorporating supervision from the Aspect Term Extraction (ATE) and Opinion Term Extraction (OTE) tasks. This strategy not only improves computational efficiency but also distinguishes the opinion and target spans more properly. Our framework simultaneously achieves strong performance for the ASTE as well as ATE and OTE tasks. In particular, our analysis shows that our span-level approach achieves more significant improvements over the baselines on triplets with multi-word targets or opinions.",
}
Comments
  • Train model for new data collected from social media

    Hi, I would like to train this model on a new dataset in another language, Bahasa, since aspects and opinions in social media text often consist of spans of multiple words. How do I run the code for this?

    opened by Lafandi 7
  • Command error

    {'command': 'cd /home/data2/yj/Span-ASTE && allennlp train outputs/14lap/seed_0/config.jsonnet --serialization-dir outputs/14lap/seed_0/weights --include-package span_model'}

    /bin/sh: allennlp: command not found

    Which file do I need to change this in? I haven't been able to find it.

    opened by lzf00 6
  • Retrain with new language

    Hi, I have some questions (sorry if this is a beginner question, I am new to this field). I want to change the word embedder to a BERT model pretrained on my language (Indonesian, using IndoBERT). Can you give some tips on how to change the embedder to my language? Thanks!

    opened by rdyzakya 5
  • Using the notebook when there is no GPU

    Hello! Thank you for sharing this work! I was wondering how I can use the demo notebook locally when there is no GPU?

    When running the cell under "# Use pretrained SpanModel weights for prediction", I got this error:

    2022-07-06 12:28:07,840 - INFO - allennlp.common.plugins - Plugin allennlp_models available
    Traceback (most recent call last):
      File "/Users/xiaoqingwan/opt/miniconda3/envs/absa/bin/allennlp", line 8, in <module>
        sys.exit(run())
      File "/Users/xiaoqingwan/opt/miniconda3/envs/absa/lib/python3.7/site-packages/allennlp/__main__.py", line 34, in run
        main(prog="allennlp")
      File "/Users/xiaoqingwan/opt/miniconda3/envs/absa/lib/python3.7/site-packages/allennlp/commands/__init__.py", line 118, in main
        args.func(args)
      File "/Users/xiaoqingwan/opt/miniconda3/envs/absa/lib/python3.7/site-packages/allennlp/commands/predict.py", line 205, in _predict
        predictor = _get_predictor(args)
      File "/Users/xiaoqingwan/opt/miniconda3/envs/absa/lib/python3.7/site-packages/allennlp/commands/predict.py", line 105, in _get_predictor
        check_for_gpu(args.cuda_device)
      File "/Users/xiaoqingwan/opt/miniconda3/envs/absa/lib/python3.7/site-packages/allennlp/common/checks.py", line 131, in check_for_gpu
        " 'trainer.cuda_device=-1' in the json config file." + torch_gpu_error
    allennlp.common.checks.ConfigurationError: Experiment specified a GPU but none is available; if you want to run on CPU use the override 'trainer.cuda_device=-1' in the json config file. module 'torch.cuda' has no attribute '_check_driver'

    I changed cuda_device to -1 in the jsonnet files in your training_config folder, but still no luck.

    opened by xiaoqingwan 5
  • Suggestions to run it against other datasets

    Hi! I'm pretty new to deep learning and ASTE.

    Can you please suggest the necessary steps to run this against another dataset? Do I need to label my dataset following this data structure (https://github.com/xuuuluuu/SemEval-Triplet-data/blob/master/README.md#data-description)? How can I modify the code on Colab for new datasets? Any other advice?

    Thank you

    opened by Jurys22 4
  • Running problem

    Hello, I have a question I want to ask you. I use PyCharm to run your project, but I get an error in the main.py file: ModuleNotFoundError: No module named '_jsonnet'. I guess the main reason is the line import _jsonnet # noqa. Can you tell me a solution? Thank you very much.

    opened by FengLingCong13 4
  • Data format

    Excuse me, how do you label the data so that the input format is as follows:

    Exactly as posted plus a great value .####Exactly=O as=O posted=O plus=O a=O great=O value=T-POS .=O####Exactly=O as=O posted=O plus=O a=O great=S value=O .=O####[([6], [5], 'POS')]

    The specs are pretty good too .####The=O specs=T-POS are=O pretty=O good=O too=O .=O####The=O specs=O are=O pretty=O good=S too=O .=O####[([1], [4], 'POS')]

    opened by arroyoaaa 4
  • Interpretation of the results

    Hello, I was looking at the file in

    /content/Span-ASTE/model_outputs/aste_sample_c7b00b66bf7ec669d23b80879fda043d/predict_dev.jsonl

    I would like to know what the numbers in predicted_ner and predicted_relations refer to, such as:

    [[0, 0, 1, 1, 'NEG', 2.777, 0.971]]

    What are 2.777 and 0.971 referring to?

    Thank you

    opened by Jurys22 3
  • I installed the package according to the requirements. I wanted to use the pre-trained model to make predictions, but it failed to run.

    Two errors were reported:

    1. allennlp.common.checks.ConfigurationError: Extra parameters passed to SpanModel: {'relation_head_type': 'proper', 'use_bilstm_after_embedder': False, 'use_double_mix_embedder': False, 'use_ner_embeds': False}

    2. FileNotFoundError: [Errno 2] No such file or directory: 'X:\workspace\python\papercode\@aspect\Span-ASTE\pretrained_dir\temp_data\pred_out.json'

    Traceback (most recent call last):
      File "X:\workspace\python\papercode\@aspect\Span-ASTE\aste\test.py", line 4, in <module>
        model.predict('test.txt', "pred.txt")
      File "X:\workspace\python\papercode\@aspect\Span-ASTE\aste\wrapper.py", line 83, in predict
        with open(path_temp_out) as f:

    opened by SiriusXT 2
  • IndexError: list assignment index out of range

    I've annotated my own data and tried to train the model on it, but ran into the error below. The command runs successfully, but the model doesn't train on the annotated data; looking into the out.log files we see this error. The annotated data follows the correct format, as I'm able to preview it with the Data Exploration command. Any help would be appreciated please! :)


    opened by jasonhuynh83 2
  • No such file or directory: 'pretrained_14res/temp_data/pred_out.json'

    Installed it successfully on macOS but getting the error that pred_out.json is not found. Not sure why this works in Colab but not when I install it on my local machine. Can anyone help? I have downloaded the folder correctly and it contains all the required files. I have tried 14lap and 14res, but both have the same issue.

    opened by dipanmoy 2
  • python wrapper.py

    Hi, I'm puzzled: when running wrapper.py, the following output appears, which I can't understand:

    NAME
        wrapper.py

    SYNOPSIS
        wrapper.py GROUP | COMMAND

    GROUPS
        GROUP is one of the following:

     json
       JSON (JavaScript Object Notation) <http://json.org> is a subset of JavaScript syntax (ECMA-262 3rd edition) used as a lightweight data interchange format.
    
     os
       OS routines for NT or Posix depending on what system we're on.
    
     shutil
       Utility functions for copying and archiving files and directory trees.
    
     sys
       This module provides access to some objects used or maintained by the interpreter and to functions that interact strongly with the interpreter.
    
     List
       The central part of internal API.
    
     Tuple
       Tuple type; Tuple[X, Y] is the cross-product type of X and Y.
    
     Optional
       Internal indicator of special typing constructs. See _doc instance attribute for specific docs.
    

    COMMANDS
        COMMAND is one of the following:

     Namespace
       Simple object for storing attributes.
    
     Path
       PurePath subclass that can make system calls.
    
     train_model
       Trains the model specified in the given [`Params`](../common/params.md#params) object, using the data and training parameters also specified in that object, and saves the results in `serialization_dir`
    
    opened by xian-xian 2
  • ConfigurationError: key "dataset_reader" is required

    I was trying to replicate this on Azure Databricks. While trying to train the model, I am getting the error "ConfigurationError: key "dataset_reader" is required". For your reference:


    Can this solution be implemented in the Databricks environment? @chiayewken

    opened by tsharisaravanan 1
  • What does "Optional: Set up NLTK packages" mean? Could someone explain it?

    Optional: Set up NLTK packages

    if [[ -f punkt.zip ]]; then
      mkdir -p /home/admin/nltk_data/tokenizers
      cp punkt.zip /home/admin/nltk_data/tokenizers
    fi
    if [[ -f wordnet.zip ]]; then
      mkdir -p /home/admin/nltk_data/corpora
      cp wordnet.zip /home/admin/nltk_data/corpora
    fi

    I don't understand what this means. I'm a first-year master's student, please help!

    opened by xian-xian 5
  • An error for PosixPath

    Hi, I have some questions to ask you.

    The params_file is of string type, but the following error occurred:

    Traceback (most recent call last):
      File "/Span-ASTE-main/aste/wrapper.py", line 177, in <module>
        model.fit(path_train, path_dev)
      File "/Span-ASTE-main/aste/wrapper.py", line 54, in fit
        test_data_path=str(self.save_temp_data(path_dev, "dev")),
      File "/lib/python3.7/site-packages/allennlp/common/params.py", line 462, in from_file
        file_dict = json.loads(evaluate_file(params_file, ext_vars=ext_vars))
    TypeError: argument 1 must be str, not PosixPath

    By the way, which file should I run to start your code, main.py or wrapper.py?

    opened by Chen-PengF 1
  • Demo file not working: No module named 'data_utils'

    Hi,

    I tried to run the demo file, but it fails with the error "No module named 'data_utils'".

    opened by qi-xia 1
Owner
Chia Yew Ken
Hi! I'm a 2nd year PhD Student with SUTD and Alibaba. My research interests currently include zero-shot learning, structured prediction and sentiment analysis.