Repository for XLM-T, a framework for evaluating multilingual language models on Twitter data

Overview

This is the XLM-T repository, which includes data, code and pre-trained multilingual language models for Twitter.

XLM-T - A Multilingual Language Model Toolkit for Twitter

As explained in the reference paper, we start from XLM-R base (XLM-Roberta) and continue pre-training on a large corpus of Twitter data in multiple languages. This masked language model, which we named twitter-xlm-roberta-base in the 🤗 Huggingface hub, can be downloaded from here.
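
For instance, the released checkpoint can be loaded with the standard transformers API; a minimal sketch (the model identifier is the one used throughout this repository):

from transformers import AutoTokenizer, AutoModelForMaskedLM

# Download the Twitter-adapted XLM-R checkpoint from the Huggingface hub
tokenizer = AutoTokenizer.from_pretrained("cardiffnlp/twitter-xlm-roberta-base")
model = AutoModelForMaskedLM.from_pretrained("cardiffnlp/twitter-xlm-roberta-base")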

Note: this Twitter-specific LM was pretrained following a similar strategy to its English-only counterpart, which was introduced as part of the TweetEval framework and is available here.

We also provide task-specific models based on the Adapter technique, fine-tuned for cross-lingual sentiment analysis (see Section 2).

1 - Code

We include code with various functionalities to complement this release. In this notebook we provide examples for, among others, feature extraction and adapter-based inference with language models, as well as for training and evaluating language models on multiple tweet classification tasks, compatible with UMSAB (see Section 2) and TweetEval datasets.
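
As a quick illustration of feature extraction outside the notebook, the sketch below embeds a tweet with the base model. Taking the first (<s>) token as the sentence embedding is one simple choice for illustration, not necessarily the pooling strategy used in the notebook:

import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("cardiffnlp/twitter-xlm-roberta-base")
model = AutoModel.from_pretrained("cardiffnlp/twitter-xlm-roberta-base")

inputs = tokenizer("Huggingface es lo mejor! Awesome library 🤗😎", return_tensors="pt")
with torch.no_grad():
    output = model(**inputs)
embedding = output.last_hidden_state[0, 0]  # <s> token vector, shape (768,)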

Perform inference with Huggingface's pipelines

Using Huggingface's pipelines, obtaining predictions is as easy as:

from transformers import pipeline
model_path = "cardiffnlp/twitter-xlm-roberta-base-sentiment"
sentiment_task = pipeline("sentiment-analysis", model=model_path, tokenizer=model_path)
sentiment_task("Huggingface es lo mejor! Awesome library 🤗😎")
[{'label': 'Positive', 'score': 0.9343640804290771}]
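
The pipeline returns the top label by default. Depending on your transformers version, you can also request the scores for all three classes (e.g. via the return_all_scores option of the text-classification pipeline in older releases), which is convenient when you need the full distribution rather than a single prediction.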

Fine-tune xlm-t with adapters

You can fine-tune an adapter built on top of your language model of choice by running the src/adapter_finetuning.py script, for example:

python3 src/adapter_finetuning.py --language spanish --model cardiffnlp/twitter-xlm-roberta-base --seed 1 --lr 0.0001 --max_epochs 20
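
For reference, the sketch below shows what an adapter setup typically looks like with the adapter-transformers library (AdapterHub). This is an assumption-laden illustration, not the exact contents of src/adapter_finetuning.py, and the APIs differ across library versions:

# Assumes the adapter-transformers fork of transformers is installed
from transformers import AutoModelWithHeads

model = AutoModelWithHeads.from_pretrained("cardiffnlp/twitter-xlm-roberta-base")
model.add_adapter("sentiment")                            # add a new bottleneck adapter
model.add_classification_head("sentiment", num_labels=3)  # positive / neutral / negative
model.train_adapter("sentiment")                          # freeze the base model, train only the adapter
# ... then train as usual, e.g. with a transformers Trainer on the target-language data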

Notebooks

For quick prototyping, you can directly use the Colab notebooks we provide below:

Notebook Description Colab Link
01: Playground examples Minimal start examples Open In Colab
02: Extract embeddings Extract embeddings from tweets Open In Colab
03: Sentiment prediction Predict sentiment Open In Colab
04: Fine-tuning Fine-tune a model on custom data Open In Colab

2 - UMSAB, the Unified Multilingual Sentiment Analysis Benchmark

As part of our framework, we also release a unified benchmark for cross-lingual sentiment analysis in eight different languages. All datasets are framed as tweet classification with three labels (positive, negative and neutral). The languages included in the benchmark, as well as the datasets they are based on, are: Arabic (SemEval-2017, Rosenthal et al. 2017), English (SemEval-2017, Rosenthal et al. 2017), French (Deft-2017, Benamara et al. 2017), German (SB-10K, Cieliebak et al. 2017), Hindi (SAIL 2015, Patra et al. 2015), Italian (Sentipolc-2016, Barbieri et al. 2016), Portuguese (SentiBR, Brum and Nunes, 2017) and Spanish (Intertass 2017, Díaz Galiano et al. 2018). The format for each dataset follows that of TweetEval, with one tweet per line and one label per line.
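
For illustration, a split can be loaded in a few lines. The directory and file names below follow TweetEval conventions and are assumptions for this sketch, not guaranteed paths:

# Illustrative sketch assuming a TweetEval-style layout (test_text.txt / test_labels.txt)
with open("data/sentiment/spanish/test_text.txt", encoding="utf-8") as f:
    tweets = f.read().splitlines()
with open("data/sentiment/spanish/test_labels.txt", encoding="utf-8") as f:
    labels = f.read().splitlines()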

UMSAB Results / Leaderboard

The following results (Macro-F1) correspond to XLM-R (Conneau et al. 2020) and XLM-Tw, the same model further pretrained on Twitter as explained in the reference paper. The two settings are monolingual (trained and tested on the same language) and multilingual (considering all languages for training). Check the reference paper for more details on the settings and the metrics.

Language FT Mono XLM-R Mono XLM-Tw Mono XLM-R Multi XLM-Tw Multi
Arabic 46.0 63.6 67.7 64.3 66.9
English 50.9 68.2 66.9 68.5 70.6
French 54.8 72.0 68.2 70.5 71.2
German 59.6 73.6 76.1 72.8 77.3
Hindi 37.1 36.6 40.3 53.4 56.4
Italian 54.7 71.5 70.9 68.6 69.1
Portuguese 55.1 67.1 76.0 69.8 75.4
Spanish 50.1 65.9 68.5 66.0 67.9
All lang. 51.0 64.8 66.8 66.8 69.4

If you would like your results added to the leaderboard, you can either submit a pull request or send an email to any of the paper authors with your results and your model's predictions. Please also include a reference to a paper describing your approach.

Evaluating your system

To evaluate your system according to Macro-F1, you simply need one prediction file per language. The format of the prediction files should match the output examples in the predictions folder (one output label per line, in the order of the original test file), and the files should be named language.txt (e.g. arabic.txt, or all.txt if evaluating all languages at once). The predictions included as an example in this repo correspond to xlm-t trained and evaluated on all languages (All lang.).

Example usage

python src/evaluation_script.py

By default, the script takes as input the gold test labels and the predictions from the "predictions" folder, but both paths can be changed via optional arguments.

Optional arguments

Three optional arguments can be modified:

--gold_path: Path to gold datasets. Default: ./data/sentiment

--predictions_path: Path to predictions directory. Default: ./predictions/sentiment

--language: Language to evaluate (arabic, english ... or all). Default: all

Sample usage from the terminal with all optional arguments:

python src/evaluation_script.py --gold_path ./data/sentiment --predictions_path ./predictions/sentiment --language arabic

(this command outputs the results for the Arabic dataset only)
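
Under the hood, the reported metric is macro-averaged F1 over the three classes. A minimal equivalent check with scikit-learn, assuming TweetEval-style gold files (the test_labels.txt path is an assumption):

from sklearn.metrics import f1_score

gold = open("data/sentiment/arabic/test_labels.txt").read().splitlines()
pred = open("predictions/sentiment/arabic.txt").read().splitlines()
print(f1_score(gold, pred, average="macro"))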

Reference paper

If you use this repository in your research, please use the following bib entry to cite the reference paper.

@article{barbieri2021xlmtwitter,
  title={{XLM-T: A Multilingual Language Model Toolkit for Twitter}},
  author={Barbieri, Francesco and Espinosa-Anke, Luis and Camacho-Collados, Jose},
  journal={arXiv preprint arXiv:2104.12250},
  year={2021}
}

If using UMSAB, please also cite the corresponding dataset papers.

License

This repository is released open-source, but restrictions may apply to individual datasets (which are derived from existing data) or to Twitter (the main data source). We refer users to the original licenses accompanying each dataset and to Twitter regulations.
