Tracking Progress in Natural Language Processing

Overview

Table of contents

English

Vietnamese

Hindi

Chinese

For more tasks, datasets and results in Chinese, check out the Chinese NLP website.

French

Russian

Spanish

Portuguese

Korean

Nepali

Bengali

Persian

Turkish

German

This document aims to track the progress in Natural Language Processing (NLP) and give an overview of the state-of-the-art (SOTA) across the most common NLP tasks and their corresponding datasets.

It aims to cover both traditional and core NLP tasks such as dependency parsing and part-of-speech tagging as well as more recent ones such as reading comprehension and natural language inference. The main objective is to provide the reader with a quick overview of benchmark datasets and the state-of-the-art for their task of interest, which serves as a stepping stone for further research. To this end, if there is a place where results for a task are already published and regularly maintained, such as a public leaderboard, the reader will be pointed there.

If you want to find this document again in the future, just go to nlpprogress.com or nlpsota.com in your browser.

Contributing

Guidelines

Results   Results reported in published papers are preferred; an exception may be made for influential preprints.

Datasets   Datasets should have been used for evaluation in at least one published paper besides the one that introduced the dataset.

Code   We recommend adding a link to an implementation if one is available. You can add a Code column (see below) to the table if it does not exist yet. In the Code column, indicate an official implementation with Official. If an unofficial implementation is available, use Link (see below). If no implementation is available, you can leave the cell empty.

Adding a new result

If you would like to add a new result, you can just click on the small edit button in the top-right corner of the file for the respective task (see below).

Click on the edit button to add a file

This allows you to edit the file in Markdown. Simply add a row to the corresponding table in the same format. Make sure that the table stays sorted (with the best result on top). After you've made your change, make sure that the table still looks ok by clicking on the "Preview changes" tab at the top of the page. If everything looks good, go to the bottom of the page, where you see the below form.
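Keeping a results table sorted by hand is easy to get wrong. As an illustrative sketch only (not part of the repository's tooling), a few lines of Python can re-sort the data rows of a results table so the best score stays on top; the example table and model names here are made up:

```python
def sort_results_table(markdown: str) -> str:
    """Sort the data rows of a Markdown results table so the best
    (highest) score comes first. Assumes the score is in column 2."""
    lines = markdown.strip().split("\n")
    header, separator, rows = lines[0], lines[1], lines[2:]

    def score(row: str) -> float:
        # Cells: | Model | Score | Paper / Source | Code |
        cells = [c.strip() for c in row.strip("|").split("|")]
        return float(cells[1])

    rows.sort(key=score, reverse=True)
    return "\n".join([header, separator] + rows)

table = """| Model | Score | Paper / Source | Code |
| ------------- | :-----: | --- | --- |
| Model A | 91.2 | Paper A | Official |
| Model B | 93.5 | Paper B | Link |"""

print(sort_results_table(table))
```

Running this prints the table with Model B's row first, since its score is higher.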

Fill out the file change information

Add a name for your proposed change, an optional description, indicate that you would like to "Create a new branch for this commit and start a pull request", and click on "Propose file change".

Adding a new dataset or task

For adding a new dataset or task, you can also follow the steps above. Alternatively, you can fork the repository. In both cases, follow the steps below:

  1. If your task is completely new, create a new file and link to it in the table of contents above.
  2. If not, add your task or dataset to the respective section of the corresponding file (in alphabetical order).
  3. Briefly describe the dataset/task and include relevant references.
  4. Describe the evaluation setting and evaluation metric.
  5. Show what an annotated example of the dataset/task looks like.
  6. Add a download link if available.
  7. Copy the below table and fill in at least two results (including the state-of-the-art) for your dataset/task (change Score to the metric of your dataset). If your dataset/task has multiple metrics, add them to the right of Score.
  8. Submit your change as a pull request.
| Model | Score | Paper / Source | Code |
| ------------- | :-----: | --- | --- |

Wish list

These are tasks and datasets that are still missing:

  • Bilingual dictionary induction
  • Discourse parsing
  • Keyphrase extraction
  • Knowledge base population (KBP)
  • More dialogue tasks
  • Semi-supervised learning
  • Frame-semantic parsing (FrameNet full-sentence analysis)

Exporting into a structured format

You can extract all the data into a structured, machine-readable JSON format with parsed tasks, descriptions and SOTA tables.

The instructions are in structured/README.md.
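To give a flavor of what such a structured export involves, here is a minimal sketch that parses a single Markdown results table into JSON records. The actual exporter described in structured/README.md may work differently; the table contents below are made-up placeholders:

```python
import json

def table_to_json(markdown: str) -> list:
    """Parse a Markdown results table into a list of records,
    one dict per data row, keyed by the header cells."""
    lines = [l for l in markdown.strip().split("\n") if l.strip()]
    headers = [c.strip() for c in lines[0].strip("|").split("|")]
    records = []
    for row in lines[2:]:  # skip the header and separator rows
        cells = [c.strip() for c in row.strip("|").split("|")]
        records.append(dict(zip(headers, cells)))
    return records

table = """| Model | Score | Paper / Source | Code |
| ------------- | :-----: | --- | --- |
| Model A | 91.2 | Paper A | Official |"""

print(json.dumps(table_to_json(table), indent=2))
```

The output is a JSON array with one object per result row, which is the kind of machine-readable format the export produces.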

Instructions for building the site locally

Instructions for building the website locally using Jekyll can be found here.

Comments
  • Conll-2003 uncomparable results

    Because of the small size of the CoNLL-2003 training set, some authors have incorporated the development set into the training data after tuning the hyperparameters. Consequently, not all results are directly comparable.

    Train+dev:

    • Flair embeddings (Akbik et al., 2018)
    • Peters et al. (2017)
    • Yang et al. (2017)

    Maybe those results should be marked with an asterisk.

    opened by ghaddarAbs 28
  • NLP Progress Graph

    Hi Sebastian, loved your idea for this repo. I was thinking we could have a graph, something like this,

    showing the progress of different tasks in NLP based on the updates to their Markdown files. I have created a shell script which clones your repo to my local machine, counts the number of commits for each file, and uses Python/pandas to preprocess the result, create a bar chart from it, and upload it to a free image-hosting service.

    Currently it shows the count of all commits for a specific file, but if we had a guideline for adding new results, fixing errors, and so on (maybe with different commit identifiers), then we could count the number of times a new result has been added to an NLP task. This could help in visualizing the most active and fastest-improving areas of NLP research.

    Currently the graph doesn't make much sense, but over time it will improve as we update with more results.

    Also, if you think something like this could benefit the community, I can create a cron job on my PC (I don't have a server) which will update the image URL with the latest graph, which you could show on the main page.

    opened by nirmalsinghania2008 16
  • YAML - pros and cons

    I'd like to discuss here the pros and cons of using YAML going forward, or whether we should stick with Markdown tables. Here are some pros and cons, mainly from @NirantK (in https://github.com/sebastianruder/NLP-progress/pull/116), @stared (in https://github.com/sebastianruder/NLP-progress/issues/43 and https://github.com/sebastianruder/NLP-progress/pull/64), and myself.

    Pros:

    • Easier trend spotting in performance improvements
    • Easy to create plots and visualizations going forward
    • Data is separated from presentation

    Cons:

    • Harder for contributors, e.g. HTML omissions can't be spotted without setting up Jekyll locally
    • The GitHub repo becomes useless for readers, who would rely exclusively on nlpprogress.com
    • Many visualizations (e.g. bar charts) based on performance numbers are no more useful than the raw tables

    Other opinions are welcome.

    opened by sebastianruder 10
  • What about other languages?

    Thanks for this work!

    These pages seem to cover the progress only for English (well, except MT). Do you have plans to include other languages?

    One extreme example is POS tagging and dependency parsing. UD has 60+ languages :) For others, there should be very limited data.

    opened by Hrant-Khachatrian 10
  • Incorrect BLEU score for English-Hindi MT System

    The BLEU score written in the document is 89.35, which looks wrong to me. The referenced paper mentions a BLEU score of 12.83, which itself is not state-of-the-art for the language pair.

    opened by kartikeypant 7
  • add G2P conversion task of schwa deletion to Hindi

    There's been a good body of previous work on schwa deletion in NLP/CL; you can see some of it in our paper. It would be good to keep track of the SOTA on it, since it's an important task for G2P conversion in North Indian languages.

    opened by aryamanarora 6
  • Added new task: data-to-text generation

    I have added a new task of Data-to-Text Natural Language Generation (D2T NLG). D2T NLG differs from other NLG tasks such as MT or QA in that the input to the text generation system is a structured representation (a table, knowledge graph, or JSON) instead of unstructured text. This document provides an overview of the three most recent and popular publicly available datasets for D2T NLG. With the advancements in deep learning, several novel neural methods are being proposed that are capable of generating accurate, fluent, and diverse texts.

    opened by ashishu007 6
  • Explain relation to paperswithcode.com

    Since the inception of this great repository of state-of-the-art results, alternatives such as paperswithcode.com have gained traction. This raises the question of the usefulness of keeping both resources up to date with the latest results. Could users and maintainers of this repository perhaps elaborate a bit, here and/or in the README, on how they see this resource relating to paperswithcode.com, and particularly what nlpprogress.com does well that the former does not?

    opened by cwenner 6
  • add TCAN results to LM

    To be honest, I'm a bit skeptical about their results and have asked them some questions via email. So let's put a hold on this pull request for now (unless the maintainers think it's fine), and I will update it once they have answered my questions.

    opened by Separius 6
  • Add missing LM SOTA result + # params + prev SOTA

    Add the missing LM ensemble which is SOTA for PTB. Add the second-in-line LM SOTA under a strict interpretation. Add the number of parameters for LM results.

    (unsure why it lists commits that have already been merged)

    opened by cwenner 6
  • Data in YAML for structure and plots

    Related to #43.

    Right now I did a demo for CCG. I didn't work on the plot form; I just wanted to show that it is possible and easy. Also, I think the data format can be standardized, so it would be simpler to add more complicated things (e.g. further comments, links to multiple implementations, etc.).

    See files in:

    • _data - data in YAML format
    • _includes - for ways of converting data into its presentations (tables, charts, etc)
    • ccg_supertagging.md to see how to include these

    IMHO YAML is cleaner to write and read than Markdown tables, so that is an advantage on its own. In my experience, contributors (ones who use GitHub) have no problem at all using YAML (vide https://p.migdal.pl/interactive-machine-learning-list/).

    Right now I generate the data through a Liquid template.
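    For concreteness, a YAML entry for a single dataset's results might look something like the following. This is only a sketch: the field names are hypothetical and not the actual schema used in the _data files:

    ```yaml
    # Hypothetical schema for one dataset's results
    dataset: CCGbank
    metric: Accuracy
    results:
      - model: Model A
        score: 96.1
        paper: "Paper A title"
        code: https://example.com/code  # optional
    ```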

    opened by stared 6
  • Pull request with new emotion detection dataset

    There seem to be some conflicts; I am not resolving them myself, as that might remove some code. Could you kindly resolve them and merge my request?

    opened by KhondokerIslam 0
  • Update paraphrase-generation.md

    MULTIPIT, MULTIPITCROWD and MULTIPITEXPERT

    Past efforts on creating paraphrase corpora only consider one paraphrase criterion, without taking into account the fact that the desired “strictness” of semantic equivalence in paraphrases varies from task to task (Bhagat and Hovy, 2013; Liu and Soh, 2022). For example, for the purpose of tracking unfolding events, “A tsunami hit Haiti.” and “303 people died because of the tsunami in Haiti” are sufficiently close to be considered paraphrases; whereas for paraphrase generation, the extra information “303 people dead” in the latter sentence may lead models to learn to hallucinate and generate more unfaithful content. In this paper, the authors present an effective data collection and annotation method to address these issues.

    MULTIPIT is a topical Paraphrase in Twitter corpus that consists of a total of 130k sentence pairs with crowdsourced (MULTIPITCROWD) and expert (MULTIPITEXPERT) annotations. MULTIPITCROWD is a large crowdsourced set of 125K sentence pairs that is useful for tracking information on Twitter.

    | Model | F1 | Paper / Source | Code |
    | ------------- | :-----: | --- | --- |
    | DeBERTaV3large | 92.00 | Improving Large-scale Paraphrase Acquisition and Generation | Unavailable |

    MULTIPITEXPERT is an expert-annotated set of 5.5K sentence pairs using a stricter definition that is more suitable for acquiring paraphrases for generation purposes.

    | Model | F1 | Paper / Source | Code |
    | ------------- | :-----: | --- | --- |
    | DeBERTaV3large | 83.20 | Improving Large-scale Paraphrase Acquisition and Generation | Unavailable |

    opened by adrienpayong 0
  • Add this to machine translation. Is it okay?

    opened by adrienpayong 0
Releases: v0.3

Owner: Sebastian Ruder, Research Scientist @DeepMind