NLP library designed for reproducible experimentation management

Overview

Welcome to the Transfer NLP library, a framework built on top of PyTorch to promote reproducible experimentation and Transfer Learning in NLP.

You can get an overview of the high-level API on this Colab Notebook, which shows how to use the framework on several examples. All DL-based examples in these notebooks embed in-cell Tensorboard training monitoring!

For an example of fine-tuning a pre-trained model, we provide a short executable tutorial on BertClassifier fine-tuning in this Colab Notebook.

Set up your environment

mkvirtualenv transfernlp
workon transfernlp

git clone https://github.com/feedly/transfer-nlp.git
cd transfer-nlp
pip install -r requirements.txt

To use Transfer NLP as a library:

# to install the experiment builder only
pip install transfernlp
# to install Transfer NLP with PyTorch and Transfer Learning in NLP support
pip install transfernlp[torch]

or

pip install git+https://github.com/feedly/transfer-nlp.git

to get the latest state before new releases.

To use Transfer NLP with associated examples:

git clone https://github.com/feedly/transfer-nlp.git
cd transfer-nlp
pip install -r requirements.txt

Documentation

API documentation and an overview of the library can be found here.

Reproducible Experiment Manager

The core of the library is an experiment builder: you define the different objects that your experiment needs, and the configuration loader builds them for you. For reproducible research and easy ablation studies, the library enforces the use of configuration files for experiments. As people have different tastes for what constitutes a good experiment file, the library allows experiments to be defined in several formats:

  • Python Dictionary
  • JSON
  • YAML
  • TOML

In Transfer-NLP, an experiment config file contains all the information needed to entirely define the experiment. This is where you declare the names of the different components your experiment will use, along with their hyperparameters. Transfer-NLP makes use of the Inversion of Control pattern, which lets you define any class / method / function you might need; the ExperimentConfig class will build a dictionary and instantiate your objects accordingly.

To use your own classes inside Transfer-NLP, you need to register them using the @register_plugin decorator. Instead of using a different registry for each kind of component (Models, Data loaders, Vectorizers, Optimizers, ...), a single registry is used, in order to allow full customization.

If you use Transfer NLP as a dev dependency only, you might want to use it purely declaratively and call register_plugin() on the objects you want to use at experiment run time.
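For instance, registering the classes referenced in the YAML example below might look like the following minimal sketch; the constructor arguments simply mirror the parameter names used in the config, and the import path is the one used elsewhere in this document:

from transfer_nlp.plugins.config import register_plugin

@register_plugin
class MyVectorizer:
    def __init__(self, vectorizer_parameter: str):
        self.vectorizer_parameter = vectorizer_parameter

@register_plugin
class MyDataLoader:
    def __init__(self, data_parameter: str, data_vectorizer: MyVectorizer):
        self.data_parameter = data_parameter
        self.data_vectorizer = data_vectorizer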

Here is an example of how you can define an experiment in a YAML file:

data_loader:
  _name: MyDataLoader
  data_parameter: foo
  data_vectorizer:
    _name: MyVectorizer
    vectorizer_parameter: bar

model:
  _name: MyModel
  model_hyper_param: 100
  data: $data_loader

trainer:
  _name: MyTrainer
  model: $model
  data: $data_loader
  loss:
    _name: PyTorchLoss
  tensorboard_logs: $HOME/path/to/tensorboard/logs
  metrics:
    accuracy:
      _name: Accuracy

Any object can be defined through a class, method or function, given a _name parameter followed by its own parameters. Experiments are then loaded and instantiated using ExperimentConfig(experiment=experiment_path_or_dict).

Some considerations:

  • Default parameters can be omitted from the experiment file.

  • If an object is used in different places, you can refer to it using the $ symbol; for example, here the trainer object uses the data_loader instantiated elsewhere. No ordering of objects is required.

  • For paths, you might want to use environment variables so that other machines can also run your experiments. In the previous example, you would run e.g. ExperimentConfig(experiment=yaml_path, HOME=Path.home()) to instantiate the experiment and replace $HOME with your machine's home path.

  • The config instantiation allows for arbitrarily complex settings with nested dicts / lists.

You can have a look at the tests for examples of experiment settings the config loader can build. Additionally, we provide runnable experiments in experiments/.
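As a rough sketch, loading the YAML experiment above and retrieving the built objects might look like this (the dictionary-style access by config name is an assumption, not a documented guarantee):

from pathlib import Path
from transfer_nlp.plugins.config import ExperimentConfig

experiment = ExperimentConfig(experiment='experiment.yaml', HOME=Path.home())
trainer = experiment['trainer']  # assumed: built objects are accessible by their config name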

Transfer Learning in NLP: flexible PyTorch Trainers

For deep learning experiments, we provide a BaseIgniteTrainer in transfer_nlp.plugins.trainers.py. This basic trainer takes a model and some data as input and runs a whole training pipeline. We make use of the PyTorch-Ignite library to monitor events during training (logging metrics, manipulating learning rates, checkpointing models, etc.). Tensorboard logging is also included as an option: you just have to specify a tensorboard_logs path parameter in the config file. Then run tensorboard --logdir=path/to/logs in a terminal and you can monitor your experiment while it's training! Tensorboard comes with very nice utilities to keep track of the norms of your model weights, histograms, distributions, embedding visualizations, etc., so we really recommend using it.

We provide a SingleTaskTrainer class which you can use for any supervised setting dealing with a single task. We are working on a MultiTaskTrainer class to deal with multi-task settings, and a SingleTaskFineTuner for fine-tuning large models.

Use cases

Here are a few use cases for Transfer NLP:

  • You have all your classes / methods / functions ready: Transfer NLP provides a clean way to centralize loading and executing your experiments
  • You have all your classes but would like to benchmark multiple configuration settings: the ExperimentRunner class lets you run a set of experiments sequentially and generates personalized reporting (you only need to implement your report method in a custom ReporterABC class); see the sketch after this list
  • You want to experiment with training deep learning models but feel overwhelmed by all the boilerplate code in the GitHub projects of SOTA models. Transfer NLP encourages separation of important objects so that you can focus on the PyTorch Module implementation and let the trainers deal with the training part (while still controlling most of the training parameters through the experiment file)
  • You want to experiment with more advanced training strategies, but you are more interested in the ideas than the implementation details. We are working on improving the advanced trainers so that it becomes easier to try new ideas for multi-task settings, fine-tuning strategies or model adaptation schemes.
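As a hedged sketch, a sequential benchmarking run might be launched as follows; the parameter names are taken from the run_all signature quoted in the comments section below and may differ across versions, and the file paths are placeholders:

from pathlib import Path
from transfer_nlp.runner.experiment_runner import ExperimentRunner

ExperimentRunner.run_all(experiment='experiments/my_experiment.json',
                         experiment_config='experiments/configs.json',  # placeholder path
                         report_dir='reports',
                         trainer_config_name='trainer',
                         reporter_config_name='reporter',
                         HOME=Path.home())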

Slack integration

While experimenting with your own models / data, training might take some time. To get notified when your training finishes or crashes, you can use the simple knockknock library by the folks at HuggingFace, which adds a simple decorator to your running function to notify you via Slack, e-mail, etc.
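For example, with the Slack notifier (a minimal sketch; the webhook URL and channel are placeholders you have to create in your own Slack workspace):

from knockknock import slack_sender

@slack_sender(webhook_url='https://hooks.slack.com/services/...', channel='#experiments')
def run_experiment():
    # load and run your experiment here, e.g. with ExperimentConfig
    ...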

Some objectives to reach:

  • Include examples using state of the art pre-trained models
  • Incorporate linguistic properties into models
  • Experiment with RL for sequential tasks
  • Include probing tasks to try to understand the properties that are learned by the models

Acknowledgment

The library has been inspired by the reading of "Natural Language Processing with PyTorch" by Delip Rao and Brian McMahan. The experiments in experiments/, the Vocabulary building block and the embeddings nearest-neighbors utilities are taken or adapted from the code provided in the book.

Comments
  • Pytorch Lightning as a back-end

    Hi! Check out Pytorch Lightning as an option for your backend! We're looking for awesome projects implemented in Lightning.

    https://github.com/williamFalcon/pytorch-lightning

    opened by williamFalcon 3
  • have the possibility to build objects with a function instead of a class

    When you want to experiment with someone else's code, you don't want to copy-paste their code.

    If you want to use a class AwesomeClass from an awesome github repo, you can do:

    from transfer_nlp.plugins.config import register_plugin
    from awesome_repo.module import AwesomeClass
    
    register_plugin(AwesomeClass)
    

    and then use it in your experiments.

    However, when reusing complex objects, it might be complicated to configure them. An example is the pre-trained model from the pytorch-pretrained-bert repo, where you can build complex models with nice one-liners such as model = BertForSequenceClassification.from_pretrained('bert-base-uncased', num_labels=4)

    It's possible to encapsulate these into other classes and have Transfer NLP build them, but it can feel awkward and adds unnecessary complexity / lines of code compared to the initial one-liner. An alternative is to build these objects with a function: in the previous example we would only write:

    @register_function
    def bert_classifier(bert_version: str='bert-base-uncased', num_labels: int=4):
        return BertForSequenceClassification.from_pretrained(pretrained_model_name_or_path=bert_version, num_labels=num_labels)
    

    and we could then use functions just like classes and methods in the config loading.

    opened by petermartigny 2
  • caching objects in experiment runner

    Some read-only objects can take a while to load in experiments (embeddings, datasets, etc.). The current ExperimentRunner always recreates the entire experiment. It would be nice if we could keep some objects in memory...

    Proposal

    add a cached property in run_all

        def run_all(experiment: Union[str, Path, Dict],
                    experiment_cache: Union[str, Path, Dict],
                    experiment_config: Union[str, Path],
                    report_dir: Union[str, Path],
                    trainer_config_name: str = 'trainer',
                    reporter_config_name: str = 'reporter',
                    **env_vars) -> None:
    

    The cache is just another experiment json. It would be loaded only once, at the very beginning, using only the env_vars. Any resulting objects would then be added to env_vars when running each experiment. Objects can optionally implement a Resettable class that has a reset method, which would be called once before each experiment.

    incorrect usage of this feature could lead to non-reproducibility issues, but through docs we could make it clear this should only be for read-only objects. i think it would be worth doing...

    opened by kireet 1
  • cleanup config tests, also fixes #28

    I wanted to make the config tests a bit more sane, minimize the number of temporary classes we needed to create, and improve naming. Also found issue #28 and fixed it.

    opened by kireet 1
  • unsubstituted parameter doesn't cause an error

    something like this won't cause an error:

    { 
       "item": {
           "_name": "foo",
           "param":"$bar"
        }
    }
    

    even if we don't set a value for bar. this can lead to easily misconfigured objects.

    opened by kireet 1
  • IoC refactor

    • Refactor the basic trainer in an IoC pattern, with a single registry for all registrable classes, allowing for maximum customization
    • Separate the example experiments from the library
    • Adapt the examples to the new logic
    • Set cuda as optional in the config file
    opened by petermartigny 1
  • TPU + 16 bit

    hey!

    Not sure if you've seen: https://github.com/williamFalcon/pytorch-lightning.

    The fastest growing PyTorch front-end project.

    We're also now venture funded so we have a fulltime team working on this and will be around for a very long time :)

    https://medium.com/pytorch/pytorch-lightning-0-7-1-release-and-venture-funding-dd12b2e75fb3?postPublishedType=repub

    opened by williamFalcon 0
  • Optional torch imports for trainers

    We import torch modules in the __init__.py of trainers. This PR makes these imports optional, in the case where we don't have torch installed but still want to use the base TrainerABC class

    opened by petermartigny 0
  • move TrainerABC to a separate file

    This PR moves the TrainerABC class to a separate file, so that someone who wants to use the experiment runner class can do so without having to install torch.

    opened by petermartigny 0
  • Refactor/experiment config

    This PR does the refactoring defined in #76 to have a more easily maintainable configuration logic.

    Also, we remove the pytorch modules that were included in the registry by default. This allows non-DL projects to use the config part of the library.

    opened by petermartigny 0
  • simplify configs reporting

    This PR does a few things:

    • Get rid of ini .cfg files saving
    • Before doing the sequential experiments, we copy the configs, experiment and cache files to a global-reporting directory.
    • This global-reporting directory will also host the outputs from the reporter's report_globally() call
    opened by petermartigny 0
  • [ExperimentRunner] Default value of experiment_cache cause run_all to fail

    ExperimentRunner.run_all fails if experiment_cache is None.

    The issue comes from line 109, where the default value for the experiment cache (None) is not handled correctly: https://github.com/feedly/transfer-nlp/blob/master/transfer_nlp/runner/experiment_runner.py#L109

    opened by Mathieu4141 0
  • Check that all registrables are registered

    Currently, objects are built one by one and when one fails it throws an error.

    It would be great to have a quick pass before instantiating objects to check that all registrable names / aliases are actually registered, and throw an error at that point.

    opened by petermartigny 0
  • Downloader Plugin

    From the talk today, one good point raised was that reproducibility problems often stem from data inconsistencies. To that end, I think we should have a DataDownloader component that can download data from URLs and save it locally to disk.

    • If the files exist, the downloader can skip the download
    • the downloader should calculate checksums for downloaded files. it should produce a checksums.cfg file to simplify reusing these in configuration later
    • the downloader should allow checksums to be configured in the experiment file. when set, the downloader would verify the downloaded file is the same as the one specified in the experiment.

    so an example json config could be:

    {
      "_name": "Downloader",
      "local_dir": "$my_path",
      "checksums": "$WORK_DIR/checksums_2019_05_23.cfg", <-- produced by a previous download 
      "sentences.txt.gz": {
        "url": "$BASE_URL/sentences.txt.gz",
        "decompress": true
      },
      "word_embeddings.npy": {
        "url": "$BASE_URL/word_embeddings.npy"
      }
    }
    
    opened by kireet 1
Releases (latest: v0.1.6)
  • v0.1.5(Jun 25, 2019)

  • v0.1.3(May 29, 2019)

  • v0.1.2(May 28, 2019)

  • v0.1.1(May 28, 2019)

  • v0.1(May 28, 2019)

    This is a first stable version for Transfer NLP, allowing users to:

    • Keep track of experiments and enforce reproducible research
    • Combine custom and open-source code into controlled experiments

    Here are a few features available in the release:

    • Configuring all objects from an experiment using a json file
    • Running sequential jobs for the same experiment using different sets of parameters (parameter tuning, ablation studies...)
    • Keeping track of your experiments and making them reproducible / incrementally improvable
    • Allowing dynamic re-creation of any instantiated object during training through object factories
    • Using several basic building blocks: Vocabulary class, PyTorch optimizer, Predictors...
    • Transfer Learning: use the BasicTrainer to fine-tune pre-trained models to your custom downstream tasks
