Ludwig Benchmarking Toolkit

Overview

The Ludwig Benchmarking Toolkit (LBT) is a personalized benchmarking toolkit for running end-to-end benchmark studies across an extensible set of tasks, deep learning models, standard datasets, and evaluation metrics.

Getting set up

To get started, use the following commands to set up your conda environment.

git clone https://github.com/HazyResearch/ludwig-benchmarking-toolkit.git
cd ludwig-benchmarking-toolkit
conda env create -f environments/environment-linux.yaml  # or environments/environment-osx.yaml on macOS
conda activate lbt

Relevant files and directories

experiment-templates/task_template.yaml: Every task (e.g. text classification) has its own task template. The template specifies the model architecture (encoder and decoder structure), training parameters, and a hyperopt configuration for the task at hand. Most of the template's values are populated from hyperopt_config.yaml and dataset_metadata.yaml at training time. The sample task template located in experiment-templates/task_template.yaml is for text classification. See sample-task-templates/ for other examples.
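
For intuition, an abridged text-classification template might look like the sketch below (the field values are illustrative placeholders, not the shipped defaults):

input_features:
  - name: text
    type: text
    encoder: null      # filled in from the encoder's model-configs file
output_features:
  - name: class
    type: category
training: {}           # populated from hyperopt_config.yaml at training time
hyperopt:
  parameters: {}       # populated from hyperopt_config.yaml and the encoder config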

experiment-templates/hyperopt_config.yaml: provides ranges of values for the training parameters and hyperopt parameters that will populate the hyperopt configuration in the model template.
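
A rough sketch of the kind of content this file holds (the parameter names and schema are illustrative; consult the shipped file for the exact format):

training:
  learning_rate:
    space: log
    range: [0.00001, 0.001]
  batch_size:
    space: choice
    categories: [16, 32, 64]
executor:
  type: serial         # or ray
  num_samples: 10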

experiment-templates/dataset_metadata.yaml: contains a list of all available datasets (and associated metadata) over which hyperparameter optimization can be performed.
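
Each entry pairs a dataset name with the metadata LBT needs to build an experiment config, along these (illustrative) lines:

toy_agnews:
  input_features:
    - name: text
      type: text
  output_features:
    - name: class_index
      type: category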

model-configs/: contains all encoder-specific YAML files. Each file specifies possible values for the relevant encoder parameters that will be optimized over. Each file in this directory adheres to the naming convention {encoder_name}_hyperopt.yaml.
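
For example, a file such as rnn_hyperopt.yaml might expose encoder parameters along these lines (the keys shown are illustrative, not the exact schema):

parameters:
  text.cell_type:
    space: choice
    categories: [rnn, gru, lstm]
  text.state_size:
    space: choice
    categories: [128, 256, 512]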

hyperopt-experiment-configs/: houses all experiment configs built from the templates specified above (note: this folder is populated at run time) and is used when the hyperopt experiment is called. At a high level, each config file specifies the training and hyperopt information for a (task, dataset, architecture) combination, e.g. (text classification, SST2, BERT).

elasticsearch_config.yaml: an optional file to define if experiment data will be saved to an Elasticsearch database.
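
A minimal sketch, assuming a typical Elasticsearch connection (the exact keys are an assumption; match them to your deployment and to what LBT expects):

host: localhost        # assumption: connection details for your cluster
port: 9200
username: <username>
password: <password>
index: lbt-experiments # assumption: index that experiment results are written to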

USAGE

Command-Line Usage

Running your first TOY experiment:

For testing/setup purposes, we have included a toy dataset called toy_agnews. This dataset contains a small set of training, test, and validation samples from the original agnews dataset.

Before running a full-scale experiment, we recommend running an experiment locally on the toy dataset:

python experiment_driver.py --run_environment local --datasets toy_agnews --custom_models_list rnn

Running your first REAL experiment:

Steps for configuring + running an experiment:

  1. Declare and configure the search space of all non-model-specific training and preprocessing hyperparameters in the experiment-templates/hyperopt_config.yaml file. The parameters specified in this file will be used across all model experiments.

  2. Declare and configure the search space of model-specific hyperparameters in the {encoder}_hyperopt.yaml files in model-configs/.

    NOTE:

    • for both (1) and (2), see the Ludwig Hyperparameter Optimization guide for the training, preprocessing, and input/output feature parameters that can be used in the hyperopt search
    • if the executor type is Ray, the list of available search spaces and the input format differ slightly from the built-in Ludwig types. Please see the Ray Tune search space docs for more information.
  3. Run the following command, specifying the datasets, encoders, path to the Elasticsearch config file, run environment, and more:

        python experiment_driver.py \
            --experiment_output_dir <output_dir> \
            --run_environment {local, gcp} \
            --elasticsearch_config <path_to_config> \
            --dataset_cache_dir <path_to_dataset_cache> \
            --custom_model_list <encoders> \
            --datasets <datasets> \
            --resume_existing_exp bool

NOTE: Please use python experiment_driver.py -h to see the list of available datasets, encoders, and args.

API Usage

It is also possible to run and customize experiments using LBT's APIs. In the following section, we describe the three flavors of APIs included in LBT.

experiment API

This API provides an alternative method for running experiments. Note that running experiments via the API still requires populating the aforementioned configuration files.

from lbt.experiments import experiment

experiment(
    models=["rnn", "bert"],
    datasets=["agnews"],
    run_environment="local",
    elastic_search_config=None,
    resume_existing_exp=False,
)

tools API

This API provides access to two tooling integrations: TextAttack and Robustness Gym (RG). The TextAttack API can be used to generate adversarial attacks; users can also use the TextAttack interface to augment data files. The RG API empowers users to inspect model performance on a set of generic, pre-built slices and to add more slices for their specific datasets and use cases.

from lbt.tools.robustnessgym import RG 
from lbt.tools.textattack import attack, augment

# Robustness Gym API usage
RG(dataset_name="AGNews", models=["bert", "rnn"],
   path_to_dataset="agnews.csv",
   subpopulations=["entities", "positive_words", "negative_words"])

# TextAttack API usage
attack(dataset_name="AGNews", path_to_model="agnews/model/rnn_model",
       path_to_dataset="agnews.csv", attack_recipe=["CharSwapAugmenter"])

augment(dataset_name="AGNews", transformations_per_example=1,
        path_to_dataset="agnews.csv", augmenter=["WordNetAugmenter"])

visualizations API

This API provides out-of-the-box visualizations of learning behavior, model performance, and hyperparameter optimization, using the training and evaluation statistics generated during model training.

import lbt.visualizations

# compare model performance
lbt.visualizations.compare_performance_viz(
    dataset_name="toy_agnews",
    model_name="rnn",
    output_feature_name="class_index",
)

# compare training and validation trajectory
lbt.visualizations.learning_curves_viz(
    dataset_name="toy_agnews",
    model_name="rnn",
    output_feature_name="class_index",
)

# visualize hyperparameter optimization search
lbt.visualizations.hyperopt_viz(
    dataset_name="toy_agnews",
    model_name="rnn",
    output_dir="."
)

EXPERIMENT EXTENSIBILITY

Adding new custom datasets

Adding a custom dataset requires creating a new LBTDataset class and adding it to the dataset registry. Creating an LBTDataset object requires implementing three class methods: download, process and load. Please see the ToyAGNews dataset as an example.
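
A minimal sketch of such a class, assuming a register_dataset helper on the registry (the import path, decorator name, and method signatures below are assumptions; mirror the ToyAGNews implementation for the authoritative interface):

import pandas as pd

from lbt.datasets import LBTDataset, register_dataset  # hypothetical import path


@register_dataset("my_news")  # hypothetical registry hook
class MyNewsDataset(LBTDataset):
    def download(self):
        # fetch the raw files into the local dataset cache
        ...

    def process(self):
        # clean the raw files and write a single processed CSV
        ...

    def load(self) -> pd.DataFrame:
        # return the processed samples for training/evaluation
        return pd.read_csv("my_news_processed.csv")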

Adding new metrics

Adding custom evaluation metrics requires creating a new LBTMetric class and adding it to the metrics registry. Creating an LBTMetric object requires implementing the run class method, which takes as potential inputs a path to a model directory, a path to a dataset, the training batch size, and training statistics. Please see the pre-built LBT metrics for examples.
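
A minimal sketch of a custom metric, under the same caveats (the import path, decorator, run signature, and the structure of the training statistics are assumptions; the pre-built metrics show the real interface):

from lbt.metrics import LBTMetric, register_metric  # hypothetical import path


@register_metric("final_train_accuracy")  # hypothetical registry hook
class FinalTrainAccuracy(LBTMetric):
    def run(self, model_path, dataset_path, train_batch_size, train_stats):
        # assumption: train_stats holds per-epoch accuracy under these keys
        return train_stats["training"]["accuracy"][-1]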

ELASTICSEARCH RESEARCH DATABASE

To get credentials to upload experiments to the shared Elasticsearch research database, please fill out this form.
