A Lightweight Hyperparameter Optimization Tool 🚀

Overview

The mle-hyperopt package provides a simple and intuitive API for hyperparameter optimization of your Machine Learning Experiment (MLE) pipeline. It supports real, integer & categorical search variables and single- or multi-objective optimization.

Core features include the following:

  • API Simplicity: strategy.ask(), strategy.tell() interface & space definition.
  • Strategy Diversity: Grid, random, coordinate search, SMBO & wrapping around FAIR's nevergrad.
  • Search Space Refinement based on the top-performing configs via strategy.refine(top_k=10).
  • Export of configurations to execute via e.g. python train.py --config_fname config.yaml.
  • Storage & reload of search logs via strategy.save(), strategy.load().

For a quickstart, check out the notebook and the blog post 📖.

The API 🎮

from mle_hyperopt import RandomSearch

# Instantiate random search class
strategy = RandomSearch(real={"lrate": {"begin": 0.1,
                                        "end": 0.5,
                                        "prior": "log-uniform"}},
                        integer={"batch_size": {"begin": 32,
                                                "end": 128,
                                                "prior": "uniform"}},
                        categorical={"arch": ["mlp", "cnn"]})

# Simple ask - eval - tell API
configs = strategy.ask(5)
values = [train_network(**c) for c in configs]
strategy.tell(configs, values)
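
Here train_network stands in for any user-supplied objective; a toy stand-in (purely illustrative, not part of mle-hyperopt) could look like this:

# Hypothetical objective: strategies minimize by default
# (set maximize_objective=True on the strategy to maximize instead)
def train_network(lrate, batch_size, arch):
    arch_penalty = 0.05 if arch == "cnn" else 0.1
    return lrate ** 2 + 1.0 / batch_size + arch_penalty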

Implemented Search Types 🔭

| Search Type | Description | search_config |
| --- | --- | --- |
| GridSearch | Search over a list of discrete values | - |
| RandomSearch | Random search over variable ranges | refine_after, refine_top_k |
| CoordinateSearch | Coordinate-wise optimization with fixed defaults | order, defaults |
| SMBOSearch | Sequential model-based optimization | base_estimator, acq_function, n_initial_points |
| NevergradSearch | Multi-objective nevergrad wrapper | optimizer, budget_size, num_workers |
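
Strategy-specific options are passed via the search_config dictionary listed above. A minimal sketch for CoordinateSearch, where the specific order and defaults values are illustrative assumptions rather than documented settings:

from mle_hyperopt import CoordinateSearch

strategy = CoordinateSearch(
    real={"lrate": {"begin": 0.1, "end": 0.5, "bins": 5}},
    categorical={"arch": ["mlp", "cnn"]},
    search_config={"order": ["lrate", "arch"],                  # assumed: sweep lrate first, then arch
                   "defaults": {"lrate": 0.1, "arch": "mlp"}})  # assumed: values held fixed while inactive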

Variable Types & Hyperparameter Spaces 🌍

| Variable | Type | Space Specification |
| --- | --- | --- |
| real | Real-valued | Dict: begin, end, prior/bins (grid) |
| integer | Integer-valued | Dict: begin, end, prior/bins (grid) |
| categorical | Categorical | List: values to search over |
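
For grid-based strategies, the bins entry replaces prior and sets how many values of each real/integer variable make up the grid; a minimal sketch:

from mle_hyperopt import GridSearch

strategy = GridSearch(
    real={"lrate": {"begin": 0.1, "end": 0.5, "bins": 5}},         # 5 values between 0.1 and 0.5
    integer={"batch_size": {"begin": 32, "end": 128, "bins": 4}},  # 4 values between 32 and 128
    categorical={"arch": ["mlp", "cnn"]})

configs = strategy.ask(5)  # propose 5 of the 5 * 4 * 2 = 40 grid points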

Installation

A PyPI installation is available via:

pip install mle-hyperopt

Alternatively, you can clone this repository and install it manually:

git clone https://github.com/RobertTLange/mle-hyperopt.git
cd mle-hyperopt
pip install -e .

Further Options 🚴

Saving & Reloading Logs 🏪

# Store & reload search results via a .json log
strategy.save("search_log.json")
strategy = RandomSearch(..., reload_path="search_log.json")

# Or manually add info after class instantiation
strategy = RandomSearch(...)
strategy.load("search_log.json")

Search Decorator 🧶

from mle_hyperopt import hyperopt

@hyperopt(strategy_type="grid",
          num_search_iters=25,
          real={"x": {"begin": 0., "end": 0.5, "bins": 5},
                "y": {"begin": 0, "end": 0.5, "bins": 5}})
def circle(config):
    distance = abs(config["x"] ** 2 + config["y"] ** 2)
    return distance

strategy = circle()
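
Calling circle() runs the 25 grid evaluations and returns the strategy, so the usual post-processing helpers apply directly afterwards:

# Inspect the finished grid search
ids, configs, values = strategy.get_best(top_k=3)
strategy.print_ranking(top_k=3)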

Storing Configuration Files 📑

# Store 2 proposed configurations - eval_0.yaml, eval_1.yaml
strategy.ask(2, store=True)
# Store with explicit configuration filenames - conf_0.yaml, conf_1.yaml
strategy.ask(2, store=True, config_fnames=["conf_0.yaml", "conf_1.yaml"])
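
The stored .yaml files can then be consumed by the training script from the export example above. A minimal sketch, assuming PyYAML and an argparse-based train.py (neither of which is prescribed by mle-hyperopt):

# train.py -- hypothetical consumer of an exported configuration
import argparse
import yaml

parser = argparse.ArgumentParser()
parser.add_argument("--config_fname", type=str, default="eval_0.yaml")
args = parser.parse_args()

with open(args.config_fname) as f:
    config = yaml.safe_load(f)  # e.g. {"lrate": 0.3, "batch_size": 64, "arch": "cnn"}
# ... run training with config["lrate"], config["batch_size"], config["arch"] ...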

Retrieving Top Performers & Visualizing Results 📉

# Get the top k best performing configurations
id, configs, values = strategy.get_best(top_k=4)

# Plot timeseries of best performing score over search iterations
strategy.plot_best()

# Print out ranking of best performers
strategy.print_ranking(top_k=3)

Refining the Search Space of Your Strategy 🪓

# Refine the search space after 5 & 10 iterations based on top 2 configurations
strategy = RandomSearch(real={"lrate": {"begin": 0.1,
                                        "end": 0.5,
                                        "prior": "log-uniform"}},
                        integer={"batch_size": {"begin": 1,
                                                "end": 5,
                                                "prior": "uniform"}},
                        categorical={"arch": ["mlp", "cnn"]},
                        search_config={"refine_after": [5, 10],
                                       "refine_top_k": 2})

# Or do so manually using `refine` method
strategy.tell(...)
strategy.refine(top_k=2)

Note that the search space refinement is only implemented for random, SMBO and nevergrad-based search strategies.

Development & Milestones for Next Release

You can run the test suite via python -m pytest -vv tests/. If you find a bug or are missing your favourite feature, feel free to contact me @RobertTLange or create an issue 🤗.

Comments
  • [FEATURE] Hyperband

    Hi! I was wondering if the Hyperband hyperparameter algorithm is something you want implemented.

    I'm willing to spend some time working on it if there's interest.

    opened by colligant 5
  • [FEATURE] Option to pickle the whole strategy

    Right now strategy.save produces a JSON with the log. Any reason you didn't opt for (or have an option of) pickling the whole strategy? Two motivations for this:

    1. Not having to re-init the strategy with all the args/kwargs
    2. Not having to loop through tell! SMBO can take quite some time to do this.
    opened by alexander-soare 4
  • Type checking strategy.log could be made more flexible?

    Yay first issue! Congrats Robert, this is a great interface. Haven't used a hyperopt library in a while and this felt so easy to pick up.


    For example https://github.com/RobertTLange/mle-hyperopt/blob/57eb806e95c854f48f8faac2b2dc182d2180d393/mle_hyperopt/search.py#L251

    raises an error if my objective is numpy.float64. Also noticed https://github.com/RobertTLange/mle-hyperopt/blob/57eb806e95c854f48f8faac2b2dc182d2180d393/mle_hyperopt/search.py#L206

    Could we just have

    isinstance(strategy.log[0]['objective'], (float, int))
    

    which would cover the numpy types?

    opened by alexander-soare 4
  • Successive Halving, Hyperband, PBT

    • [x] Robust type checking with isinstance(self.log[0]["objective"], (float, int, np.integer, np.float))
    • [x] Add improvement method indicating if score is better than best stored one
    • [x] Fix logging message when log is stored
    • [x] Add save option for best plot
    • [x] Make json serializer more robust for numpy data types
    • [x] Add possibility to save as .pkl file by providing filename in .save method ending with .pkl (issue #2)
    • [x] Add args, kwargs into decorator
    • [x] Adds synchronous Successive Halving (SuccessiveHalvingSearch - issue #3)
    • [x] Adds synchronous HyperBand (HyperbandSearch - issue #3)
    • [x] Adds synchronous PBT (PBTSearch - issue #4 )
    opened by RobertTLange 1
  • [Feature] Synchronous PBT

    Move PBT ask/tell functionality from mle-toolbox experimental to mle-hyperopt. Is there any literature/empirical evidence for the importance of being asynchronous?

    enhancement 
    opened by RobertTLange 1
Releases (v0.0.7)
  • v0.0.7 (Feb 20, 2022)

    Added

    • Log reloading helper for post-processing.

    Fixed

    • Bug fix in mle-search with imports of dependencies. Needed to append path.
    • Bug fix with cleaning nested dictionaries. Have to make sure not to delete entire sub-dictionary.
  • v0.0.6 (Feb 20, 2022)

    Added

    • Adds a command line interface for running a sequential search given a Python script <script>.py containing a function main(config), a default configuration file <base>.yaml & a search configuration <search>.yaml. The main function should return a single scalar performance score (a minimal example script is sketched after this list). You can then start the search via:

      mle-search <script>.py --base_config <base>.yaml --search_config <search>.yaml --num_iters <search_iters>
      

      Or short via:

      mle-search <script>.py -base <base>.yaml -search <search>.yaml -iters <search_iters>
      
    • Adds doc-strings to all functionalities.
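
      A minimal <script>.py obeying the main(config) contract could look as follows (the function body is purely illustrative):

      # toy_script.py -- hypothetical target for mle-search
      def main(config):
          # config exposes the merged base + proposed search parameters (dict access assumed)
          return (config["x"] - 0.5) ** 2  # single scalar performance score, minimized by default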

    Changed

    • Make it possible to optimize parameters in nested dictionaries. Added helpers flatten_config and unflatten_config for mapping 'sub1/sub2/vname' <-> {sub1: {sub2: {vname: v}}}.
    • Make start-up message also print fixed parameter settings.
    • Cleaned up decorator with the help of Strategies wrapper.
  • v0.0.5 (Jan 5, 2022)

    Added

    • Adds possibility to store and reload entire strategies as pkl file (as asked for in issue #2).
    • Adds improvement method indicating if score is better than best stored one
    • Adds save option for best plot
    • Adds args, kwargs into decorator
    • Adds synchronous Successive Halving (SuccessiveHalvingSearch - issue #3)
    • Adds synchronous HyperBand (HyperbandSearch - issue #3)
    • Adds synchronous PBT (PBTSearch - issue #4)
    • Adds option to save log in tell method
    • Adds small torch mlp example for SH/Hyperband/PBT w. logging/scheduler
    • Adds print welcome/update message for strategy specific info

    Changed

    • Major internal restructuring:
      • clean_data: Get rid of extra data provided in configuration file
      • tell_search: Update model of search strategy (e.g. SMBO/Nevergrad)
      • log_search: Add search specific log data to evaluation log
      • update_search: Refine search space/change active strategy etc.
    • Also allow to store checkpoint of trained models in tell method.
    • Fix logging message when log is stored
    • Make json serializer more robust for numpy data types
    • Robust type checking with isinstance(self.log[0]["objective"], (float, int, np.integer, np.float))
    • Update NB to include mle-scheduler example
    • Make PBT explore robust for integer/categorical valued hyperparams
    • Calculate total batches & their sizes for hyperband
  • v0.0.3 (Oct 24, 2021)

    • Fixes CoordinateSearch active grid search dimension updating. We have to account for the fact that previous coordinates are not evaluated again after switching the active variable.
    • Generalizes NevergradSearch to wrap around all search strategies.
    • Adds rich logging to all console print statements.
    • Updates documentation and adds text to getting_started.ipynb.
  • v0.0.2 (Oct 20, 2021)

    • Fixes import bug when using PyPi installation.
    • Enhances documentation and test coverage.
    • Adds search space refinement for nevergrad and smbo search strategies via refine_after and refine_top_k:
    strategy = SMBOSearch(
            real={"lrate": {"begin": 0.1, "end": 0.5, "prior": "uniform"}},
            integer={"batch_size": {"begin": 1, "end": 5, "prior": "uniform"}},
            categorical={"arch": ["mlp", "cnn"]},
            search_config={
                "base_estimator": "GP",
                "acq_function": "gp_hedge",
                "n_initial_points": 5,
                "refine_after": 5,
                "refine_top_k": 2,
            },
            seed_id=42,
            verbose=True
        )
    
    • Adds additional strategy boolean option maximize_objective to maximize instead of performing default black-box minimization.
  • v0.0.1 (Oct 16, 2021)

    Base API implementation:

    from mle_hyperopt import RandomSearch
    
    # Instantiate random search class
    strategy = RandomSearch(real={"lrate": {"begin": 0.1,
                                            "end": 0.5,
                                            "prior": "log-uniform"}},
                            integer={"batch_size": {"begin": 32,
                                                    "end": 128,
                                                    "prior": "uniform"}},
                            categorical={"arch": ["mlp", "cnn"]})
    
    # Simple ask - eval - tell API
    configs = strategy.ask(5)
    values = [train_network(**c) for c in configs]
    strategy.tell(configs, values)
    