AutoML Survey

An (in-progress) AutoML survey focusing on practical systems.

Overview
This project is a community effort to construct and maintain an up-to-date, beginner-friendly introduction to AutoML, focusing on practical systems. AutoML is a large field that continues to grow daily, so we cannot hope to provide a comprehensive description of every interesting idea or approach. Instead, we focus on practical AutoML systems and spread outwards from there into the methodologies and theoretical concepts that power them. Our intuition is that, even though many interesting ideas are still at the research stage, the most mature and battle-tested concepts are those that have been successfully applied to construct practical AutoML systems.

To this end, we are building a database of qualitative criteria for all AutoML systems we've heard of. We define an AutoML system as a software project that can be used by non-experts in machine learning to build effective ML pipelines on at least some common domains and tasks. It doesn't matter whether it's open-source or commercial, a library, an application with a GUI, or a cloud service. What matters is that it is intended to be used in practice, as opposed to, say, a reference implementation of a novel AutoML strategy in a Jupyter notebook.

Features of an AutoML system

For each system, we are creating a system card that describes, in our opinion, its most relevant features, from both the scientific and the engineering points of view. Each card is a YAML-based definition, and most of its fields are self-explanatory.

💡 Check data/systems/_template.yml for a starting template.

Basic information

Characteristics about the basic information of the system as a software product.

  • name (str): Name of the system.
  • description (str): A short (2-4 sentences) description of the system.
  • website (str): The URL of the main website or documentation.
  • open_source (bool): Whether the system is open-source.
  • institutions (list[str]): List of businesses or academic institutions that directly support the development of the system, and/or hold intellectual property over it.
  • repository (str): If it's open-source, link to a public source code repository, otherwise null.
  • license (str): If it's open-source, a license key, otherwise null.
  • references (list[str]): List of links to relevant papers, preferably DOIs or other persistent identifiers, although links to arxiv.org or other repositories are also acceptable. Sort the list by most relevant papers rather than by date.
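
For illustration, the basic information block of a hypothetical system card could look as follows. Every value here is invented for a fictional system; data/systems/_template.yml and the schema in scripts/models.py remain the authoritative reference for the exact field names and layout.

  name: ExampleAutoML
  description: >
    A fictional AutoML library used here only to illustrate the card format.
    It builds tabular classification pipelines on top of scikit-learn.
  website: https://example.org/automl
  open_source: true
  institutions:
    - Example University
  repository: https://github.com/example/automl
  license: MIT
  references:
    - https://doi.org/10.0000/placeholder   # most relevant paper first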

User interfaces

Characteristics describing how the users interact with the system.

  • cli (bool): Whether the system has a command-line interface.
  • gui (bool): Whether the system has a graphical user interface.
  • http (bool): Whether the system can be used through an HTTP RESTful API.
  • library (bool): Whether the system can be linked as a code library.
  • programming_languages (list[str]): List of programming languages in which the system can be used, i.e., it is either natively coded in that language or there are maintained bindings (as opposed to using language X's standard way to call code from language Y).
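
Continuing the hypothetical example above, the user interface fields could be filled in like this (values again invented):

  cli: true
  gui: false
  http: false
  library: true
  programming_languages:
    - python        # natively implemented in Python, no maintained bindings elsewhere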

Domains

Characteristics describing the domains in which the system can be applied, which roughly correspond to the types of input data that the system can handle.

  • domains (list[str]): Domains in which the system can be deployed. Valid values are:
    • images
    • nlp
    • tabular
    • time_series
  • multi_domain (bool): Whether the system supports multiple domains in a single workflow, e.g., by allowing multiple inputs of different types simultaneously.
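
A possible domains block for the same fictional system:

  domains:
    - tabular
    - time_series
  multi_domain: false     # each workflow handles a single input type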

Techniques

Characteristics describing the actual models and techniques used in the system, and the underlying ML libraries where those techniques are implemented.

  • techniques (list[str]): List of high-level techniques that are available in the system, broadly classified by model family. Valid values are:
    • linear_models
    • trees
    • bayesian
    • kernel_machines
    • graphical_models
    • mlp
    • cnn
    • rnn
    • pretrained
    • ensembles
    • ad_hoc ( 📝 indicates non-ML algorithms, e.g., tokenizers)
  • distillation (bool): Whether the system supports model distillation
  • ml_libraries (list[str]): List of ML libraries that support the system, i.e., where the techniques are actually implemented, if any. Values are free-form strings; some examples are:
    • scikit-learn
    • keras
    • pytorch
    • nltk
    • spacy
    • transformers
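
A hypothetical techniques block, with free-form library names:

  techniques:
    - linear_models
    - trees
    - ensembles
  distillation: false
  ml_libraries:
    - scikit-learn
    - xgboost             # illustrative; any library name is acceptable here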

Tasks

Characteristics describing the types of tasks, or problems, in which the system can be applied, which roughly correspond to the types of outputs supported.

  • tasks (list[str]): List of high-level tasks the system can perform automatically. Valid values are:
    • classification
    • structured_prediction
    • structured_generation
    • unstructured_generation
    • regression
    • clustering
    • imputation
    • segmentation
    • feature_preprocessing
    • feature_selection
    • data_augmentation
    • dimensionality_reduction
    • data_preprocessing ( 📝 domain-agnostic data preprocessing such as normalization and scaling)
    • domain_preprocessing ( 📝 refers to domain-specific preprocessing, e.g., stemming)
  • multi_task (bool): Whether the system supports multiple tasks in a single workflow, e.g., by allowing multiple output heads from the same neural network.
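
A hypothetical tasks block:

  tasks:
    - classification
    - regression
    - feature_preprocessing
    - feature_selection
  multi_task: false       # one task per workflow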

Search strategies

Characteristics describing the optimization/search strategies used for model search and/or hyperparameter tuning.

  • search_strategies (list[str]): List of high-level search strategies that are available in the system. Valid values are:
    • random
    • evolutionary
    • gradient_descent
    • hill_climbing
    • bayesian
    • grid
    • hyperband
    • reinforcement_learning
    • constructive
    • monte_carlo
  • meta_learning (list[str]): If the system includes meta-learning, list of broadly classified techniques used. Valid values are:
    • portfolio
    • warm_start
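
A hypothetical search strategies block:

  search_strategies:
    - bayesian
    - random
  meta_learning:
    - warm_start          # broad category of the meta-learning technique used, if any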

Search space

Characteristics describing the search space, the types of hyperparameters that can be optimized, and the types of ML pipelines that can be represented in this space.

  • search_space: High-level characteristics of the hyperparameter search space.
    • hierarchical (bool): If there are hyperparameters that only make sense conditioned on others.
    • probabilistic (bool): If the hyperparameter space has an associated probabilistic model.
    • differentiable (bool): If the hyperparameter space can be searched with gradient descent.
    • automatic (bool): If the global structure of the hyperparameter space is inferred automatically from, e.g., type annotations or model documentation, as opposed to being explicitly defined by the developers or the user.
    • hyperparameters (list[str]): Types of hyperparameters that can be optimized. Valid values are:
      • continuous
      • discrete
      • categorical
      • conditional
    • pipelines: Types of pipelines that can be discovered by the AutoML process. Each of the following keys is boolean.
      • single (bool): A single estimator (or model in general)
      • fixed (bool): A fixed pipeline with several, but predefined, steps
      • linear (bool): A variable-length pipeline where each step feeds on the immediately previous output
      • graph (bool): A pipeline with an arbitrary graph shape, where each step can feed on any of the previous outputs
    • robust (bool): Whether the search space contains potentially invalid pipelines that are only discovered when evaluated, e.g., allowing a transformer that outputs sparse data to feed into an estimator that only accepts dense input.
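
The search space description is the most nested part of the card. For the fictional system used in the previous excerpts, it could read:

  search_space:
    hierarchical: true
    probabilistic: true
    differentiable: false
    automatic: false
    hyperparameters:
      - continuous
      - discrete
      - categorical
      - conditional
    pipelines:
      single: true
      fixed: true
      linear: true
      graph: false
    robust: false         # invalid pipelines are excluded by construction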

Software architecture

Other characteristics describing general features of the system as a software product.

  • extensible (bool): Whether the system is designed to be extensible, in the sense that a user can easily add a new type of model, search algorithm, etc., without needing to modify other parts of the system.
  • accessible (bool): Whether the models obtained from the AutoML process can be freely inspected by the user up to the level of individual parameters (e.g., neural network weights).
  • portable (bool): Whether the models obtained can be exported out of the AutoML system, either in a standard format or, at least, in a format native to the underlying ML library, such that they can be deployed on another platform without depending on the AutoML system itself.
  • computational_resources: Computational resources that, if available, can be leveraged by the system.
    • gpu (bool): Whether the system supports GPUs.
    • tpu (bool): Whether the system supports TPUs.
    • cluster (bool): Whether the system supports cluster-based parallelism.
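
Finally, a hypothetical software architecture block:

  extensible: true
  accessible: true
  portable: true          # e.g., fitted pipelines can be exported in the underlying library's native format
  computational_resources:
    gpu: false
    tpu: false
    cluster: true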

How to contribute

If you are an author or a user of any practical AutoML system that roughly fits the previous criteria, we would love to have your contributions. You can add new systems, add information for existing ones, or fix anything that is incorrect.

To do this, either create a new file or modify an existing one in data/systems. Once done, run make check to ensure that your modifications are valid with respect to the schema defined in scripts/models.py. If you need to add new fields, or new values to any of the enumerations, feel free to modify the corresponding schema as well (and update both data/systems/_template.yml and this README).

Once validated, you can open a pull request.

License

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

Owner

AutoGOAL - Democratizing Machine Learning