An end-to-end framework for mixed-integer optimization with data-driven learned constraints.

Overview

OptiCL

OptiCL is an end-to-end framework for mixed-integer optimization (MIO) with data-driven learned constraints. We address a problem setting in which a practitioner wishes to optimize decisions according to some objective and constraints, but has no known functions relating the decisions to the outcomes of interest. We propose to learn predictive models for these outcomes using machine learning, and to subsequently optimize decisions by embedding the learned models in a larger MIO formulation.

The framework and full methodology are detailed in our manuscript, Mixed-Integer Optimization with Constraint Learning.

How to use OptiCL

You can install the OptiCL package locally by cloning the repository and running pip install . from the root directory of the repo. You can then import opticl in Python; see the example notebooks for specific usage of the functions.

The OptiCL pipeline

Our pipeline requires two inputs from a user:

  • Training data, with features classified as contextual variables, decisions, and outcomes (a toy example follows this list).
  • An initial conceptual model, which is defined by specifying the decision variables and any domain-driven fixed constraints or deterministic objective terms.
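For concreteness, here is a hypothetical toy example of the first input; the column names and values are purely illustrative:

```python
# A purely illustrative (hypothetical) training table: contextual features,
# decision variables, and an outcome to be learned as a function of them.
import pandas as pd

data = pd.DataFrame({
    "context_region":  [1, 2, 1, 3],          # contextual variable (observed, not optimized)
    "decision_budget": [4.0, 2.5, 3.0, 5.0],  # decision variable
    "decision_staff":  [2, 1, 2, 3],          # decision variable
    "outcome_quality": [0.8, 0.4, 0.6, 0.9],  # outcome of interest
})
```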

Given these inputs, we implement a pipeline that:

  1. Learns predictive models for the outcomes of interest using a model training and selection pipeline with cross-validation.
  2. Efficiently characterizes the feasible decision space, or "trust region," using the convex hull of the observed data.
  3. Embeds the learned models and trust region into an MIO formulation, which can then be solved using a Pyomo-supported MIO solver (e.g., Gurobi); a minimal sketch of this step appears below.
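
For intuition, here is a minimal, self-contained sketch of steps 1 and 3 for the simplest case, a linear model, written against scikit-learn and Pyomo directly rather than the OptiCL API; the data, variable bounds, and outcome threshold are illustrative assumptions, and the convex-hull trust region of step 2 is omitted for brevity:

```python
# Hedged sketch of constraint learning with a linear model (not the OptiCL
# internals): learn an outcome as a function of the decisions, then embed
# the fitted model as a constraint in a Pyomo formulation.
import numpy as np
import pyomo.environ as pyo
from sklearn.linear_model import LinearRegression

# Step 1: learn a predictive model for the outcome from observed decisions.
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(100, 2))                  # observed decisions
y = 2.0 * X[:, 0] - X[:, 1] + rng.normal(0, 0.1, 100)  # observed outcome
reg = LinearRegression().fit(X, y)

# Step 3: embed the learned model as a constraint in the formulation.
m = pyo.ConcreteModel()
m.x = pyo.Var(range(2), bounds=(0, 10))                # decision variables
m.obj = pyo.Objective(expr=m.x[0] + m.x[1], sense=pyo.maximize)

# Learned constraint: the predicted outcome must not exceed a threshold.
pred = float(reg.intercept_) + sum(float(reg.coef_[j]) * m.x[j] for j in range(2))
m.learned = pyo.Constraint(expr=pred <= 5.0)

# Solve with any Pyomo-supported solver (e.g., 'gurobi' or 'glpk').
pyo.SolverFactory('glpk').solve(m)
print([pyo.value(m.x[j]) for j in range(2)])
```

For model classes such as tree ensembles or neural networks, the embedding instead requires auxiliary variables and constraints; automating those transformations across model types is what the pipeline provides.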

OptiCL requires no manual specification of a trained ML model, although the end-user can optionally restrict the selection pipeline to a subset of model types. Furthermore, we expose the underlying trained models within the pipeline, providing transparency and allowing the predictive models to be evaluated externally.

Examples

We illustrate the full OptiCL pipeline in three notebooks:

  • A case study on food basket optimization for the World Food Programme (notebooks/WFP/The Palatable Diet Problem.ipynb): This notebook presents a simplified version of the case study in the manuscript. It shows how to train and select models for a single learned outcome, define a conceptual model with a known objective and constraints, and solve the MIO with an additional learned constraint.
  • A general pipeline overview (notebooks/Pipeline/Model_embedding.ipynb): This notebook demonstrates the general features of the pipeline, including the procedure for training and embedding models for multiple outcomes, the specification of each outcome as either a constraint or an objective term, and the incorporation of contextual features and domain-driven constraints.
  • Model verification (notebooks/Pipeline/Model_Verification_Regression.ipynb, notebooks/Pipeline/Model_Verification_Classification.ipynb): These notebooks show the training and embedding of a single model and compare the sklearn predictions to the MIO predictions to verify the MIO embeddings. The classification notebook also details how constraints are linearized in the binary classification setting; a standalone sketch of this linearization follows the list.
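
As a hedged illustration of that linearization (using scikit-learn directly, not the notebooks' code): for logistic regression, requiring P(y = 1 | x) >= tau is equivalent to the linear condition w·x + b >= log(tau / (1 - tau)), because the sigmoid is monotone. The data and probability threshold below are illustrative:

```python
# Hypothetical example: linearizing a probability constraint on a binary
# classifier. For logistic regression, P(y=1|x) >= tau is equivalent to
# the linear constraint w.x + b >= log(tau / (1 - tau)).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
clf = LogisticRegression().fit(X, y)

tau = 0.8
cutoff = np.log(tau / (1 - tau))   # logit of the target probability

# Verification in the spirit of the notebooks: the linearized test agrees
# with the probability test on every training point.
scores = X @ clf.coef_.ravel() + clf.intercept_[0]
assert np.array_equal(scores >= cutoff, clf.predict_proba(X)[:, 1] >= tau)
```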

The package currently fully supports model training and embedding for continuous outcomes across all ML methods, as demonstrated in the example notebooks. Binary classification is fully supported for learned constraints. Multi-class classification support is in development.

Citation

Our software can be cited as:

  @misc{OptiCL,
    author = "Donato Maragno and Holly Wiberg",
    title = "OptiCL: Mixed-integer optimization with constraint learning",
    year = 2021,
    url = "https://github.com/hwiberg/OptiCL/"
  }

Get in touch!

Our package is under active development. We welcome any questions or suggestions. Please submit an issue on GitHub, or reach us at [email protected] and [email protected].
