Modular Probabilistic Programming on MXNet

Overview

MXFusion

Tutorials | Documentation | Contribution Guide

MXFusion is a modular deep probabilistic programming library.

With MXFusion Modules you can use state-of-the-art inference techniques for specialized probabilistic models without needing to implement those techniques yourself. MXFusion helps you rapidly build and test new methods at scale, by focusing on the modularity of probabilistic models and their integration with modern deep learning techniques.

MXFusion uses MXNet as its computational platform to bring the power of distributed, heterogeneous computation to probabilistic modeling.
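
As a quick taste of the API, here is a minimal sketch based on the getting-started tutorial: it defines a Gaussian model with unknown mean and variance and fits it with gradient-based MAP inference. The class names (Normal, PositiveTransformation, GradBasedInference, MAP) are taken from that tutorial and may change between releases, so treat this as illustrative rather than authoritative.

import mxnet as mx
from mxfusion import Model, Variable
from mxfusion.components.variables import PositiveTransformation
from mxfusion.components.distributions import Normal
from mxfusion.inference import GradBasedInference, MAP

# Model: 100 observations from a Normal with unknown mean and variance.
m = Model()
m.mu = Variable()                                        # mean (unconstrained)
m.s = Variable(transformation=PositiveTransformation())  # variance (kept positive)
m.Y = Normal.define_variable(mean=m.mu, variance=m.s, shape=(100,))

# Inference: gradient-based MAP estimation of mu and s from observed data.
data = mx.nd.random.normal(loc=2.0, scale=1.0, shape=(100,))
infr = GradBasedInference(inference_algorithm=MAP(model=m, observed=[m.Y]))
infr.run(Y=data, learning_rate=0.1, max_iter=1000, verbose=True)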

Installation

Dependencies / Prerequisites

MXFusion's primary dependencies are MXNet >= 1.3 and NetworkX >= 2.1. See requirements.

Supported Architectures / Versions

MXFusion is tested on Python 3.4+ on macOS and Linux.

Installation of MXNet

There are multiple PyPI packages of MXNet. A straightforward CPU-only installation can be done with:

pip install mxnet

For an installation with GPU or MKL support, detailed instructions can be found on the MXNet site.
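
For example, a CUDA build can be installed by choosing the package whose suffix matches your CUDA toolkit (the exact package names are listed on the MXNet site; mxnet-cu101 below is only an illustration for CUDA 10.1):

pip install mxnet-cu101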

pip

If you just want to use MXFusion and not modify the source, you can install it through pip:

pip install mxfusion

From source

To install MXFusion from source, clone the repository and run the following from the top-level directory:

pip install .
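
If you plan to modify the source, an editable install keeps your working copy on the import path (standard pip behaviour, nothing MXFusion-specific):

pip install -e .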

Where to go from here?

Tutorials

Documentation

Contributions

Community

We welcome your contributions and questions and are working to build a responsive community around MXFusion. Feel free to file a GitHub issue if you find a bug or want to request a new feature.

Contributing

Have a look at our contributing guide. Thanks for your interest!

Points of contact for MXFusion are:

  • Eric Meissner (@meissnereric)
  • Zhenwen Dai (@zhenwendai)

License

MXFusion is licensed under Apache 2.0. See LICENSE.

Comments
  • LBFGS optimizer

    Issue #, if available: #75

    Description of changes: This is a draft pull request to add an LBFGS optimizer for the optimization of deterministic loss functions. I found this works much better than the current default optimizer for vanilla GPs.

    If you don't object to the approach I use here, I will take the time to add tests and tidy up the code.

    By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.

    opened by marpulli 5
  • Copy over module algorithms correctly

    The module algorithms dictionary is expected to be a list containing tuples but after cloning a module this is not the case. This PR fixes that.

    By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.

    opened by marpulli 5
  • Changed transpose property to function.

    This stops the debugger accidentally creating new variables on access.

    Description of changes: When using an interactive debugger (e.g. PyCharm), inspecting any Variable object invokes the property T which has the side effect of creating new variables that are not part of the graph. By changing this to a function .transpose() the functionality is maintained (at the slight expense of ease of use) whilst preventing this from happening.

    By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.

    opened by tdiethe 5
  • GP replicate_self fixes

    This fixes #183 (but maybe you want to copy kernels in a different way, as discussed yesterday). It also fixes a few other issues with cloning this specific GP module and adds tests.

    By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.

    opened by marpulli 4
  • DGP implementation

    This PR adds an implementation of DGPs.

    I have added a marginalise_conditional_distribution method to the conditional GP class which is used both by the deep GP and the SVI GP.

    I chose not to explicitly represent the conditional p(f|u) as I couldn't work out how to represent it while keeping the marginalization efficient.

    By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.

    opened by marpulli 4
  • Move constants in InferenceParameters into ParameterDict

    Description of changes: Move MXNet-based constants from the InferenceParameters._constants dictionary (which still exists and keeps scalar/native constants) into the ParameterDict where other Parameters are kept.

    This also removes the need for the separate MXNet constants file at serialization time!

    (Feel free to ignore the documentation change here, it's a remnant of the larger docs change I made in another PR and I'll try to make sure the other one is the one that gets kept.)

    By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.

    enhancement 
    opened by meissnereric 4
  • Implement common MXNet operators (dot product, diag, etc.) in MXFusion

    These should be implemented as stateless functions.

    Create a class that takes in variables and returns a function evaluation (see the usage sketch after this item).

    Basic - addition, subtraction, multiplication, division of variables

    Elementwise - square, exponentiation, log

    Aggregation - sum, mean, prod

    Matrix ops - dot product, diag

    Matrix manipulation - reshape, transpose

    enhancement 
    opened by meissnereric 4
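
    A minimal usage sketch of what such operators might look like once implemented (the import path mxfusion.components.functions.operators and the function names dot and transpose are assumptions, not confirmed API; see the documentation for the actual interface):

    from mxfusion import Model, Variable
    from mxfusion.components.functions.operators import dot, transpose

    m = Model()
    m.A = Variable(shape=(3, 2))
    m.x = Variable(shape=(2, 1))
    # Operators are stateless: they take model variables and return a new
    # variable representing the function evaluation in the model graph.
    m.y = dot(m.A, m.x)
    m.At = transpose(m.A)
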
  • Folder structure changes and renaming

    It contains the following changes:

    • Folder structure changes and renaming according to our discussion.
    • Extend Mark's multivariate normal distribution implementation to allow general broadcasting.

    @marpulli I changed the behavior of your log_pdf and kl_divergence implementation to return an array of the size of mean without the last dimension. This is consistent with our other distributions.

    By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.

    opened by zhenwendai 3
  • The monthly cron build of Travis-CI on master fails.

    The monthly cron build of Travis-CI on master branch fails, because it tries to deploy again. Here is the link to a build: https://travis-ci.org/amzn/MXFusion/jobs/453598240.

    bug 
    opened by zhenwendai 3
  • Uniform and Laplace distributions

    Issue #, if available:

    Description of changes:

    Uniform and Laplace distributions + tests

    By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.

    enhancement 
    opened by tdiethe 3
  • Added Bernoulli distribution + tests

    Issue #, if available: #21

    Description of changes: Added Bernoulli distribution + tests

    By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.

    enhancement 
    opened by tdiethe 3
  • Switch to deep numpy as backend.

    Hi,

    I'm a member of the Deep Numpy dev team (a new MXNet front-end API with a NumPy-like interface: https://numpy.mxnet.io/index.html), and I'm currently working on its random module, which, as the name suggests, behaves exactly like NumPy, including broadcastable parameters and output shapes. For example: https://github.com/apache/incubator-mxnet/pull/15858

    Also, the sampling backend for rejection sampling has been entirely rewritten to remove the while loop from the GPU kernel. According to my profiling, the new version is ten times faster than nd.random on GPU, at the cost of a tiny increase in memory usage (not merged yet): https://github.com/apache/incubator-mxnet/issues/15928

    We could have some further discussion if you are interested in switching to Deep Numpy's random sampling backend, or in migrating MXFusion to the Deep Numpy API :-)

    opened by xidulu 0
  • GP Likelihood classes

    Is your feature request related to a problem? Please describe.

    The SVGP and DGP modules should be able to work with alternative likelihoods. They are both currently implemented for Gaussian likelihoods only. The ability to switch out likelihoods would make things like #126 much simpler with less code duplication.

    Describe the solution you'd like

    Currently:

    m = Model()
    m.X = Variable()
    m.noise_var = Variable(transformation=PositiveTransformation())
    m.Y = SVGP.define_variable(m.X, kern, m.noise_var...)
    

    I'd like something more like

    m = Model()
    m.X = Variable()
    m.noise_var = Variable(transformation=PositiveTransformation())
    m.likelihood = NormalGPLikelihood(variance)
    # This could also be: 
    # m.likelihood = BernoulliGPLikelihood()
    m.Y = SVGP.define_variable(m.X, kern, m.likelihood...)
    

    These likelihoods would need to be able to compute the expected log-likelihood, E_{p(f)}[ log p(y|f) ], where p(f) is Gaussian.

    class BernoulliGPLikelihood(GPLikelihood):
        # This class would also contain the transformation of the
        # latent function to the interval [0, 1], I think.
        def expectation(self, mean, variance, data):
            # This method would implement the quadrature rule to do this here.
            ...

    class NormalGPLikelihood(GPLikelihood):
        def expectation(self, mean, variance, data):  # think of a better name!
            # This method could use the analytical expression.
            ...


    Describe alternatives you've considered

    Creating a new class might not be necessary; we might be able to reuse the current distribution objects directly.

    I would be happy to implement this, if we come up with a design which people are happy with first.

    opened by marpulli 0
  • Inference.run throws when passing Variables of type PARAMETER as 'constants' argument

    Describe the bug

    When passing Variables of type PARAMETER as 'constants' argument to the constructor of Inference (or child classes), this line throws during the execution of Inference's run method since the Variables that were passed as 'constants' are in self._var_trans but not in the 'kw' input argument.

    Expected behavior

    Execution does not throw and Variables passed as 'constants' argument to the Inference constructor, even if they are of type PARAMETER, are treated as constants and not optimized in training.

    Desktop:

    • OS: Ubuntu 18.04.2
    • Python version: 3.6
    • MXNet version: 1.4.1
    • MXFusion version: 0.3.1
    • MXNet context: CPU
    bug 
    opened by pabfer 2
  • Execution throws if a Kernel Variable is set to CONSTANT

    Describe the bug

    If a kernel Variable is of CONSTANT type, fetch_parameters throws here since CONSTANT Variables do not make it into the params input argument.

    Expected behavior

    Execution does not throw, and kernel Variables that are of type CONSTANT are kept constant and not optimized in training.

    Desktop:

    • OS: Ubuntu 18.04.2
    • Python version: 3.6
    • MXNet version: 1.4.1
    • MXFusion version: 0.3.1
    • MXNet context: CPU
    bug 
    opened by pabfer 1
  • Uninformative error in MinibatchInferenceLoop when batch_size is larger than the dataset size

    Describe the bug

    If batch_size for MinibatchInferenceLoop is larger than the dataset size, this line throws due to division by zero.

    Expected behavior

    Possible mitigations are:

    • Option No. 1: Default batch size to min("requested batch size", "dataset size") and issue a warning if "requested batch size" > "dataset size" (a rough sketch follows this item).
    • Option No. 2: Throw with a message stating why it's throwing and how to fix the issue.

    Desktop:

    • OS: Ubuntu 18.04.2
    • Python version: 3.6
    • MXNet version: 1.4.1
    • MXFusion version: 0.3.1
    • MXNet context: CPU
    bug Easy 
    opened by pabfer 1
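
    A rough sketch of mitigation option No. 1 (hypothetical helper name, not MXFusion code):

    import warnings

    def effective_batch_size(requested, dataset_size):
        # Clamp the batch size to the dataset size and warn, rather than
        # letting the loop later divide by zero.
        if requested > dataset_size:
            warnings.warn("batch_size (%d) is larger than the dataset size (%d); "
                          "using the dataset size instead." % (requested, dataset_size))
            return dataset_size
        return requested
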
Releases (v0.3.1)
  • v0.3.1 (May 30, 2019)

    • Added SVGP regression notebook.
    • Updated the VAE and GP regression notebook.
    • Removed the dependency on scikit-learn.
    • Moved the parameters of Gluon blocks to be controlled by MXFusion.
    • Fixed a bug in mini-batch learning.
    • Extended the SVGPRegression module to take samples as input variables.
    • Documentation and stylistic edits.
    • Merged in the PILCO changes.
  • v0.3.0 (Feb 20, 2019)

    The bigger changes are:

    • #133 Changing variable broadcasting for factors
    • #153 Simplify serialization to use a simple zip file

    Other than that, it's mostly bug fixes and documentation changes.

    • #144 Add details to inference and serialization documentation
    • #148 Fix a bug for SVGP regression with minibatch training
    • #81 Reduce num_samples in uniform non-mock test to 1000
    • #137 Changed transpose property to function.
    • #135 Bug fix in function evaluation
  • v0.2.1 (Nov 9, 2018)

    • Add the tutorial for Gaussian process regression.
    • Fix empty operator bug.
    • Fix bug to allow the same variable for multiple inputs to a factor.
    • Add module serialization.
    • Fix the bug that caused documentation compilation to fail.
    • Fix a bug where the inference methods of GP modules did not handle samples.
    • Update issue templates.
    • Add license headers to all files.
    • Add the getting started notebook.
    • Remove the dependency on future.
    • Update the Inference documentation.
    • Implement Dirichlet distribution.
    • Add logistic variable transformation.
    • Implement Expectation inference.
    • Fix the bugs related to dtype.
    • Validate shape of array variables.
    • Fix divide by zero error if max_iter < n_prints.
    • Add multiply kernel.
  • v0.2.0 (Oct 19, 2018)

    • Improve the printed messages of the gradient loop and fix a bug in GluonFunctionEvaluation
    • Add Gamma distribution
    • GP Modules enhancement
    • Implement Module, GPRegression, SparseGPRegression and SVGPRegression…
    • Update the README
    • Add score function for variational inference
    • Refactor the inference interface for modules
    • Add score function inference
    • Implement common MXNet operators (dot product, diag, etc.) in MXFusion
    • Add support for MXNet operators.
    • Clean up kernel functions and function wrappers for copying
    • Implement the base class MXFusionFunction
Owner
Amazon