Copyright © Meta Platforms, Inc.

This source code is licensed under the MIT license found in the LICENSE.txt file in the root directory of this source tree.

Overview

FlowTorch is a PyTorch library for learning and sampling from complex probability distributions using a class of methods called Normalizing Flows.

Installing

An easy way to get started is to install from source:

git clone https://github.com/facebookincubator/flowtorch.git
cd flowtorch
pip install -e .
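
A minimal, illustrative training sketch (the particular bijector, target distribution, and hyperparameters below are assumptions chosen for illustration, not the project's canonical example):

import torch
import flowtorch.bijectors as bij
import flowtorch.distributions as dist

# Base distribution: a standard normal over 2 dimensions.
base = torch.distributions.Independent(
    torch.distributions.Normal(torch.zeros(2), torch.ones(2)), 1
)

# Bijectors are specified lazily; shapes are bound when the Flow is constructed.
bijector = bij.AffineAutoregressive()
flow = dist.Flow(base, bijector)

# Fit the flow to samples from a target distribution by maximum likelihood.
target = torch.distributions.Independent(
    torch.distributions.Normal(torch.full((2,), 1.0), torch.full((2,), 0.5)), 1
)
optimizer = torch.optim.Adam(flow.parameters(), lr=5e-3)
for _ in range(1000):
    optimizer.zero_grad()
    loss = -flow.log_prob(target.sample((128,))).mean()
    loss.backward()
    optimizer.step()

Trained this way, flow.sample((1000,)) should draw samples that approximate the target.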

Further Information

We refer you to the FlowTorch website for more information about installation, using the library, and becoming a contributor.

Comments
  • Ported Jacobian and inverse tests for Bijector from Pyro

    This PR ports the two most important (and complex) tests from Pyro for bijectors: comparing the numerical Jacobian to the analytical one, and confirming that the Bijector.inverse method is correct for invertible bijectors.

    enhancement CLA Signed Merged 
    opened by stefanwebb 20
  • Lazy parameters and bijectors with metaclasses

    Motivation

    Shape information for a normalizing flow only becomes known when the base distribution has been specified. We have been searching for an ideal solution to express the delayed instantiation of Bijector and Params for this purpose. Several possible solutions are outlined in #57.

    Changes proposed

    The purpose of this PR is to showcase a prototype solution that uses metaclasses to express delayed instantiation. It works by intercepting .__call__ when a class is instantiated: if only partial arguments are given to .__init__, a lazy wrapper around the class and the bound arguments is returned; if all arguments are given, the actual object is initialized. The lazy wrapper can have additional arguments bound to it, and it only becomes non-lazy once all the arguments are filled in (or have defaults). A rough sketch of the idea follows.
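
    A minimal sketch of the idea in plain Python, assuming simplified names (LazyMeta, _Lazy, and the toy Bijector below are illustrative, not the actual FlowTorch classes):

    import inspect


    class _Lazy:
        """Holds a class plus partially bound arguments; call it to bind more."""

        def __init__(self, cls, args, kwargs):
            self.cls, self.args, self.kwargs = cls, args, kwargs

        def __call__(self, *args, **kwargs):
            # Re-enters LazyMeta.__call__, so the result stays lazy until complete.
            return self.cls(*self.args, *args, **{**self.kwargs, **kwargs})


    class LazyMeta(type):
        """Intercept instantiation and defer it until all __init__ arguments are bound."""

        def __call__(cls, *args, **kwargs):
            try:
                # `None` stands in for `self`; raises TypeError if required arguments are missing.
                inspect.signature(cls.__init__).bind(None, *args, **kwargs)
            except TypeError:
                return _Lazy(cls, args, kwargs)  # partial arguments -> lazy wrapper
            return super().__call__(*args, **kwargs)  # all arguments -> real object


    class Bijector(metaclass=LazyMeta):
        def __init__(self, shape, params=None):
            self.shape, self.params = shape, params

    With this sketch, Bijector(params=...) returns a lazy wrapper, and calling that wrapper later with shape=... produces the fully initialized object.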

    enhancement CLA Signed refactor 
    opened by stefanwebb 16
  • Docusaurus v2/API docs integration + Meta rebranding

    Motivation

    The API docs are currently lacking content and use an inflexible system to specify which modules to include.

    Also, I am unable to make the repo public until I have rebranded FB as Meta Platforms.

    Changes proposed

    I have integrated the new API markdown autogen tool with Docusaurus v2 styling into the website. It uses a general configuration file with regular expressions to specify what to include/exclude, and displays a box/label for each symbol, plus its signature (if it has one) and its raw docstring.

    The remaining tasks are parsing/formatting the docstring, adding symbol lists to module pages, and some small cosmetic fixes.

    I also completed the Meta rebranding in the copyright notices etc.

    Test Plan

    cd website
    yarn build
    yarn serve
    
    CLA Signed 
    opened by stefanwebb 12
  • Autogenerating imports for `flowtorch.parameters` and `flowtorch.bijectors`

    Motivation

    It is tiresome to have to add new components to `__init__.py` for bijectors, distributions, and parameters. We should be able to generate it automatically!

    Changes proposed

    Autogen for distributions was completed in a previous PR. This one completes it for parameters and bijectors.

    I also uncovered and fixed a bug in how utils.list_bijectors(), utils.list_distributions(), and utils.list_parameters() were working.
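
    A minimal sketch of how such autogeneration can work in principle (the helper name and the exact filtering are assumptions, not FlowTorch's actual mechanism):

    import importlib
    import inspect
    import pkgutil


    def discover_subclasses(package, base_class):
        """Collect every subclass of `base_class` defined in `package`'s modules."""
        found = {}
        for module_info in pkgutil.iter_modules(package.__path__):
            module = importlib.import_module(f"{package.__name__}.{module_info.name}")
            for name, obj in inspect.getmembers(module, inspect.isclass):
                if issubclass(obj, base_class) and obj is not base_class:
                    found[name] = obj
        return found

    An __init__.py can then do something like globals().update(discover_subclasses(...)) and derive __all__ from the returned names.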

    CLA Signed Merged 
    opened by stefanwebb 10
  • First sample scripts

    Motivation

    We would like a number of example scripts to demonstrate how to use FlowTorch.

    Changes proposed

    I have created a new folder, /samples, and added the simple example from the landing page of the website. At the moment it is a Python script, although I think in the future the samples will be converted into Jupyter notebooks that are mirrored on Colab.

    Test Plan

    The sample plots figures that demonstrate learning is working.

    CLA Signed Merged 
    opened by stefanwebb 10
  • Parameterless bijectors

    This PR migrates code from pyro.distributions.transforms and torch.distributions.transforms for parameterless bijectors.

    These are easy so I wanted to get them all over now!
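
    For illustration, a parameterless (fixed) bijector only needs its element-wise transform, its inverse, and its log-Jacobian; the sketch below is illustrative rather than the migrated code, and assumes the last dimension is the event dimension:

    import torch


    class Exp:
        """Fixed bijector y = exp(x) with no learnable parameters."""

        def forward(self, x):
            return x.exp()

        def inverse(self, y):
            return y.log()

        def log_abs_det_jacobian(self, x, y):
            # dy/dx = exp(x) = y, so log|det(J)| sums x over the event dimension.
            return x.sum(-1)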

    enhancement CLA Signed Merged 
    opened by stefanwebb 10
  • Empty params class: `flowtorch.params.Empty`

    This PR adds a flowtorch.params.Empty class that will be used for flowtorch.bijectors.FixedBijector bijectors like Sigmoid, Exp, etc. that don't have any learnable parameters.

    I have fixed a number of other things in order to get all the tests running!

    enhancement CLA Signed 
    opened by stefanwebb 10
  • Autoregressive Bijector type

    Motivation

    See #22 and #6.

    Changes proposed

    This PR implements a new bijectors.Autoregressive meta bijector. We then refactor bijectors.AffineAutoregressive as a class that inherits from bijectors.Affine and bijectors.Autoregressive.

    This change makes it easy to implement new autoregressive bijectors, like spline and neural autoregressive flows. All you have to do is implement the corresponding element-wise operator and inherit from the two base classes, as sketched below.
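
    A rough sketch of the pattern (class bodies are elided; only the names bijectors.Affine, bijectors.Autoregressive, and bijectors.AffineAutoregressive come from this PR):

    class Affine:
        """Element-wise operator: y = loc + scale * x (details elided)."""


    class Autoregressive:
        """Meta bijector: conditions each output dimension on the previous inputs (details elided)."""


    class AffineAutoregressive(Affine, Autoregressive):
        """Autoregressive flow whose element-wise transform is affine."""


    # A new autoregressive bijector only needs its own element-wise operator, e.g.
    # class SplineAutoregressive(Spline, Autoregressive): ...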

    CLA Signed Merged 
    opened by stefanwebb 9
  • Test that type hints are present for all Bijector classes' methods

    Motivation

    mypy is excellent for checking types and preventing bugs; however, it is not applied if type hints aren't declared for a function, method, etc. Enforcing this via a unit test should lead to better code!

    Changes proposed

    I've written a unit test that raises an exception when a method's arguments do not have type hints (a sketch follows). I also added stubs for additional tests on a Bijector/Params definition.
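
    A sketch of what such a test can look like (the test name is made up, and the Bijector import path is an assumption):

    import inspect

    from flowtorch.bijectors import Bijector


    def test_bijector_methods_have_type_hints():
        for name, method in inspect.getmembers(Bijector, inspect.isfunction):
            signature = inspect.signature(method)
            for arg_name, arg in signature.parameters.items():
                if arg_name in ("self", "args", "kwargs"):
                    continue
                assert arg.annotation is not inspect.Parameter.empty, (
                    f"Bijector.{name}() argument '{arg_name}' has no type hint"
                )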

    CLA Signed unit tests 
    opened by stefanwebb 9
  • Fixes pypi release, configured against test pypi

    Summary: Separates out a PyPI release workflow based on GitHub releases (these create tags, so we don't get dev version numbers from setuptools_scm).

    Differential Revision: D28419348

    CLA Signed fb-exported Merged 
    opened by feynmanliang 9
  • CI installs stable PyTorch release

    Our CI was installing the nightly build of PyTorch, since 1.8.1 hadn't been released and we needed newly developed features in torch.distributions.constraints.

    Now that 1.8.1 is out, I have changed the config file to install the stable release.

    This PR contains the same changes as the flowtorch.params.Empty one so it can pass tests. @feynmanliang, could you please merge the other one first? Then only the relevant changes should appear here.

    CLA Signed Merged 
    opened by stefanwebb 9
  • Multivariate Bijectors Tutorial issue

    Issue Description

    The Multivariate Bijectors tutorial notebook has an issue: someone hit a keyboard interrupt and so it's not complete.

    Steps to Reproduce

    No steps to reproduce are needed; here is a snapshot straight from the notebook on GitHub (https://github.com/facebookincubator/flowtorch/blob/main/tutorials/multivariate_bijections.ipynb). [screenshot omitted]

    Expected Behavior

    Users should expect the tutorial to be complete.

    opened by maulberto3 1
  • Issue with log_prob values not exported to CUDA

    Issue Description

    I am not able to get all the data onto the device (CUDA). I am facing a problem at loss = -dist_y.log_prob(data).mean(). It looks like the data can't be transferred to the GPU. Do we need to register the data as a buffer and work around it?

    Error: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:1! (when checking argument for argument mat1 in method wrapper_addmm)

    Steps to Reproduce

    Please provide steps to reproduce the issue attaching any error messages and stack traces.

    import torch

    # data_train, t_steps, device, dist_y and optimizer come from the surrounding setup
    dataset = torch.tensor(data_train, dtype=torch.float)
    trainloader = torch.utils.data.DataLoader(dataset, batch_size=1024)
    for steps in range(t_steps):
        step_loss = 0
        for i, data in enumerate(trainloader):
            data = data.to(device)
            if i == 0:
                print(data.shape)
            try:
                optimizer.zero_grad()
                # error raised here: the flow's parameters and `data` are on different devices
                loss = -dist_y.log_prob(data).mean()
                loss.backward()
                optimizer.step()
            except ValueError as e:
                print('Error')
                print('Skipping that batch')
    

    Expected Behavior

    What did you expect to happen?

    Matrices should be computed on the CUDA device without a conflict from data being in two different places.

    System Info

    Please provide information about your setup

    • PyTorch version (run print(torch.__version__))
    • Python version

    Additional Context
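
    A possible workaround sketch (not part of the original issue; the two-dimensional event shape and the bijector choice are assumptions): build the base distribution directly on the target device and move the flow's learnable parameters there before constructing the optimizer.

    import torch
    import flowtorch.bijectors as bij
    import flowtorch.distributions as dist

    device = torch.device("cuda:1")

    # Build the base distribution directly on the target device so its tensors live there.
    base = torch.distributions.Independent(
        torch.distributions.Normal(torch.zeros(2, device=device),
                                   torch.ones(2, device=device)), 1
    )
    dist_y = dist.Flow(base, bij.AffineAutoregressive())

    # Move the flow's learnable parameters to the same device (Flow.parameters() is
    # mentioned in the 0.8 release notes); do this before constructing the optimizer.
    for p in dist_y.parameters():
        p.data = p.data.to(device)

    data = torch.randn(1024, 2, device=device)
    loss = -dist_y.log_prob(data).mean()  # all tensors now share one device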

    opened by bigmb 2
  • [WIP] Conv1x1

    Motivation

    Proposes a 1x1 convolution bijector.
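
    For context, a hedged sketch of the underlying idea (as in the Glow paper mentioned elsewhere on this page), separate from the proposed implementation: every pixel's channel vector is multiplied by the same invertible C x C matrix, so the log-determinant scales with the spatial size.

    import torch

    C, H, W = 10, 20, 20
    weight = torch.linalg.qr(torch.randn(C, C))[0]  # random orthogonal (hence invertible) init
    x = torch.randn(1, C, H, W)

    y = torch.einsum("oi,bihw->bohw", weight, x)    # a 1x1 convolution is a per-pixel matrix multiply
    log_abs_det_jacobian = H * W * torch.slogdet(weight)[1]

    x_rec = torch.einsum("oi,bihw->bohw", torch.inverse(weight), y)
    assert torch.allclose(x, x_rec, atol=1e-4)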

    Test Plan

    import torch

    from flowtorch.bijectors import Conv1x1Bijector


    def test(LU_decompose, zero_init):
        c = Conv1x1Bijector(LU_decompose=LU_decompose, zero_init=zero_init)
        c = c(shape=torch.Size([10, 20, 20]))  # bind the event shape to the lazy bijector
        for p in c.parameters():
            p.data += torch.randn_like(p) / 5  # move parameters away from their initialization

        x = torch.randn(1, 10, 20, 20)
        y = c.forward(x)
        yp = y.detach_from_flow()
        xp = c.inverse(yp)
        # the round trip x -> y -> x should reconstruct the input
        assert (xp / x - 1).norm() < 1e-2


    for LU_decompose in (True, False):
        for zero_init in (True, False):
            test(LU_decompose, zero_init)
    
    

    Important

    This PR is branched out from the coupling layer. I'll update the branch once the review of the coupling layer is completed.

    CLA Signed 
    opened by vmoens 0
  • Split Bijector

    Motivation

    We introduce the Split Bijector, which splits a tensor in half, processes one half through a sequence of transformations, and normalizes the other.

    Changes proposed

    The new class first splits the tensor, then passes the outputs to _param_fn and then to the transform itself. The introduction of the _forward_pre_ops and _inverse_pre_ops methods is necessary because, in the inverse case, we need to first pass the input through the inverse of the transform and then through the convolutional layer that gives us the normalizing constants. This breaks the _param_fn(...) -> _inverse(...) logic, as we need to do something before _param_fn. As this might be the case for the forward pass too, we introduce a similar _forward_pre_ops method.

    Types of changes

    • [ ] Docs change / refactoring / dependency upgrade
    • [ ] Bug fix (non-breaking change which fixes an issue)
    • [x] New feature (non-breaking change which adds functionality)
    • [ ] Breaking change (fix or feature that would cause existing functionality to change)

    Checklist

    • [x] My code follows the code style of this project.
    • [x] My change requires a change to the documentation.
    • [ ] I have updated the documentation accordingly.
    • [x] I have read the CONTRIBUTING document.
    • [ ] I have added tests to cover my changes.
    • [ ] All new and existing tests passed.
    • [ ] The title of my pull request is a short description of the requested changes.
    CLA Signed 
    opened by vmoens 1
  • Split bijector

    A splitting bijector splits an input x into two equal parts, x1 and x2 (see, for instance, the Glow paper).

    Of those, only x1 is passed to the remaining part of the flow. x2 on the other hand is "normalized" by a location and scale determined by x1. The transform usually looks like this

    def _forward(self, x):
        x1, x2 = x.chunk(2, -1)
        loc, scale = some_parametric_fun(x1)
        x2 = (x2 - loc) / scale
        log_abs_det_jacobian = scale.reciprocal().log().sum()  # part of the jacobian that accounts for the transform of x2
        log_abs_det_jacobian += self.normal.log_prob(x2).sum()  # since x2 will disappear, we can include its prior log-lik here
        return x1, log_abs_det_jacobian
    

    The _inverse is done like this

    def _inverse(self, y):
        x1 = y
        loc, scale = some_parametric_fun(x1)
        x2 = torch.randn_like(x1)  # since we fit x2 to a gaussian in forward
        log_abs_det_jacobian = self.normal.log_prob(x2).sum()
        x2 = x2 * scale + loc
        log_abs_det_jacobian += scale.reciprocal().log().sum()
        return torch.cat([x1, x2], -1), log_abs_det_jacobian
    

    However, I personally find this coding very confusing. First and foremost, it messes up the logic y = flow(x) -> dist.log_prob(y). What if we don't want a normal? That seems orthogonal to the bijector's responsibility to me. Second, it includes in the LADJ a normal log-likelihood, which should come from the prior. Third, it makes the _inverse stochastic, which should not be the case. Finally, it has an input of -- say -- dimension d and an output of d/2 (and conversely for _inverse).

    For some models (e.g. Glow), when generating data, we don't sample from a Gaussian with unit variance but from a Gaussian with some decreased temperature (e.g. an SD of 0.9 or something). With this logic, we'd have to tell every split layer in a flow to modify the self.normal scale!

    What I would suggest is this: we could use SplitBijector as a wrapper around another bijector. It would work like this:

    class SplitBijector(Bijector):
        def __init__(self, bijector):
             ...
             self.bijector = bijector
    
        def _forward(self, x):
            x1, x2 = x.chunk(2, -1)
            loc, scale = some_parametric_fun(x1)
            y2 = (x2 - loc) / scale
            log_abs_det_jacobian = scale.reciprocal().log().sum()  # part of the jacobian that accounts for the transform of x2
            y1 = self.bijector.forward(x1)
            log_abs_det_jacobian += self.bijector.log_abs_det_jacobian(x1, y1)
        y = torch.cat([y1, y2], -1)  # reassemble along the same dimension used by chunk
            return y, log_abs_det_jacobian
    

    The _inverse would follow the same pattern. Of course, bijector must have the same input and output space! That way, we solve all of our problems: input and output spaces match, nothing weird happens with a nested normal log-density, the prior density is only evaluated outside the bijector, and one can tweak it at will without caring about what happens inside the bijector.

    enhancement 
    opened by vmoens 1
Releases (0.8)
  • 0.8 (Apr 27, 2022)

    • Fixed a bug in distributions.Flow.parameters() where it returned duplicate parameters
    • Several tutorials converted from .mdx to .ipynb format in anticipation of new tutorial system
    • Removed yarn.lock
  • 0.7 (Apr 25, 2022)

    This release adds two new minor features.

    A new class flowtorch.bijectors.Invert can be used to swap the forward and inverse operators of a Bijector. This is useful for turning, for example, an Inverse Autoregressive Flow (IAF) into a Masked Autoregressive Flow (MAF), as in the sketch below.
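
    A minimal sketch of the intended usage (AffineAutoregressive stands in for any IAF-style bijector here):

    import flowtorch.bijectors as bij

    iaf = bij.AffineAutoregressive()  # IAF-style: fast sampling, slower density evaluation
    maf = bij.Invert(iaf)             # swapped: fast density evaluation, slower sampling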

    Bijector objects are now nn.Modules, which, amongst other benefits, allows easy saving and loading of state.

  • 0.6 (Mar 3, 2022)

    This small release fixes a bug in bijectors.ops.Spline where the sign of log(det(J)) was inverted for the .inverse method. It also fixes the unit tests so that they pick up this error in the future.

  • 0.5 (Feb 3, 2022)

    In this release, we add caching of intermediate values for Bijectors.

    What this means is that you can often reduce computation by calculating log|det(J)| at the same time as y = f(x). It's also useful for performing variational inference on Bijectors that don't have an explicit inverse. The mechanism by which this is achieved is a subclass of torch.Tensor called BijectiveTensor that bundles together (x, y, context, bundle, log_det_J).
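
    A conceptual sketch, using only method names that appear elsewhere on this page (e.g. the shape-binding call and detach_from_flow() from the Conv1x1 test plan); whether a given call hits the cache depends on the implementation:

    import torch
    import flowtorch.bijectors as bij

    bijector = bij.AffineAutoregressive()(shape=torch.Size([2]))  # bind an event shape
    x = torch.randn(5, 2)

    y = bijector.forward(x)                      # y is a BijectiveTensor: it carries x, the context and log|det(J)|
    ladj = bijector.log_abs_det_jacobian(x, y)   # can reuse the cached value instead of recomputing it
    x_again = bijector.inverse(y)                # can likewise reuse the cached x
    y_plain = y.detach_from_flow()               # recover an ordinary torch.Tensor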

    Special shout out to @vmoens for coming up with this neat solution and taking the implementation lead! Looking forward to your future contributions 🥳

  • 0.4 (Nov 18, 2021)

    Implementations of Inverse Autoregressive Flow and Neural Spline Flow.

    Basic content for website.

    Some unit tests for bijectors and distributions.

Owner
Meta Incubator
We work hard to contribute our work back to the web, mobile, big data, & infrastructure communities. NB: members must have two-factor auth.