Copyright © Meta Platforms, Inc.

This source code is licensed under the MIT license found in the LICENSE.txt file in the root directory of this source tree.

Overview

FlowTorch is a PyTorch library for learning and sampling from complex probability distributions using a class of methods called Normalizing Flows.

Installing

An easy way to get started is to install from source:

git clone https://github.com/facebookincubator/flowtorch.git
cd flowtorch
pip install -e .
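
A minimal usage sketch follows (modelled on the simple landing-page example referenced in the sample-scripts PR below; the exact class names and signatures, such as bij.AffineAutoregressive and dist.Flow, should be checked against the current API):

import torch
import flowtorch.bijectors as bij
import flowtorch.distributions as dist

# Base distribution: standard normal over two dimensions
base_dist = torch.distributions.Independent(
    torch.distributions.Normal(torch.zeros(2), torch.ones(2)), 1
)

# Lazily-specified bijector; shape information is filled in by the flow
bijector = bij.AffineAutoregressive()

# Learnable flow distribution: sample from it and score samples under it
flow = dist.Flow(base_dist, bijector)
y = flow.sample(torch.Size([100]))
log_prob = flow.log_prob(y)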

Further Information

We refer you to the FlowTorch website for more information about installation, using the library, and becoming a contributor.

Comments
  • Ported Jacobian and inverse tests for Bijector from Pyro

    This PR ports the two most important (and complex) tests from Pyro for bijectors: comparing the numerical Jacobian to the analytical one, and confirming that the Bijector.inverse method is correct for invertible bijectors.
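
    As a rough illustration of the two checks (not the actual test code; it assumes an element-wise bijector whose log_abs_det_jacobian is reported per element):

    import torch

    def check_jacobian(bijector, x, eps=1e-4):
        # Compare the analytical log|dy/dx| to a central finite difference.
        y = bijector.forward(x)
        analytic = bijector.log_abs_det_jacobian(x, y)
        numeric = torch.log(torch.abs(
            (bijector.forward(x + eps) - bijector.forward(x - eps)) / (2 * eps)
        ))
        assert torch.allclose(analytic, numeric, atol=1e-2)

    def check_inverse(bijector, x):
        # x should be recovered (up to numerical error) by inverting forward.
        y = bijector.forward(x)
        assert torch.allclose(bijector.inverse(y), x, atol=1e-5)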

    enhancement CLA Signed Merged 
    opened by stefanwebb 20
  • Lazy parameters and bijectors with metaclasses

    Motivation

    Shape information for a normalizing flow only becomes known when the base distribution has been specified. We have been searching for an ideal solution to express the delayed instantiation of Bijector and Params for this purpose. Several possible solutions are outlined in #57.

    Changes proposed

    The purpose of this PR is to showcase a prototype solution that uses metaclasses to express delayed instantiation. It works by intercepting .__call__ when a class is instantiated: if only partial arguments are given to .__init__, a lazy wrapper around the class and its bound arguments is returned; if all arguments are given, the actual object is initialized. The lazy wrapper can have additional arguments bound to it, and only becomes non-lazy when all the arguments are filled (or have defaults).
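
    A toy sketch of the idea (illustrative only, not the FlowTorch implementation; all names here are made up):

    import inspect

    class LazyMeta(type):
        def __call__(cls, *args, **kwargs):
            # If some required __init__ arguments are missing, return a lazy
            # wrapper instead of an instance; otherwise instantiate normally.
            try:
                inspect.signature(cls.__init__).bind(None, *args, **kwargs)
            except TypeError:
                return LazyWrapper(cls, args, kwargs)
            return super().__call__(*args, **kwargs)

    class LazyWrapper:
        def __init__(self, cls, args, kwargs):
            self.cls, self.args, self.kwargs = cls, args, kwargs

        def __call__(self, *args, **kwargs):
            # Bind the extra arguments; this goes through LazyMeta again, so
            # the result stays lazy until every required argument is available.
            return self.cls(*self.args, *args, **{**self.kwargs, **kwargs})

    class Bijector(metaclass=LazyMeta):
        def __init__(self, shape, hidden_dims=(32, 32)):
            self.shape, self.hidden_dims = shape, hidden_dims

    lazy = Bijector(hidden_dims=(64, 64))   # shape unknown -> lazy wrapper
    concrete = lazy(shape=(10,))            # shape supplied -> real Bijector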

    enhancement CLA Signed refactor 
    opened by stefanwebb 16
  • Docusaurus v2/API docs integration + Meta rebranding

    Motivation

    The API docs are currently lacking content and use an inflexible system to specify which modules to include.

    Also, I am unable to make the repo public until I have rebranded FB as Meta Platforms.

    Changes proposed

    I have integrated the new API markdown autogen tool with Docusaurus v2 styling into the website. It uses a general configuration file with regular expressions to specify what to include/exclude, and displays a box/label for each symbol, plus its signature (if it has one) and its raw docstring.

    The remaining tasks are parsing/formatting the docstring, adding symbol lists to module pages, and some small cosmetic fixes.

    I also completed the Meta rebranding in the copyright notices etc.

    Test Plan

    cd website
    yarn build
    yarn serve
    
    CLA Signed 
    opened by stefanwebb 12
  • Autogenerating imports for `flowtorch.parameters` and `flowtorch.bijectors`

    Motivation

    It is tiresome to have to add new components to `__init__.py` for bijectors, distributions, and parameters. We should be able to generate it automatically!

    Changes proposed

    Autogen for distributions was completed in a previous PR. This one completes it for parameters and bijectors.

    I also uncovered and fixed a bug in how utils.list_bijectors(), utils.list_distributions(), and utils.list_parameters() were working.
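
    A rough sketch of what such autogeneration can look like (not the actual FlowTorch code; names are illustrative):

    import importlib
    import inspect
    import pkgutil

    def discover(package_name, base_class):
        # Walk a subpackage, import each module, and collect the classes that
        # derive from base_class, so __init__.py no longer needs manual edits.
        found = {}
        package = importlib.import_module(package_name)
        for info in pkgutil.iter_modules(package.__path__):
            module = importlib.import_module(f"{package_name}.{info.name}")
            for name, obj in inspect.getmembers(module, inspect.isclass):
                if issubclass(obj, base_class) and obj is not base_class:
                    found[name] = obj
        return found

    # e.g. in flowtorch/bijectors/__init__.py:
    #   globals().update(discover("flowtorch.bijectors", Bijector))
    #   __all__ = sorted(discover("flowtorch.bijectors", Bijector))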

    CLA Signed Merged 
    opened by stefanwebb 10
  • First sample scripts

    Motivation

    We would like a number of example scripts to demonstrate how to use FlowTorch.

    Changes proposed

    I have created a new folder, /samples, and added the simple example from the landing page of the website. At the moment it is a Python script, although I think in the future the samples will be converted into Jupyter notebooks that are mirrored on Colab.

    Test Plan

    The sample plots figures that demonstrate learning is working.

    CLA Signed Merged 
    opened by stefanwebb 10
  • Parameterless bijectors

    This PR migrates code from pyro.distributions.transforms and torch.distributions.transforms for parameterless bijectors.

    These are easy so I wanted to get them all over now!
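
    For illustration, a parameterless (fixed) bijector only has to define its element-wise transform, inverse, and log-Jacobian; a sketch, not the library code:

    import torch

    class Exp:
        # Fixed bijector y = exp(x): nothing to learn.
        def forward(self, x):
            return x.exp()

        def inverse(self, y):
            return y.log()

        def log_abs_det_jacobian(self, x, y):
            # dy/dx = exp(x), so log|dy/dx| = x
            return x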

    enhancement CLA Signed Merged 
    opened by stefanwebb 10
  • Empty params class: `flowtorch.params.Empty`

    This PR adds a flowtorch.params.Empty class that will be used for flowtorch.bijectors.FixedBijector bijectors like Sigmoid, Exp, etc. that don't have any learnable parameters.

    I have fixed a number of other things in order to get all the tests running!

    enhancement CLA Signed 
    opened by stefanwebb 10
  • Autoregressive Bijector type

    Motivation

    See #22 and #6.

    Changes proposed

    This PR implements a new bijectors.Autoregressive meta bijector. We then refactor bijectors.AffineAutoregressive as a class that inherits from bijectors.Affine and bijectors.Autoregressive.

    This change makes it easy to implement new autoregressive bijectors, like spline and neural autoregressive flows: all you have to do is implement the corresponding element-wise operator and inherit from the two classes, as in the sketch below.
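
    Under this design, a hypothetical new autoregressive bijector could look roughly like this (the class name and constructor wiring are illustrative only; the module paths follow those mentioned elsewhere on this page):

    from flowtorch.bijectors import Autoregressive
    from flowtorch.bijectors.ops import Spline

    class SplineAutoregressive(Spline, Autoregressive):
        """Element-wise spline transform whose parameters are produced by an
        autoregressive conditioner."""
        pass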

    CLA Signed Merged 
    opened by stefanwebb 9
  • Test that type hints are present for all Bijector classes' methods

    Motivation

    mypy is excellent for checking types and preventing bugs; however, it is not applied if type hints aren't declared for a function, method, etc. Enforcing this via a unit test should lead to better code!

    Changes proposed

    I've written a unit test that will raise an exception when a method's arguments do not have type hints. I have also added stubs for additional tests on a Bijector/Params definition.
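
    A sketch of the kind of check involved (illustrative, not the actual test):

    import inspect

    def assert_fully_annotated(cls):
        # Fail if any public method has an argument or return value
        # without a type hint.
        for name, method in inspect.getmembers(cls, inspect.isfunction):
            if name.startswith("_"):
                continue
            sig = inspect.signature(method)
            for param in sig.parameters.values():
                if param.name in ("self", "cls"):
                    continue
                assert param.annotation is not inspect.Parameter.empty, \
                    f"{cls.__name__}.{name}: '{param.name}' lacks a type hint"
            assert sig.return_annotation is not inspect.Signature.empty, \
                f"{cls.__name__}.{name}: missing return annotation"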

    CLA Signed unit tests 
    opened by stefanwebb 9
  • Fixes pypi release, configured against test pypi

    Summary: Separates out a PyPI release workflow based on GitHub releases (these create tags, so we don't get dev version numbers from setuptools_scm).

    Differential Revision: D28419348

    CLA Signed fb-exported Merged 
    opened by feynmanliang 9
  • CI installs stable Pytorch release

    Our CI was installing the nightly build of PyTorch, since 1.8.1 hadn't been released and we needed newly developed features in torch.distributions.constraints.

    Now that 1.8.1 is out, I have changed the config file to install the stable release.

    This PR contains the same changes as the flowtorch.params.Empty one so it can pass tests. @feynmanliang, could you please merge the other one first? Then only the relevant changes should appear here.

    CLA Signed Merged 
    opened by stefanwebb 9
  • Multivariate Bijectors Tutorial issue

    Issue Description

    The Multivariate Bijectors tutorial notebook has an issue: someone hit a keyboard interrupt and so it's not complete.

    Steps to Reproduce

    No steps to reproduce needed; here's a snapshot straight from this GitHub repository (https://github.com/facebookincubator/flowtorch/blob/main/tutorials/multivariate_bijections.ipynb).

    Expected Behavior

    Users should expect the tutorial to be complete.

    opened by maulberto3 1
  • Issue with log_prob values not exported to Cuda

    Issue Description

    A clear and concise description of the issue. If it's a feature request, please add [Feature Request] to the title.

    Not able to get all the data onto the device (CUDA). Facing a problem at 'loss = -dist_y.log_prob(data).mean()'. It looks like the data can't be transferred to the GPU. Do we need to register the data as a buffer and work around it?

    Error: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:1! (when checking argument for argument mat1 in method wrapper_addmm)

    Steps to Reproduce

    Please provide steps to reproduce the issue attaching any error messages and stack traces.

    dataset = torch.tensor(data_train, dtype=torch.float)
    trainloader = torch.utils.data.DataLoader(dataset, batch_size=1024)
    for steps in range(t_steps):
        step_loss = 0
        for i, data in enumerate(trainloader):
            data = data.to(device)
            if i == 0:
                print(data.shape)
                # p_getsizeof(data)
            try:
                optimizer.zero_grad()
                loss = -dist_y.log_prob(data).mean()
                loss.backward()
                optimizer.step()
            except ValueError as e:
                print('Error')
                print('Skipping that batch')
    

    Expected Behavior

    What did you expect to happen?

    Matrices should be computed on the CUDA device, without a conflict from data being in two different places.

    System Info

    Please provide information about your setup

    • PyTorch Version (run print(torch.__version__))
    • Python version

    Additional Context

    opened by bigmb 2
  • [WIP] Conv1x1

    Motivation

    Proposes a 1x1 convolution bijector.

    Test Plan

    from flowtorch.bijectors import Conv1x1Bijector
    import torch
    
    
    def test(LU_decompose, zero_init):
        c = Conv1x1Bijector(LU_decompose=LU_decompose, zero_init=zero_init)
        c = c(shape=torch.Size([10, 20, 20]))
        for p in c.parameters():
            p.data += torch.randn_like(p)/5
    
        x = torch.randn(1, 10, 20, 20)
        y = c.forward(x)
        yp = y.detach_from_flow()
        xp = c.inverse(yp)
        assert (xp/x-1).norm() < 1e-2
    
        assert (xp-x).norm()
        
    for LU_decompose in (True, False):
        for zero_init in (True, False):
            test(LU_decompose, zero_init)
    
    

    Important

    This PR is branched out from the coupling layer. I'll update the branch once the review of the coupling layer is completed.

    CLA Signed 
    opened by vmoens 0
  • Split Bijector

    Motivation

    We introduce the Split Bijector, which splits a tensor in half, processes one half through a sequence of transformations, and normalizes the other.

    Changes proposed

    The new class first splits the tensor, then passes the outputs to _param_fn and then to the transform itself. The introduction of _forward_pre_ops and _inverse_pre_ops methods is necessary because, in the inverse case, we need to first pass the input through the transform's inverse and then through the convolutional layer that gives us the normalizing constants. This breaks the _param_fn(...) -> _inverse(...) logic, as we need to do something before _param_fn. As this might be the case for the forward pass too, we introduced a similar _forward_pre_ops method.

    Types of changes

    • [ ] Docs change / refactoring / dependency upgrade
    • [ ] Bug fix (non-breaking change which fixes an issue)
    • [x] New feature (non-breaking change which adds functionality)
    • [ ] Breaking change (fix or feature that would cause existing functionality to change)

    Checklist

    • [x] My code follows the code style of this project.
    • [x] My change requires a change to the documentation.
    • [ ] I have updated the documentation accordingly.
    • [x] I have read the CONTRIBUTING document.
    • [ ] I have added tests to cover my changes.
    • [ ] All new and existing tests passed.
    • [ ] The title of my pull request is a short description of the requested changes.
    CLA Signed 
    opened by vmoens 1
  • Split bijector

    A splitting bijector splits an input x into two equal parts, x1 and x2 (see for instance the Glow paper).

    Of those, only x1 is passed to the remaining part of the flow; x2, on the other hand, is "normalized" by a location and scale determined by x1. The transform usually looks like this:

    def _forward(self, x):
        x1, x2 = x.chunk(2, -1)
        loc, scale = some_parametric_fun(x1)
        x2 = (x2 - loc) / scale
        log_abs_det_jacobian = scale.reciprocal().log().sum()  # part of the jacobian that accounts for the transform of x2
        log_abs_det_jacobian += self.normal.log_prob(x2).sum()  # since x2 will disappear, we can include its prior log-lik here
        return x1, log_abs_det_jacobian
    

    The _inverse is done like this

    def _inverse(self, y):
        x1 = y
        loc, scale = some_parametric_fun(x1)
        x2 = torch.randn_like(x1)  # since we fit x2 to a gaussian in forward
        log_abs_det_jacobian = self.normal.log_prob(x2).sum()  # prior log-lik of the sampled x2
        x2 = x2 * scale + loc
        log_abs_det_jacobian += scale.reciprocal().log().sum()  # part accounting for the affine transform of x2
        return torch.cat([x1, x2], -1), log_abs_det_jacobian
    

    However, I personally find this way of coding it very confusing. First and foremost, it messes up the logic y = flow(x) -> dist.log_prob(y). What if we don't want a normal? That seems orthogonal to the bijector's responsibility to me. Second, it includes in the LADJ a normal log-likelihood, which should come from the prior. Third, it makes the _inverse stochastic, which should not be the case. Finally, it has an input of -- say -- dimension d and an output of d/2 (and conversely for _inverse).

    For some models (e.g. Glow), when generating data, we don't sample from a Gaussian with unit variance but from a Gaussian with some decreased temperature (e.g. an SD of 0.9 or something). With this logic, we'd have to tell every split layer in a flow to modify the self.normal scale!

    What I would suggest is this: we could use SplitBijector as a wrapper around another bijector. The way that would work is this:

    class SplitBijector(Bijector):
        def __init__(self, bijector):
            ...
            self.bijector = bijector

        def _forward(self, x):
            x1, x2 = x.chunk(2, -1)
            loc, scale = some_parametric_fun(x1)
            y2 = (x2 - loc) / scale
            log_abs_det_jacobian = scale.reciprocal().log().sum()  # part of the jacobian that accounts for the transform of x2
            y1 = self.bijector.forward(x1)
            log_abs_det_jacobian += self.bijector.log_abs_det_jacobian(x1, y1)
            y = torch.cat([y1, y2], -1)  # concatenate along the same dim used by chunk
            return y, log_abs_det_jacobian
    

    The _inverse would follow. Of course, the wrapped bijector must have the same input and output space! That way, we solve all of our problems: input and output spaces match, nothing weird happens with a nested normal log-density, the prior density is only evaluated outside of the bijector, and one can tweak it at will without caring about what happens inside the bijector.

    enhancement 
    opened by vmoens 1
Releases (latest: 0.8)
  • 0.8(Apr 27, 2022)

    • Fixed a bug in distributions.Flow.parameters() where it returned duplicate parameters
    • Several tutorials converted from .mdx to .ipynb format in anticipation of new tutorial system
    • Removed yarn.lock
  • 0.7(Apr 25, 2022)

    This release adds two new minor features.

    A new class, flowtorch.bijectors.Invert, can be used to swap the forward and inverse operators of a Bijector. This is useful for turning, for example, Inverse Autoregressive Flow (IAF) into Masked Autoregressive Flow (MAF).

    Bijector objects are now nn.Modules, which amongst other benefits allows easy saving and loading of state.
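
    For instance (a sketch; the usage of bij.Invert and bij.AffineAutoregressive is assumed from the description above):

    import flowtorch.bijectors as bij

    iaf = bij.AffineAutoregressive()
    maf = bij.Invert(iaf)  # forward and inverse swapped: IAF becomes a MAF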

  • 0.6(Mar 3, 2022)

    This small release fixes a bug in bijectors.ops.Spline where the sign of log(det(J)) was inverted for the .inverse method. It also fixes the unit tests so that they pick up this error in the future.

  • 0.5(Feb 3, 2022)

    In this release, we add caching of intermediate values for Bijectors.

    What this means is that you can often reduce computation by calculating log|det(J)| at the same time as y = f(x). It's also useful for performing variational inference on Bijectors that don't have an explicit inverse. The mechanism by which this is achieved is a subclass of torch.Tensor called BijectiveTensor that bundles together (x, y, context, bundle, log_det_J).

    Special shout out to @vmoens for coming up with this neat solution and taking the implementation lead! Looking forward to your future contributions 🥳

  • 0.4(Nov 18, 2021)

    Implementations of Inverse Autoregressive Flow and Neural Spline Flow.

    Basic content for website.

    Some unit tests for bijectors and distributions.

Owner
Meta Incubator
We work hard to contribute our work back to the web, mobile, big data, & infrastructure communities. NB: members must have two-factor auth.