AlgoVision - A Framework for Differentiable Algorithms and Algorithmic Supervision

Overview

This repository includes the official implementation of our NeurIPS 2021 paper "Learning with Algorithmic Supervision via Continuous Relaxations" (Paper @ ArXiv, Video @ YouTube).

algovision is a Python 3.6+ and PyTorch 1.9.0+ based library for making algorithms differentiable. It can be installed via:

pip install algovision

Applications include smoothly integrating algorithms into neural networks for algorithmic supervision, problem-specific optimization within an algorithm, and whatever your imagination allows. As algovision relies on PyTorch, it also supports CUDA, etc.

Check out the Documentation!

🌱 Intro

Deriving a loss from a smooth algorithm can be as easy as

from examples import get_bubble_sort
import torch

# Get an array (the first dimension is the batch dimension, which is always required)
array = torch.randn(1, 8, requires_grad=True)

bubble_sort = get_bubble_sort(beta=5)
result, loss = bubble_sort(array)

loss.backward()
print(array)
print(result)
print(array.grad)

Here, the loss is a sorting loss corresponding to the number of swaps in the bubble sort algorithm. But we can also define this algorithm from scratch:

from algovision import (
    Algorithm, Input, Output, Var, VarInt,                                          # core
    Let, LetInt, Print,                                                     # instructions
    Eq, NEq, LT, LEq, GT, GEq, CatProbEq, CosineSimilarity, IsTrue, IsFalse,  # conditions
    If, While, For,                                                   # control_structures
    Min, ArgMin, Max, ArgMax,                                                  # functions
)
import torch

bubble_sort = Algorithm(
    # Define the variables the input corresponds to
    Input('array'),
    # Declare and initialize all differentiable variables 
    Var('a',        torch.tensor(0.)),
    Var('b',        torch.tensor(0.)),
    Var('swapped',  torch.tensor(1.)),
    Var('loss',     torch.tensor(0.)),
    # Declare and initialize a hard integer variable (VarInt) for the control flow.
    # It can be defined in terms of a lambda expression. The required variables
    # are automatically inferred from the signature of the lambda expression.
    VarInt('n', lambda array: array.shape[1] - 1),
    # Start a relaxed While loop:
    While(IsTrue('swapped'),
        # Set `swapped` to 0 / False
        Let('swapped', 0),
        # Start an unrolled For loop. Corresponds to `for i in range(n):`
        For('i', 'n',
            # Set `a` to the `i`th element of `array`
            Let('a', 'array', ['i']),
            # Using an inplace lambda expression, we can include computations 
            # based on variables to obtain the element at position i+1. 
            Let('b', 'array', [lambda i: i+1]),
            # An If-Else statement with the condition a > b
            If(GT('a', 'b'),
                if_true=[
                    # Set the (i+1)-th element of array to a
                    Let('array', [lambda i: i + 1], 'a'),
                    # Set the i-th element of array to b
                    Let('array', ['i'], 'b'),
                    # Set swapped to 1 / True
                    Let('swapped', 1.),
                    # Increment the loss by 1 using a lambda expression
                    Let('loss', lambda loss: loss + 1.),
                ]
            ),
        ),
        # Decrement the hard integer variable n by 1
        LetInt('n', lambda n: n-1),
    ),
    # Define what the algorithm should return
    Output('array'),
    Output('loss'),
    # Set the inverse temperature beta
    beta=5,
)
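
Since the returned loss is differentiable, it can supervise an upstream model without ground-truth sorted outputs. A minimal training sketch (the network, its dimensions, and the data below are placeholder assumptions for illustration, not part of the library):

import torch

# Placeholder upstream model producing 8 scores per input (an assumption for illustration).
net = torch.nn.Sequential(torch.nn.Linear(16, 32), torch.nn.ReLU(), torch.nn.Linear(32, 8))
optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)

x = torch.randn(4, 16)                     # dummy batch (batch dimension first)
scores = net(x)
sorted_scores, loss = bubble_sort(scores)  # the algorithm defined above
loss.mean().backward()                     # reduce to a scalar before backpropagating
optimizer.step()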

👾 Full Instruction Set

The full set of modules is:

from algovision import (
    Algorithm, Input, Output, Var, VarInt,                                          # core
    Let, LetInt, Print,                                                     # instructions
    Eq, NEq, LT, LEq, GT, GEq, CatProbEq, CosineSimilarity, IsTrue, IsFalse,  # conditions
    If, While, For,                                                   # control_structures
    Min, ArgMin, Max, ArgMax,                                                  # functions
)

Algorithm is the main class. Input and Output define arguments and return values, Var defines differentiable variables, and VarInt defines non-differentiable integer variables. Eq, LT, etc. are relaxed conditions for If and While, the respective control structures; For defines bounded loops of fixed length, which are unrolled. Let sets a differentiable variable, and LetInt sets a hard integer variable. Note that hard integer variables should only be used if they are independent of the input values, though they may depend on the input shape (e.g., for reducing the number of iterations after each traversal of a For loop). Print prints for debugging purposes. Min, ArgMin, Max, and ArgMax return the element-wise min/argmin/max/argmax of a list of tensors (of equal shape).
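
Conceptually, a relaxed condition turns a hard predicate into a probability, and a relaxed If evaluates both branches and blends them by that probability. An illustrative PyTorch sketch of this idea using a logistic relaxation (a simplification; see the paper for the exact relaxations the library uses):

import torch

beta = 5.0  # inverse temperature: larger beta = closer to the hard algorithm
a = torch.tensor([0.3], requires_grad=True)
b = torch.tensor([0.1])

# Soft version of `if a > b:`: a probability instead of a hard boolean.
p = torch.sigmoid(beta * (a - b))
# Both branches are evaluated; the result is their convex combination.
result_if_true, result_if_false = a - b, torch.zeros_like(a)
result = p * result_if_true + (1 - p) * result_if_false
result.sum().backward()  # gradients flow through the condition itself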

λ Lambda Expressions

Key to defining an algorithm are lambda expressions (see here for a reference). They allow defining anonymous functions and therefore allow expressing computations in-place. In most cases in algovision, it is possible to write a value in terms of a lambda expression. The name of the used variable will be inferred from the signature of the expression. For example, lambda x: x**2 will take the variable named x and return its square at the location where the expression is written.
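
To illustrate how variable names can be inferred from a lambda's signature, here is a standalone sketch of the mechanism (illustrative only, not the library's actual internals):

import inspect

def evaluate(fn, variables):
    # Read the parameter names from the lambda's signature and look the
    # corresponding variables up in the algorithm's state.
    names = inspect.signature(fn).parameters
    return fn(*(variables[name] for name in names))

state = {'x': 3.0, 'y': 4.0}
print(evaluate(lambda x: x**2, state))         # 9.0
print(evaluate(lambda x, y: x**2 + y, state))  # 13.0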

Let('z', lambda x, y: x**2 + y) corresponds to the regular line of code z = x**2 + y. This also allows inserting complex external functions, including neural networks, as part of the lambda expression. Assuming net is a neural network, one can write Let('y', lambda x: net(x)) (corresponding to y = net(x)).
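
A minimal sketch of embedding a network this way (the network, its dimensions, and the Var initialization are assumptions for illustration):

import torch
from algovision import Algorithm, Input, Output, Var, Let

net = torch.nn.Linear(8, 8)  # placeholder network

apply_net = Algorithm(
    Input('x'),
    Var('y', torch.zeros(8)),    # assumed to broadcast over the batch dimension
    Let('y', lambda x: net(x)),  # the external function is called inside the lambda
    Output('y'),
    beta=5,
)

y = apply_net(torch.randn(1, 8))  # batch dimension first, as always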

Let

Let is a very flexible instruction. The following table shows its use cases.

| AlgoVision | Python | Description |
|---|---|---|
| Let('a', 'x') | a = x | Variable a is set to the value of variable x. |
| Let('a', lambda x: x**2) | a = x**2 | As soon as we compute anything on the right-hand side of the equation, we need to write it as a lambda expression. |
| Let('a', 'array', ['i']) | a = array[i] | Indexing on the right-hand side requires an additional list parameter after the second argument. |
| Let('a', lambda array, i: array[:, i]) | a = array[i] | Equivalent to the row above: indexing can also be done manually inside a lambda expression. Note that in this case, the batch dimension has to be written explicitly. |
| Let('a', 'array', ['i', lambda j: j+1]) | a = array[i, j+1] | Multiple indices and lambda expressions are also supported. |
| Let('a', 'array', [None, slice(0, None, 2)]) | a = array[:, 0::2] | None and slices are also supported. |
| Let('a', ['i'], 'x') | a[i] = x | Indexing can also be done on the left-hand side of the equation. |
| Let('a', ['i'], 'x', ['j']) | a[i] = x[j] | ...or on both sides. |
| Let(['a', 'b'], lambda x, y: (x+y, x-y)) | a, b = x+y, x-y | Multiple return values are supported. |

In its most simple form, Let obtains two arguments: a string naming the variable to which the result is written, and the value, which may be expressed via a lambda expression.

If the lambda expression returns multiple values, e.g., because a complex function is called that has two return values, the left argument can be a list of strings. That is, Let(['a', 'b'], lambda x, y: (x+y, x-y)) corresponds to a, b = x+y, x-y.

Let also supports indexing. This is denoted by an additional list argument after the left and/or the right argument. For example, Let('a', 'array', ['i']) corresponds to a = array[i], while Let('array', ['i'], 'b') corresponds to array[i] = b. Let('array', ['i'], 'array', ['j']), which corresponds to array[i] = array[j], is also supported.

Note that indexing can also be expressed through lambda expressions. For example, Let('a', 'array', ['i']) is equivalent to Let('a', lambda array, i: array[:, i]). Note how in this case the batch dimension has to be explicitly taken into account ([:, ]). Relaxed indexing on the right-hand side is only supported through lambda expressions due to its complexity. Relaxed indexing on the left-hand side is supported if exactly one probability weight tensor is in the list (e.g., Let('array', [lambda x: get_weights(x)], 'a')).
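
To give an intuition for relaxed indexing: the hard index is replaced by a probability distribution over positions, and the read becomes an expectation. An illustrative PyTorch sketch (one possible relaxation, not the library's exact scheme):

import torch

array = torch.randn(1, 8)  # batch dimension first
i = torch.tensor(2.4)      # a "soft" index between positions 2 and 3

# Weight every position by its proximity to i (beta = 5 as inverse temperature).
positions = torch.arange(array.shape[1], dtype=torch.float32)
weights = torch.softmax(-5.0 * (positions - i) ** 2, dim=0)
soft_read = (array * weights).sum(dim=1)  # expectation instead of array[:, i]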

LetInt only supports setting the variable to an integer (Python int) or list of integers (as well as the same type via lambda expressions). Note that hard integer variables should only be used if they are independent of the input values, but they may depend on the input shape.
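
For reference, the forms that LetInt accepts, per the note above (the variable names are illustrative):

LetInt('n', 8)                                 # a plain Python int
LetInt('sizes', [2, 3, 5])                     # a list of integers
LetInt('n', lambda array: array.shape[1] - 1)  # a lambda returning an int; may depend
                                               # on the input shape, not its values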

If you need help implementing your differentiable algorithm, you may schedule an appointment. This will also help me improve the documentation and usability.

🧪 Experiments

The experiments can be found in the experiments folder. Additional experiments will be added soon.

🔬 Sorting Supervision

The sorting supervision experiment can be run with

python experiments/train_sort.py

or by checking out this Colab notebook.

📖 Citing

If you used our library, please cite it as

@inproceedings{petersen2021learning,
  title={{Learning with Algorithmic Supervision via Continuous Relaxations}},
  author={Petersen, Felix and Borgelt, Christian and Kuehne, Hilde and Deussen, Oliver},
  booktitle={Conference on Neural Information Processing Systems (NeurIPS)},
  year={2021}
}

📜 License

algovision is released under the MIT license. See LICENSE for additional details.
