sequitur

sequitur is a library that lets you create and train an autoencoder for sequential data in just two lines of code. It implements three different autoencoder architectures in PyTorch, along with a predefined training loop. sequitur is ideal for working with sequential data ranging from single and multivariate time series to videos, and is geared toward those who want to get started quickly with autoencoders.

import torch
from sequitur.models import LINEAR_AE
from sequitur import quick_train

train_seqs = [torch.randn(4) for _ in range(100)] # 100 sequences of length 4
encoder, decoder, _, _ = quick_train(LINEAR_AE, train_seqs, encoding_dim=2, denoise=True)

encoder(torch.randn(4)) # => torch.tensor([0.19, 0.84])

Each autoencoder learns to represent input sequences as lower-dimensional, fixed-size vectors. This can be useful for finding patterns among sequences, clustering sequences, or converting sequences into inputs for other algorithms.
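
For example, one way to cluster sequences is to stack the learned encodings into a matrix and run an off-the-shelf clustering algorithm on it. A sketch using scikit-learn (which is an assumption here, not a sequitur dependency):

import torch
from sklearn.cluster import KMeans
from sequitur.models import LINEAR_AE
from sequitur import quick_train

# Train an autoencoder and collect the encoding of each training sequence
train_seqs = [torch.randn(4) for _ in range(100)]
encoder, _, encodings, _ = quick_train(LINEAR_AE, train_seqs, encoding_dim=2)

# Stack the encodings into a [100, 2] matrix and cluster them
X = torch.stack([z.detach() for z in encodings]).numpy()
labels = KMeans(n_clusters=3).fit_predict(X) # One cluster label per sequence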

Installation

Requires Python 3.X and PyTorch 1.2.X

You can install sequitur with pip:

$ pip install sequitur

Getting Started

1. Prepare your data

First, you need to prepare a set of example sequences to train an autoencoder on. This training set should be a list of torch.Tensors, where each tensor has shape [num_elements, *num_features]. So, if each example in your training set is a sequence of 10 5x5 matrices, then each example would be a tensor with shape [10, 5, 5].
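
For example, here's a minimal sketch of converting raw multivariate time-series data (a hypothetical NumPy array, used only for illustration) into a training set of this shape:

import numpy as np
import torch

# Hypothetical raw data: 100 examples, each a sequence of 10 timesteps with 5 features
raw_data = np.random.rand(100, 10, 5)

# Convert each example into a float tensor of shape [10, 5]
train_set = [torch.tensor(example, dtype=torch.float32) for example in raw_data]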

2. Choose an autoencoder

Next, you need to choose an autoencoder model. If you're working with sequences of numbers (e.g. time series) or 1D vectors (e.g. word vectors), then you should use the LINEAR_AE or LSTM_AE model. For sequences of 2D matrices (e.g. videos) or 3D matrices (e.g. fMRI scans), you'll want to use CONV_LSTM_AE. Each model is a PyTorch module, and can be imported like so:

from sequitur.models import CONV_LSTM_AE

More details about each model are in the "Models" section below.

3. Train the autoencoder

From here, you can either initialize the model yourself and write your own training loop, or import the quick_train function and plug in the model, training set, and desired encoding size, like so:

import torch
from sequitur.models import CONV_LSTM_AE
from sequitur import quick_train

train_set = [torch.randn(10, 5, 5) for _ in range(100)]
encoder, decoder, _, _ = quick_train(CONV_LSTM_AE, train_set, encoding_dim=4)

After training, quick_train returns the encoder and decoder models, which are PyTorch modules that can encode and decode new sequences. These can be used like so:

x = torch.randn(10, 5, 5)
z = encoder(x) # Tensor with shape [4]
x_prime = decoder(z) # Tensor with shape [10, 5, 5]
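
If you'd rather write your own training loop, a minimal sketch for LINEAR_AE might look like the following, assuming the model's forward pass returns a reconstruction of its input:

import torch
from sequitur.models import LINEAR_AE

model = LINEAR_AE(input_dim=4, encoding_dim=2)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = torch.nn.MSELoss()

train_seqs = [torch.randn(4) for _ in range(100)]
for epoch in range(50):
    epoch_loss = 0.0
    for seq in train_seqs:
        optimizer.zero_grad()
        reconstruction = model(seq) # Assumption: forward() reconstructs the input
        loss = criterion(reconstruction, seq)
        loss.backward()
        optimizer.step()
        epoch_loss += loss.item()
    print(f"Epoch {epoch}: avg loss = {epoch_loss / len(train_seqs):.4f}")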

API

Training your Model

quick_train(model, train_set, encoding_dim, verbose=False, lr=1e-3, epochs=50, denoise=False, **kwargs)

Lets you train an autoencoder with just one line of code. Useful if you don't want to create your own training loop. Training involves learning a vector encoding of each input sequence, reconstructing the original sequence from the encoding, and calculating the loss (mean-squared error) between the reconstructed input and the original input. The autoencoder weights are updated using the Adam optimizer.

Parameters:

  • model (torch.nn.Module): Autoencoder model to train (imported from sequitur.models)
  • train_set (list): List of sequences (each a torch.Tensor) to train the model on; has shape [num_examples, seq_len, *num_features]
  • encoding_dim (int): Desired size of the vector encoding
  • verbose (bool, optional (default=False)): Whether or not to print the loss at each epoch
  • lr (float, optional (default=1e-3)): Learning rate
  • epochs (int, optional (default=50)): Number of epochs to train for
  • denoise (bool, optional (default=False)): Whether or not to add noise to each input sequence during training, making the model a denoising autoencoder
  • **kwargs: Parameters to pass into model when it's instantiated

Returns:

  • encoder (torch.nn.Module): Trained encoder model; takes a sequence (as a tensor) as input and returns an encoding of the sequence as a tensor of shape [encoding_dim]
  • decoder (torch.nn.Module): Trained decoder model; takes an encoding (as a tensor) and returns a decoded sequence
  • encodings (list): List of tensors corresponding to the final vector encodings of each sequence in the training set
  • losses (list): List of average MSE values at each epoch
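
Putting it together, a call that uses all four return values might look like this:

import torch
from sequitur.models import LSTM_AE
from sequitur import quick_train

train_set = [torch.randn(10, 3) for _ in range(100)]
encoder, decoder, encodings, losses = quick_train(
  LSTM_AE, train_set, encoding_dim=7, verbose=True, epochs=50
)

print(encodings[0].shape) # => torch.Size([7]), one encoding per training sequence
print(losses[-1]) # Average MSE at the final epoch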

Models

Every autoencoder inherits from torch.nn.Module and has an encoder attribute and a decoder attribute, both of which also inherit from torch.nn.Module.

Sequences of Numbers

LINEAR_AE(input_dim, encoding_dim, h_dims=[], h_activ=torch.nn.Sigmoid(), out_activ=torch.nn.Tanh())

Consists of fully-connected layers stacked on top of each other. Can only be used if you're dealing with sequences of numbers, not vectors or matrices.

Parameters:

  • input_dim (int): Size of each input sequence
  • encoding_dim (int): Size of the vector encoding
  • h_dims (list, optional (default=[])): List of hidden layer sizes for the encoder
  • h_activ (torch.nn.Module or None, optional (default=torch.nn.Sigmoid())): Activation function to use for hidden layers; if None, no activation function is used
  • out_activ (torch.nn.Module or None, optional (default=torch.nn.Tanh())): Activation function to use for the output layer in the encoder; if None, no activation function is used

Example:

For example, to create an autoencoder that compresses sequences of 10 numbers into 4-dimensional encodings through hidden layers of size 8 and 6, use the following arguments:

import torch
from sequitur.models import LINEAR_AE

model = LINEAR_AE(
  input_dim=10,
  encoding_dim=4,
  h_dims=[8, 6],
  h_activ=None,
  out_activ=None
)

x = torch.randn(10) # Sequence of 10 numbers
z = model.encoder(x) # z.shape = [4]
x_prime = model.decoder(z) # x_prime.shape = [10]

Sequences of 1D Vectors

LSTM_AE(input_dim, encoding_dim, h_dims=[], h_activ=torch.nn.Sigmoid(), out_activ=torch.nn.Tanh())

Autoencoder for sequences of vectors, consisting of stacked LSTMs. Can be trained on sequences of varying length.

Parameters:

  • input_dim (int): Size of each sequence element (vector)
  • encoding_dim (int): Size of the vector encoding
  • h_dims (list, optional (default=[])): List of hidden layer sizes for the encoder
  • h_activ (torch.nn.Module or None, optional (default=torch.nn.Sigmoid())): Activation function to use for hidden layers; if None, no activation function is used
  • out_activ (torch.nn.Module or None, optional (default=torch.nn.Tanh())): Activation function to use for the output layer in the encoder; if None, no activation function is used

Example:

For example, to create an autoencoder that encodes sequences of 3D vectors into 7-dimensional encodings using a single hidden LSTM layer of size 64, use the following arguments:

import torch
from sequitur.models import LSTM_AE

model = LSTM_AE(
  input_dim=3,
  encoding_dim=7,
  h_dims=[64],
  h_activ=None,
  out_activ=None
)

x = torch.randn(10, 3) # Sequence of 10 3D vectors
z = model.encoder(x) # z.shape = [7]
x_prime = model.decoder(z, seq_len=10) # x_prime.shape = [10, 3]
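
And because the decoder takes seq_len as an argument, the same model can encode and decode sequences of different lengths. A sketch of training on a variable-length training set (assuming quick_train accepts one, per the model description above):

import random
import torch
from sequitur.models import LSTM_AE
from sequitur import quick_train

# Sequences of 3D vectors with lengths anywhere from 5 to 15
train_set = [torch.randn(random.randint(5, 15), 3) for _ in range(100)]
encoder, decoder, _, _ = quick_train(LSTM_AE, train_set, encoding_dim=7)

z = encoder(torch.randn(12, 3)) # z.shape = [7]
x_prime = decoder(z, seq_len=12) # x_prime.shape = [12, 3]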

Sequences of 2D/3D Matrices

CONV_LSTM_AE(input_dims, encoding_dim, kernel, stride=1, h_conv_channels=[1], h_lstm_channels=[])

Autoencoder for sequences of 2D or 3D matrices/images, loosely based on the CNN-LSTM architecture described in "Beyond Short Snippets: Deep Networks for Video Classification". Uses a CNN to create vector encodings of each image in an input sequence, and then an LSTM to create encodings of the sequence of vectors.

Parameters:

  • input_dims (tuple): Shape of each 2D or 3D image in the input sequences
  • encoding_dim (int): Size of the vector encoding
  • kernel (int or tuple): Size of the convolving kernel; use tuple to specify a different size for each dimension
  • stride (int or tuple, optional (default=1)): Stride of the convolution; use tuple to specify a different stride for each dimension
  • h_conv_channels (list, optional (default=[1])): List of hidden channel sizes for the convolutional layers
  • h_lstm_channels (list, optional (default=[])): List of hidden channel sizes for the LSTM layers

Example:

import torch
from sequitur.models import CONV_LSTM_AE

model = CONV_LSTM_AE(
  input_dims=(50, 100),
  encoding_dim=16,
  kernel=(5, 8),
  stride=(3, 5),
  h_conv_channels=[4, 8],
  h_lstm_channels=[32, 64]
)

x = torch.randn(22, 50, 100) # Sequence of 22 50x100 images
z = model.encoder(x) # z.shape = [16]
x_prime = model.decoder(z, seq_len=22) # x_prime.shape = [22, 50, 100]
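
Since quick_train passes **kwargs through to the model's constructor, the same architecture can be configured and trained in one call. A sketch (assuming, as in the earlier quick_train example, that the input dimensions are inferred from the training set):

import torch
from sequitur.models import CONV_LSTM_AE
from sequitur import quick_train

train_set = [torch.randn(22, 50, 100) for _ in range(100)]
encoder, decoder, _, _ = quick_train(
  CONV_LSTM_AE,
  train_set,
  encoding_dim=16,
  kernel=(5, 8), # Forwarded to CONV_LSTM_AE(...)
  stride=(3, 5),
  h_conv_channels=[4, 8],
  h_lstm_channels=[32, 64]
)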