ConformalLayers: A non-linear sequential neural network with associative layers

Overview

ConformalLayers is a conformal embedding of sequential layers of Convolutional Neural Networks (CNNs) that allows associativity between operations like convolution, average pooling, dropout, flattening, padding, dilation, and stride. Thanks to this associativity, sequential layers can be combined into a single operation, considerably reducing the number of operations needed to perform inference in this type of neural network.

This repository is an implementation of ConformalLayers written in Python, using Minkowski Engine and PyTorch as backends. This implementation is a first step towards the use of activation functions, like ReSPro, that can be represented as tensors, depending on the geometry model.

Please cite our SIBGRAPI'21 paper if you use this code in your research. The paper presents a complete description of the library:

@InProceedings{sousa_et_al-sibgrapi-2021,
  author    = {Sousa, Eduardo V. and Fernandes, Leandro A. F. and Vasconcelos, Cristina N.},
  title     = {{C}onformal{L}ayers: a non-linear sequential neural network with associative layers},
  booktitle = {Proceedings of the 2021 34th SIBGRAPI Conference on Graphics, Patterns and Images},
  year      = {2021},
}

Please, let Eduardo Vera Sousa (http://www.ic.uff.br/~eduardovera), Leandro A. F. Fernandes (http://www.ic.uff.br/~laffernandes), and Cristina Nader Vasconcelos (http://www2.ic.uff.br/~crisnv/index.php) know if you want to contribute to this project. Also, do not hesitate to contact them if you encounter any problems.

Contents:

  1. Requirements
  2. How to Install ConformalLayers
  3. Running Examples
  4. Running Unit Tests
  5. Documentation
  6. License

1. Requirements

Make sure that you have the following tools before attempting to use ConformalLayers.

Required tools:

  • Python 3 interpreter.

  • PyTorch and Minkowski Engine, used as backends by this implementation.

Optional tools to use ConformalLayers:

  • Virtual environment to create an isolated workspace for a Python application.

  • Docker to create a container to run ConformalLayers.

2. How to Install ConformalLayers

No magic needed here. Just run:

python setup.py install
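
If you prefer working inside a virtual environment (optional, see the Requirements section), a typical sequence looks like the sketch below; the exact activation command depends on your platform:

python -m venv .venv
source .venv/bin/activate    # on Windows: .venv\Scripts\activate
python setup.py install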

3. Running Examples

The basic steps for running the examples of ConformalLayers look like this:

cd <ConformalLayers-dir>/Experiments/<experiment-name>

For Experiments I and II, each file refers to an experiment described in the main paper. Thus, in order to run BaseReSProNet with the FashionMNIST dataset, for example, all you have to do is:

python BaseReSProNet.py --dataset=FashionMNIST

The values that can be used for the dataset argument are:

  • MNIST
  • FashionMNIST
  • CIFAR10

The loader of each dataset is described in the Experiments/utils/datasets.py file.

Other arguments for the script files in Experiments I and II are:

  • epochs (int value)
  • batch_size (int value)
  • learning_rate (float value)
  • optimizer (adam or rmsprop)
  • dropout (float value)
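
For instance, a run that sets all of these arguments explicitly could look like the command below; the values are illustrative only, and the defaults are defined in each script:

python BaseReSProNet.py --dataset=CIFAR10 --epochs=30 --batch_size=64 --learning_rate=0.001 --optimizer=adam --dropout=0.5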

For Experiments III and IV, since we compute the amount of memory used, we use an external file to orchestrate the calls and make sure we have a clean environment for the next iterations. Such orchestrators are written in the files with the suffix _manager.py.

You can also run the files that correspond to each architecture individually, without the orchestrator. To run the D3ModNetCL architecture, for example, just run:

python D3ModNetCL.py

The arguments for the non-orchestrated scripts in Experiments III and IV are:

  • num_inferences (int value)
  • batch_size (int value)
  • depth (int value, Experiment III only)
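
Analogously, the arguments can be passed directly on the command line; the values below are illustrative only:

python D3ModNetCL.py --num_inferences=1000 --batch_size=32 --depth=3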

The files in the networks folder contain the description of each architecture used in our experiments and present the usage of the classes and methods of our library.

4. Running Unit Tests

The basic steps for running the unit tests of ConformalLayers look like this:

cd <ConformalLayers-dir>/tests

To run all tests, simply run:

python test_all.py

To run the tests for each module, run:

python test_<module_name>.py

5. Documentation

Here you find a brief description of the modules, classes, functions, and parameters available to the user. The detailed documentation is not ready yet.

Contents:

  • Modules
  • Convolution
  • Pooling
  • Activation
  • Regularization

Modules

Here we present the main modules implemented in our framework. Most of the modules are used just like in PyTorch, so users with some background in this framework benefit from this implementation. For users not familiar with PyTorch, the usage is still quite simple and intuitive.

  • Conv1d, Conv2d, Conv3d, ConvNd – Convolution operation implemented for n-D signals.

  • AvgPool1d, AvgPool2d, AvgPool3d, AvgPoolNd – Average pooling operation implemented for n-D signals.

  • BaseActivation – The abstract class for activation function layers. To extend the library, one shall implement this class.

  • ReSPro – The layer that corresponds to the ReSPro activation function. Such function is a linear function with non-linear behavior that can be encoded as a tensor. The non-linearity of this function is controlled by a parameter α that can be provided as an argument or inferred from the data.

  • Regularization – In this version, Dropout is the only regularization available. In this approach, during the training phase, we randomly shut down some neurons with a probability p, passed as an argument to this module.

These modules are composed into a ConformalLayers object in a way very similar to the pure PyTorch approach. The class ConformalLayers plays an important role in this task, as you can see by comparing the code snippets below:

# This one is built with pure PyTorch
import torch.nn as nn

class D3ModNet(nn.Module):
    def __init__(self):
        super(D3ModNet, self).__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels=3, out_channels=32, kernel_size=3),
            nn.ReLU(),  # standard PyTorch activation; plain PyTorch has no ReSPro
            nn.AvgPool2d(kernel_size=2, stride=2),
            nn.Conv2d(in_channels=32, out_channels=32, kernel_size=3),
            nn.ReLU(),
            nn.AvgPool2d(kernel_size=2, stride=2),
            nn.Conv2d(in_channels=32, out_channels=32, kernel_size=3),
            nn.ReLU(),
            nn.AvgPool2d(kernel_size=2, stride=2),
        )
        self.fc1 = nn.Linear(128, 10)

    def forward(self, x):
        x = self.features(x)
        x = x.view(x.shape[0], -1)
        x = self.fc1(x)
        return x


# This one is built with ConformalLayers
import torch.nn as nn
import ConformalLayers as cl

class D3ModNetCL(nn.Module):
    def __init__(self):
        super(D3ModNetCL, self).__init__()
        self.features = cl.ConformalLayers(
            cl.Conv2d(in_channels=3, out_channels=32, kernel_size=3),
            cl.ReSPro(),
            cl.AvgPool2d(kernel_size=2, stride=2),
            cl.Conv2d(in_channels=32, out_channels=32, kernel_size=3),
            cl.ReSPro(),
            cl.AvgPool2d(kernel_size=2, stride=2),
            cl.Conv2d(in_channels=32, out_channels=32, kernel_size=3),
            cl.ReSPro(),
            cl.AvgPool2d(kernel_size=2, stride=2),
        )
        self.fc1 = nn.Linear(128, 10)

    def forward(self, x):
        x = self.features(x)
        x = x.view(x.shape[0], -1)
        x = self.fc1(x)
        return x

They look pretty much like the same code, right? That's because we've implemented ConformalLayers to make the transition as smooth as possible for PyTorch users. Most of the modules have almost the same method signatures as the ones provided by PyTorch.
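
To get a feel for the shapes involved, the sketch below feeds a random CIFAR-10-sized batch through D3ModNetCL as defined above (assuming the ConformalLayers block accepts dense tensors, as the snippet suggests). With 3×32×32 inputs, the three convolution/pooling stages leave 32 channels of spatial size 2×2, which matches the 128 input features of the final linear layer:

import torch

model = D3ModNetCL()
x = torch.randn(8, 3, 32, 32)  # batch of 8 images, 3 channels, 32x32 pixels
logits = model(x)
print(logits.shape)  # expected: torch.Size([8, 10])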

Convolution

The convolution operation implemented in ConformalLayers by the modules ConvNd, Conv1d, Conv2d, and Conv3d is almost the same as the one implemented in PyTorch, but we do not allow bias. This is mostly due to the construction of our logic when building the representation with tensors. Although we have a few ideas on how to include bias in this representation, they are not included in the current version. The parameters are detailed below and are originally available in the PyTorch convolution documentation page. The exception is the padding_mode parameter, which is always set to 'zeros' in our implementation.

  • in_channels (int) – Number of channels in the input image

  • out_channels (int) – Number of channels produced by the convolution

  • kernel_size (int or tuple) – Size of the convolving kernel

  • stride (int or tuple, optional) – Stride of the convolution. Default: 1

  • padding (int, tuple or str, optional) – Padding added to both sides of the input. Default: 0

  • dilation (int or tuple, optional) – Spacing between kernel elements. Default: 1

  • groups (int, optional) – Number of blocked connections from input channels to output channels. Default: 1
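
As a quick illustration (assuming the constructor mirrors the PyTorch signature described above, except for bias and padding_mode), a strided convolution layer could be declared like this:

import ConformalLayers as cl

# 3 input channels, 16 output channels, 5x5 kernel, stride 2, padding 2
conv = cl.Conv2d(in_channels=3, out_channels=16, kernel_size=5, stride=2, padding=2)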

Pooling

In our current implementation, we only support average pooling, which is implemented in the modules AvgPoolNd, AvgPool1d, AvgPool2d, and AvgPool3d. The parameter list, originally available in the PyTorch average pooling documentation page, is described below:

  • kernel_size – the size of the window

  • stride – the stride of the window. Default value is kernel_size

  • padding – implicit zero padding to be added on both sides

  • ceil_mode – when True, will use ceil instead of floor to compute the output shape

  • count_include_pad – when True, will include the zero-padding in the averaging calculation
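
Again assuming the PyTorch-like signature listed above, an average pooling layer with a 3×3 window and stride 2 could be declared as:

import ConformalLayers as cl

pool = cl.AvgPool2d(kernel_size=3, stride=2, padding=1)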

Activation

Our activation module has the ReSPro activation function implemented natively. By using reflections, scalings, and projections on a hypersphere in higher dimensions, we created a non-linear, differentiable, associative activation function that can be represented in tensor form. It has only one parameter, which controls how close to linear or non-linear the curve is. More details are available in the main paper.

  • alpha (float, optional) – controls the non-linearity of the curve. If it is not provided, it is automatically estimated from the data.
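
Both usage modes would then look like the sketch below (the alpha value is illustrative only):

import ConformalLayers as cl

respro_fixed = cl.ReSPro(alpha=1.0)  # non-linearity controlled by the given alpha
respro_auto = cl.ReSPro()            # alpha estimated automatically from the data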

Regularization

In the regularization module, Dropout is the only technique implemented in this version. It is based on the idea of randomly shutting down some neurons in order to prevent overfitting. It takes only two parameters, listed below. This list was originally available in the PyTorch documentation page.

  • p – probability of an element to be zeroed. Default: 0.5

  • inplace – If set to True, will do this operation in-place. Default: False
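
Assuming the module is exposed as cl.Dropout with the PyTorch-like parameters above (the class name is an assumption here), a dropout layer could be declared as:

import ConformalLayers as cl

dropout = cl.Dropout(p=0.3)  # each element is zeroed with probability 0.3 during training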

6. License

This software is licensed under the GNU General Public License v3.0. See the LICENSE file for details.
