Official NumPy Implementation of Deep Networks from the Principle of Rate Reduction (2021)

Overview

Deep Networks from the Principle of Rate Reduction

This repository is the official NumPy implementation of the paper Deep Networks from the Principle of Rate Reduction (2021) by Kwan Ho Ryan Chan* (UC Berkeley), Yaodong Yu* (UC Berkeley), Chong You* (UC Berkeley), Haozhi Qi (UC Berkeley), John Wright (Columbia), and Yi Ma (UC Berkeley). For the PyTorch version of ReduNet, please visit https://github.com/ryanchankh/redunet.

What is ReduNet?

ReduNet is a deep neural network constructed naturally by deriving the gradients of the Maximal Coding Rate Reduction (MCR2) [1] objective. Every layer of this network can be interpreted in terms of its mathematical operations, and the network as a whole is constructed in a purely feed-forward manner. In addition, by imposing shift-invariance on the network, the convolutional operators can be derived from the data and the MCR2 objective alone, making the network design principled and interpretable.


Figure: Weights and operations for one layer of ReduNet

[1] Yu, Yaodong, Kwan Ho Ryan Chan, Chong You, Chaobing Song, and Yi Ma. "Learning diverse and discriminative representations via the principle of maximal coding rate reduction" Advances in Neural Information Processing Systems 33 (2020).
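
To make the construction concrete, below is a simplified NumPy sketch of a single ReduNet layer update (the function name and exact scaling are ours, not the repository's): the expansion operator E and the class-wise compression operators C_j are computed from the current features, one gradient-ascent step is taken on the MCR2 objective, and the features are projected back onto the unit sphere. The sketch uses hard one-hot labels as in the feed-forward construction; at test time the hard memberships are replaced by the soft assignments described in the paper.

import numpy as np

def redunet_layer_update(Z, Pi, eta=0.5, eps=0.1):
    """Simplified single-layer ReduNet update (illustrative sketch only).

    Z  : (d, m) feature matrix, one column per sample
    Pi : (k, m) one-hot class membership matrix
    """
    d, m = Z.shape
    k = Pi.shape[0]

    # Expansion operator E, from the coding rate of all features
    alpha = d / (m * eps ** 2)
    E = alpha * np.linalg.inv(np.eye(d) + alpha * Z @ Z.T)
    grad = E @ Z

    # Class-wise compression operators C_j, from the class-conditional rates
    for j in range(k):
        idx = Pi[j] > 0
        m_j = int(idx.sum())
        if m_j == 0:
            continue
        alpha_j = d / (m_j * eps ** 2)
        Z_j = Z[:, idx]
        C_j = alpha_j * np.linalg.inv(np.eye(d) + alpha_j * Z_j @ Z_j.T)
        grad[:, idx] -= (m_j / m) * (C_j @ Z_j)

    # Gradient-ascent step, then project features back onto the unit sphere
    Z_next = Z + eta * grad
    return Z_next / (np.linalg.norm(Z_next, axis=0, keepdims=True) + 1e-8)

Stacking many such updates, with E and the C_j stored as that layer's "weights", yields the feed-forward network illustrated in the figure above.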

Requirements

This codebase is written for Python 3. To install the necessary Python packages, run conda create --name redunet_official --file requirements.txt.

File Structure

Training

To train a model, run one of the training files, each of which is named after its dataset. For the commands to reproduce our experimental results, see the Experiments section below. All training files are listed below:

  • gaussian2d.py: mixture of Gaussians in 2-dimensional real space (a toy data-generation sketch follows this list)
  • gaussian3d.py: mixture of Gaussians in 3-dimensional real space
  • iris.py: Iris dataset from UCI Machine Learning Repository (link)
  • mice.py: Mice Protein Expression Data Set (link)
  • mnist1d.py: MNIST dataset; each image is converted to multi-channel polar form and the model is trained to have rotational invariance
  • mnist2d.py: MNIST dataset; each image is single-channel and the model is trained to have translational invariance
  • sinusoid.py: mixture of sinusoidal waves, with single- and multi-channel data
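
As a rough illustration of the synthetic data the Gaussian scripts operate on, the sketch below draws a 2D mixture of Gaussians around class-specific directions and normalizes each sample to the unit circle (ReduNet features are normalized to the unit sphere). The function and its defaults only mirror the --samples and --noise flags used in the commands below; the repository's actual generation code lives in gaussian2d.py and may differ.

import numpy as np

def gaussian2d_mixture(samples=500, noise=0.1, classes=2, seed=0):
    """Toy 2D mixture of Gaussians on the unit circle (illustrative only)."""
    rng = np.random.default_rng(seed)
    X, y = [], []
    for j in range(classes):
        theta = 2 * np.pi * j / classes                    # class-specific direction
        mean = np.array([np.cos(theta), np.sin(theta)])
        pts = mean + noise * rng.standard_normal((samples, 2))
        pts /= np.linalg.norm(pts, axis=1, keepdims=True)  # project onto the unit circle
        X.append(pts)
        y.append(np.full(samples, j))
    return np.vstack(X), np.concatenate(y)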

Evaluation and Plotting

Evaluation and plotting are performed within each training file; the underlying functions are located in evaluate.py and plot.py.

Experiments

Run the following commands to train, test, evaluate and plot figures for different settings:

Main Paper

Gaussian 2D: Figure 2(a) - (c)

$ python3 gaussian2d.py --data 1 --noise 0.1 --samples 500 --layers 2000 --eta 0.5 --eps 0.1

Gaussian 3D: Figure 2(d) - (f)

$ python3 gaussian3d.py --data 1 --noise 0.1 --samples 500 --layers 2000 --eta 0.5 --eps 0.1

Rotational-Invariant MNIST: Figure 3(a) - (d)

$ python3 mnist1d.py --samples 10 --channels 15 --outchannels 20 --time 200 --classes 0 1 2 3 4 5 6 7 8 9 --layers 40 --eta 0.5 --eps 0.1  --ksize 5

Translational-Invariant MNIST: Figure 3(e) - (h)

$ python3 mnist2d.py --classes 0 1 2 3 4 5 6 7 8 9 --samples 10 --layers 25 --outchannels 75 --ksize 9 --eps 0.1 --eta 0.5

Appendix

For Iris and Mice Protein:

$ python3 iris.py --layers 4000 --eta 0.1 --eps 0.1
$ python3 mice.py --layers 4000 --eta 0.1 --eps 0.1

For 1D signals (Sinusoids):

$ python3 sinusoid.py --time 150 --samples 400 --channels 7 --layers 2000 --eps 0.1 --eta 0.1 --data 7 --kernel 3

For 1D signals (Rotational Invariant MNIST):

$ python3 mnist1d.py --classes 0 1 --samples 2000 --time 200 --channels 5 --layers 3500 --eta 0.5 --eps 0.1

For 2D translational invariant MNIST data:

$ python3 mnist2d.py --classes 0 1 --samples 500 --layers 2000 --eta 0.5 --eps 0.1

Reference

For technical details and full experimental results, please check the paper. Please consider citing our work if you find it helpful to yours:

@article{chan2020deep,
  title={Deep networks from the principle of rate reduction},
  author={Chan, Kwan Ho Ryan and Yu, Yaodong and You, Chong and Qi, Haozhi and Wright, John and Ma, Yi},
  journal={arXiv preprint arXiv:2010.14765},
  year={2020}
}

License and Contributing

  • This README is formatted based on paperswithcode.
  • Feel free to post issues via GitHub.

Contact

Please contact [email protected] and [email protected] if you have any questions about the code.
