NeuralCompression is a Python repository dedicated to research on neural networks that compress data

Overview


About

NeuralCompression is a Python repository dedicated to research on neural networks that compress data. The repository includes tools such as JAX-based entropy coders, image compression models, video compression models, and metrics for image and video evaluation.

NeuralCompression is alpha software. The project is under active development. The API will change as we make releases, potentially breaking backwards compatibility.

Installation

NeuralCompression is a project currently under development. You can install the repository in development mode.

PyPI Installation

First, install PyTorch according to the directions from the PyTorch website. Then, you should be able to run

pip install neuralcompression

to get the latest version from PyPI.

Development Installation

First, clone the repository and navigate to the NeuralCompression root directory. To match your local environment to the test environment, run

pip install -r dev-requirements.txt

Then, you can install the package in development mode by running

pip install -e .

If you are not interested in matching the test environment, then you only need to apply the second step to install.

Repository Structure

We use a 2-tier repository structure. The neuralcompression package contains a core set of tools for doing neural compression research. Code committed to the core package requires stricter linting, high code quality, and rigorous review. The projects folder contains code for reproducing papers and training baselines. Code in this folder is not linted aggressively, we don't enforce type annotations, and it's okay to omit unit tests.

The 2-tier structure enables rapid iteration and reproduction via code in projects that is built on a backbone of high-quality code in neuralcompression.

neuralcompression

  • neuralcompression - base package
    • data - PyTorch data loaders for various data sets
    • entropy_coders - lossless compression algorithms in JAX
      • craystack - an implementation of the rANS algorithm with the craystack API
    • functional - methods for image warping, information cost, etc.
    • layers - building blocks for compression models
    • metrics - torchmetrics classes for assessing model performance
    • models - complete compression models

projects

Getting Started

For an example of package usage, see the Scale Hyperprior project, which shows how to train an image compression model in PyTorch Lightning. See DVC for a video compression example.

Contributions

Please read our CONTRIBUTING guide and our CODE_OF_CONDUCT prior to submitting a pull request.

We test all pull requests. We rely on this for reviews, so please make sure any new code is tested. Tests for neuralcompression go in the tests folder in the root of the repository. Tests for individual projects go in those projects' own tests folder.

We use black for formatting, isort for import sorting, flake8 for linting, and mypy for type checking. We enforce these on the neuralcompression package, but not in the projects folder.

License

NeuralCompression is MIT licensed, as found in the LICENSE file.

Cite

If you find NeuralCompression useful in your work, feel free to cite

@misc{muckley2021neuralcompression,
    author={Matthew Muckley and Jordan Juravsky and Daniel Severo and Mannat Singh and Quentin Duval and Karen Ullrich},
    title={NeuralCompression},
    howpublished={\url{https://github.com/facebookresearch/NeuralCompression}},
    year={2021}
}
Comments
  • Dependency and Build Fixes

    Dependency and Build Fixes

    This is a potpourri of fixes.

    Dependencies

    Currently, we require very specific package versions for any install. This is too restrictive for a research package, where researchers may have a variety of reasons to carefully tailor their environment. This PR resolves the issue by allowing general installation to use flexible package versions. To maintain CI stability and functionality, the pinned version requirements have been moved to the dev requirements.

    Build

    We build our C++ extension using torch.utils.cpp_extension.load instead of setup.py. This removes PyTorch as an install-time dependency and removes the need for us to distribute binaries. It required adding ninja as a dependency; hopefully we can get rid of it eventually, but for now this should fix distribution.

    CI testing

    isort was misconfigured and missed a lot of import changes. I fixed the config and ran it on all files to update import sorting.

    Other

    While implementing this I also found a few other minor issues with respect to versioning, tests, and formatting that I resolved.

    This PR makes NeuralCompression require Python 3.8 due to its usage of importlib.

    Testing

    Testing was done via CI.

    CLA Signed 
    opened by mmuckley 4
  • Documentation builds on ReadTheDocs are failing

    Documentation builds on ReadTheDocs are failing

    Bug

    Documentation builds on ReadTheDocs are failing.

    Steps

    See here: https://readthedocs.org/projects/neuralcompression/builds/15172473/

    Expected behavior

    Docs should build without error.

    Environment

    ReadTheDocs

    Context

    See here: https://readthedocs.org/projects/neuralcompression/builds/15172473/

    bug 
    opened by mmuckley 4
  • `ContinuousEntropy` layer

    `ContinuousEntropy` layer

    Abstract base class (ABC) for implementing continuous entropy layers.

    The abstract class pre-computes integer probability tables based on a prior distribution, which can be used across different platforms by a range encoder and decoder. The class also provides abstract methods for compression, decompression, quantization, and reconstruction.
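
    As an illustration of that interface, here is a minimal sketch of such an abstract base class. The method names, signatures, and table-handling details below are assumptions for exposition, not the actual neuralcompression API.

    # Illustrative sketch only; not the real neuralcompression.layers.ContinuousEntropy.
    import abc
    from typing import Sequence

    from torch import Tensor
    from torch.distributions import Distribution
    from torch.nn import Module


    class ContinuousEntropySketch(Module, abc.ABC):
        def __init__(self, prior: Distribution):
            super().__init__()
            self.prior = prior
            # A concrete layer would pre-compute integer probability tables from
            # ``prior`` here (e.g. via a PMF-to-quantized-CDF conversion) so that
            # a range encoder/decoder can reproduce them exactly on any platform.

        @abc.abstractmethod
        def quantize(self, bottleneck: Tensor) -> Tensor:
            ...

        @abc.abstractmethod
        def compress(self, bottleneck: Tensor) -> bytes:
            ...

        @abc.abstractmethod
        def decompress(self, data: bytes, shape: Sequence[int]) -> Tensor:
            ...

        @abc.abstractmethod
        def reconstruct(self, quantized: Tensor) -> Tensor:
            ...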

    enhancement CLA Signed 
    opened by 0x00b1 4
  • `NonNegativeParameterization` layer

    `NonNegativeParameterization` layer

    closes #86

    Non-negative parameterization as required by generalized divisive normalization (GDN) activations. The parameter is subjected to an invertible transformation that slows down the learning rate for small values.
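
    The transform itself is not spelled out in this summary. For intuition, a minimal sketch of one common choice, a square-root reparameterization in the spirit of the original GDN reference code, is given below; the 2 ** -18 offset, the clamping behaviour, and the attribute names are assumptions, not the verified neuralcompression implementation (which would also use a gradient-aware lower bound rather than a plain clamp).

    # Sketch of a square-root reparameterization for non-negative parameters.
    # Details (offset value, plain clamp) are assumptions, not the verified API.
    import torch
    from torch import Tensor
    from torch.nn import Module


    class NonNegativeParameterizationSketch(Module):
        def __init__(self, initial_value: Tensor, minimum: float = 0.0, offset: float = 2 ** -18):
            super().__init__()
            self._offset = offset
            self._bound = (minimum + offset ** 2) ** 0.5
            # Store the square root; squaring in forward() keeps the result
            # non-negative and shrinks the effective learning rate near zero.
            self.initialized = torch.sqrt(
                torch.clamp(initial_value + offset ** 2, min=offset ** 2)
            )

        def forward(self, x: Tensor) -> Tensor:
            return torch.clamp(x, min=self._bound) ** 2 - self._offset ** 2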

    A brief usage example:

    import torch
    import torch.nn.functional
    from torch import Tensor
    from torch.nn import Module, Parameter

    # Relative import: this snippet is written as if it lives inside
    # neuralcompression/layers, next to _non_negative_parameterization.py.
    from ._non_negative_parameterization import NonNegativeParameterization
    
    
    class GeneralizedDivisiveNormalization(Module):
        def __init__(
            self,
            in_channels: int,
            inverse: bool = False,
            beta_min: float = 1e-6,
            gamma_init: float = 0.1,
        ):
            super(GeneralizedDivisiveNormalization, self).__init__()
    
            self._inverse = inverse
    
            self._reparameterized_beta = NonNegativeParameterization(
                torch.ones(in_channels),
                minimum=beta_min,
            )
    
            self._beta = Parameter(
                self._reparameterized_beta.initialized,
            )
    
            self._reparameterized_gamma = NonNegativeParameterization(
                gamma_init * torch.eye(in_channels),
            )
    
            self._gamma = Parameter(
                self._reparameterized_gamma.initialized,
            )
    
        def forward(self, x: Tensor) -> Tensor:
            _, channels, _, _ = x.size()
    
            # 1x1 convolution: per-channel weighted sum of squared inputs
            # (gamma) plus the bias term (beta).
            y = torch.nn.functional.conv2d(
                x ** 2,
                torch.reshape(
                    self._reparameterized_gamma(self._gamma),
                    (channels, channels, 1, 1),
                ),
                self._reparameterized_beta(self._beta),
            )
    
            if self._inverse:
                return x * torch.sqrt(y)
    
            return x * torch.rsqrt(y)
    
    import torch
    import torch.testing
    
    from neuralcompression.layers import GeneralizedDivisiveNormalization
    
    
    class TestGeneralizedDivisiveNormalization:
        def test_backward(self):
            x = torch.rand((1, 32, 16, 16), requires_grad=True)
    
            generalized_divisive_normalization = GeneralizedDivisiveNormalization(32)
    
            y = generalized_divisive_normalization(x)
    
            y.backward(x)
    
            assert y.shape == x.shape
    
            assert x.grad is not None
    
            assert x.grad.shape == x.shape
    
            torch.testing.assert_allclose(
                x / torch.sqrt(1 + 0.1 * (x ** 2)),
                y,
            )
    
            generalized_divisive_normalization = GeneralizedDivisiveNormalization(
                32,
                inverse=True,
            )
    
            y = generalized_divisive_normalization(x)
    
            y.backward(x)
    
            assert y.shape == x.shape
    
            assert x.grad is not None
    
            assert x.grad.shape == x.shape
    
            torch.testing.assert_allclose(
                x * torch.sqrt(1 + 0.1 * (x ** 2)),
                y,
            )
    
    enhancement CLA Signed 
    opened by 0x00b1 4
  • pad image at inference time, remove resize

    pad image at inference time, remove resize

    Changes

    At inference time, pad the image instead of doing an interpolation-based resize, which gives poor results (e.g., on PSNR) when the input image height and/or width is not exactly divisible by the downsampling factor (2^{number of downsampling layers}).
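
    A minimal sketch of the pad-then-crop pattern in plain PyTorch (the helper names and the reflect padding mode here are illustrative, not the exact code in this PR):

    # Illustrative pad/crop helpers; not the exact code from this PR.
    import torch
    import torch.nn.functional as F


    def pad_to_multiple(image: torch.Tensor, factor: int):
        """Reflect-pad an NCHW image so that H and W are divisible by ``factor``."""
        _, _, height, width = image.shape
        pad_h = (factor - height % factor) % factor
        pad_w = (factor - width % factor) % factor
        padded = F.pad(image, (0, pad_w, 0, pad_h), mode="reflect")
        return padded, (height, width)


    def crop_to_original(image: torch.Tensor, size) -> torch.Tensor:
        height, width = size
        return image[..., :height, :width]


    # e.g. a model with 4 downsampling layers needs factor = 2 ** 4 = 16
    x = torch.rand(1, 3, 250, 250)
    padded, original_size = pad_to_multiple(x, factor=16)
    assert padded.shape[-2:] == (256, 256)
    restored = crop_to_original(padded, original_size)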

    CLA Signed 
    opened by desi-ivanova 3
  • Replace license docstrings with comments

    Replace license docstrings with comments

    The intention was three-fold:

    1. consolidate copyright formatting across .py sources
    2. simplify the implementation of #100
    3. remove copyright headers from module documentation

    Changes

    • [x] replaces license docstrings with comments
    CLA Signed 
    opened by 0x00b1 3
  • survival_function op

    survival_function op

    closes #75

    Survival function of x. Generally defined as 1 - distribution.cdf(x).

    The unit test checks that the returned result matches that of scipy.stats.norm.sf.
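
    A minimal sketch of the op and the kind of cross-check the unit test performs (the real neuralcompression.functional.survival_function signature may differ):

    # Illustrative survival function with a scipy cross-check.
    import scipy.stats
    import torch


    def survival_function(x: torch.Tensor, distribution) -> torch.Tensor:
        # S(x) = P(X > x) = 1 - CDF(x)
        return 1.0 - distribution.cdf(x)


    x = torch.linspace(-3.0, 3.0, steps=7)
    normal = torch.distributions.Normal(0.0, 1.0)

    expected = torch.tensor(scipy.stats.norm.sf(x.numpy()), dtype=torch.float32)
    torch.testing.assert_close(survival_function(x, normal), expected)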

    enhancement CLA Signed 
    opened by 0x00b1 3
  • Update triggers for CI

    Update triggers for CI

    This PR alters the triggers for continuous integration. Previously, we triggered all tests on both pushes and pull requests. This meant that within a PR we would have "duplicate" (but not really duplicate) checks. What we really want on a PR is just the PR check, so we'll keep that.

    It is also nice to be able to trigger CI when pushing to a branch. This PR removes that trigger but replaces it with workflow_dispatch, which allows a user to trigger CI from the GitHub UI. So we remove quite a bit of duplicated testing at the cost of making users click a button if they want to test their code before opening a PR.

    Note that this PR only applies to people who push to branches of the repository.

    CLA Signed 
    opened by mmuckley 3
  • HiFiC modules

    HiFiC modules

    Implements the following modules from Mentzer et al. (2020):

    • HiFiCDiscriminator
    • HiFiCEncoder
    • HiFiCGenerator
    @misc{mentzer2020highfidelity,
          title={High-Fidelity Generative Image Compression}, 
          author={Fabian Mentzer and George Toderici and Michael Tschannen and Eirikur Agustsson},
          year={2020},
          eprint={2006.09965},
          archivePrefix={arXiv},
          primaryClass={eess.IV}
    }
    

    Originally implemented in TensorFlow Compression (TFC) by the author (@relational).

    CLA Signed 
    opened by 0x00b1 3
  • remove metadata from __init__.py

    remove metadata from __init__.py

    Changes

    The metadata special variables from neuralcompression.__init__ were removed.

    This metadata was not being used and is now available from setup.cfg.

    CLA Signed 
    opened by 0x00b1 2
  • Update PyTorch to 1.10.0

    Update PyTorch to 1.10.0

    Closes #127.

    Changes

    • [x] Replaced torch.testing.assert_equal with torch.testing.assert_close
    • [x] Updates torch to 1.10.0
    • [x] Updates torchvision to 0.11.1
    enhancement CLA Signed 
    opened by 0x00b1 2
  • Implement PQ-MIM compression paper

    Implement PQ-MIM compression paper

    opened by mmuckley 0
  • Upstream Google autoencoder models to CompressAI

    Upstream Google autoencoder models to CompressAI

    At the moment we have several models that are already implemented in CompressAI (e.g., Scale Hyperprior, Mean-Scale Hyperprior). At this point CompressAI has pretty good adoption, so we should be able to remove these from our repository and depend on CompressAI's implementations instead.

    By default CompressAI doesn't handle reflective image padding for users, so if desired we could include wrappers like those in PR #185 to handle this for users unfamiliar with how these models work.

    enhancement 
    opened by mmuckley 1
Releases(v0.2.1)
  • v0.2.1(Jan 12, 2022)

    This release covers a few small fixes from PRs #171 and #172.

    Dependencies

    • To retrieve versioning information, we now use importlib. This is included in the standard library only from Python 3.8, so NeuralCompression now requires Python >= 3.8 (#171). A short illustrative snippet follows this list.
    • Install requirements are flexible, whereas dev requirements are pinned (#171). This should improve CI stability while giving researchers the flexibility to tailor their environments when using NeuralCompression.
    • torch has been removed as a build dependency (#172).
    • Other build dependencies have been modified to be flexible (#172).
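
    For reference, the importlib-based version lookup mentioned above amounts to something like the following (illustrative only):

    # Illustrative: reading the installed package's version with importlib.metadata,
    # which is in the standard library from Python 3.8 onward.
    from importlib.metadata import version

    __version__ = version("neuralcompression")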

    Build System

    • C++ code from _pmf_to_quantized_cdf introduced compilation requirements when running setup.py. Since we didn't configure our build system to handle specific operating systems, this caused a failed release upload to PyPI. The build system has been altered to use torch.utils.cpp_extension.load (sketched below), which defers compilation to the user after package installation. We would like to improve this further at some point, but the modifications from #171 get the package stable. Note: there is a reasonable chance this could fail on non-Linux operating systems such as Windows. Those users will still be able to use the other package features that don't rely on _pmf_to_quantized_cdf.
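
    A minimal sketch of the deferred-compilation pattern with torch.utils.cpp_extension.load; the source path and function name below are placeholders, not the actual files in the repository:

    # Hypothetical sketch of JIT-compiling a C++ extension at import time.
    # Compilation happens on first use (requiring ninja and a C++ toolchain on
    # the user's machine) rather than at `pip install` time.
    from torch.utils.cpp_extension import load

    _extension = load(
        name="_pmf_to_quantized_cdf",
        sources=["neuralcompression/cpp/pmf_to_quantized_cdf.cpp"],  # placeholder path
        verbose=False,
    )

    # Compiled functions are then available as attributes of the returned module,
    # e.g. _extension.pmf_to_quantized_cdf(pmf, precision)  # hypothetical name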

    Other

    • Fixed a linting issue where isort was not checking in CI whether imports were properly sorted (#171).
    • Fixed a random test issue (#171).
  • v0.2.0(Dec 13, 2021)

    NeuralCompression is a PyTorch-based Python package intended to simplify neural network-based compression research. It is similar to (and shares some of the functionality of) excellent libraries like TensorFlow Compression and CompressAI.

    The major theme of the v0.2.0 release is autoencoders, particularly features useful for implementing the existing models of Ballé et al. and for extending them in forthcoming research. In addition, 0.2.0 brings some code organization changes and published documentation. I recommend reading the new “Image Compression” example to see some of these changes.

    API Additions

    Data (neuralcompression.data)

    Distributions (neuralcompression.distributions)

    • NoisyNormal: a normal distribution with additive independent and identically distributed (i.i.d.) uniform noise.
    • UniformNoise: adapts a continuous distribution via additive independent and identically distributed (i.i.d.) uniform noise.

    Functional (neuralcompression.functional)

    • estimate_tails: estimates approximate tail quantiles.
    • log_cdf: logarithm of the distribution’s cumulative distribution function (CDF).
    • log_expm1: logarithm of e^{x} - 1.
    • log_ndtr: logarithm of the normal cumulative distribution function (CDF).
    • log_survival_function: logarithm of a distribution’s survival function evaluated at x.
    • lower_bound: torch.maximum with a gradient for x < bound.
    • lower_tail: approximates lower tail quantile for range coding.
    • ndtr: the normal cumulative distribution function (CDF).
    • pmf_to_quantized_cdf: transforms a probability mass function (PMF) into a quantized cumulative distribution function (CDF) for entropy coding.
    • quantization_offset: computes a distribution-dependent quantization offset.
    • soft_round_conditional_mean: conditional mean of x given noisy soft rounded values.
    • soft_round_inverse: inverse of soft_round.
    • soft_round: differentiable approximation of torch.round (see the sketch after this list).
    • survival_function: survival function of x. Generally defined as 1 - distribution.cdf(x).
    • upper_tail: approximates upper tail quantile for range coding.
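
    As a reference point for the soft_round entry above, one published formulation (Agustsson & Theis, 2020) is sketched below; the neuralcompression implementation may differ in details:

    # Illustrative soft_round following Agustsson & Theis (2020); not necessarily
    # the exact neuralcompression.functional.soft_round implementation.
    import math

    import torch


    def soft_round(x: torch.Tensor, alpha: float) -> torch.Tensor:
        m = torch.floor(x) + 0.5              # midpoint of the containing bin
        r = x - m                             # offset within the bin, in [-0.5, 0.5)
        # Interpolates between the identity (alpha -> 0) and hard rounding (alpha -> inf).
        return m + 0.5 * torch.tanh(alpha * r) / math.tanh(alpha / 2.0)


    x = torch.tensor([0.2, 0.7, 1.4, 1.9])
    print(soft_round(x, alpha=25.0))          # close to torch.round(x) for large alpha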

    Layers (neuralcompression.layers)

    • AnalysisTransformation2D: applies the 2D analysis transformation over an input signal.
    • ContinuousEntropy: base class for continuous entropy layers.
    • GeneralizedDivisiveNormalization: applies generalized divisive normalization for each channel across a batch of data.
    • HyperAnalysisTransformation2D: applies the 2D hyper analysis transformation over an input signal.
    • HyperSynthesisTransformation2D: applies the 2D hyper synthesis transformation over an input signal.
    • NonNegativeParameterization: the parameter is subjected to an invertible transformation that slows down the learning rate for small values.
    • RateMSEDistortionLoss: rate-distortion loss.
    • SynthesisTransformation2D: applies the 2D synthesis transformation over an input signal.

    Models (neuralcompression.models)

    End-to-end Optimized Image Compression

    End-to-end Optimized Image Compression
    Johannes Ballé, Valero Laparra, Eero P. Simoncelli
    https://arxiv.org/abs/1611.01704
    
    • PriorAutoencoder: base class for implementing prior autoencoder architectures.
    • FactorizedPriorAutoencoder

    High-Fidelity Generative Image Compression

    High-Fidelity Generative Image Compression
    Fabian Mentzer, George Toderici, Michael Tschannen, Eirikur Agustsson
    https://arxiv.org/abs/2006.09965
    
    • HiFiCEncoder
    • HiFiCDiscriminator
    • HiFiCGenerator

    Variational Image Compression with a Scale Hyperprior

    Variational Image Compression with a Scale Hyperprior
    Johannes Ballé, David Minnen, Saurabh Singh, Sung Jin Hwang, Nick Johnston
    https://arxiv.org/abs/1802.01436
    
    • HyperpriorAutoencoder: base class for implementing hyperprior autoencoder architectures.
    • MeanScaleHyperpriorAutoencoder
    • ScaleHyperpriorAutoencoder

    API Changes

    • neuralcompression.functional.hsv2rgb is now neuralcompression.functional.hsv_to_rgb.
    • neuralcompression.functional.learned_perceptual_image_patch_similarity is now neuralcompression.functional.lpips.

    Acknowledgements

    Thank you to the following people for their advice:

  • v0.1.0(Jul 19, 2021)

Owner
Facebook Research