Simple torch.nn.Module implementation of the Alias-Free GAN style filter and resample

Overview

Alias-Free-Torch

Simple torch module implementation of Alias-Free GAN.

This repository includes 1d and 2d low-pass filter and up/downsample modules.

Note: Since this repository is unofficial, the filter and upsample behavior may differ from the official implementation.

Note: The 2d lowpass filter applies sinc instead of the jinc (built from the first-order Bessel function of the first kind) used in the paper.
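
For reference, a minimal sketch of the two kernel profiles this note contrasts, assuming the normalized definitions sinc(x) = sin(pi*x)/(pi*x) and jinc(x) = 2*J1(pi*x)/(pi*x); the function names and the scipy dependency are illustrative assumptions, not this repository's code:

    import math
    import torch
    from scipy.special import j1  # Bessel function of the first kind, order 1

    def sinc(x: torch.Tensor) -> torch.Tensor:
        # normalized sinc: sin(pi*x) / (pi*x), with sinc(0) = 1
        pix = math.pi * x
        return torch.where(x == 0, torch.ones_like(x), torch.sin(pix) / pix)

    def jinc(x: torch.Tensor) -> torch.Tensor:
        # normalized jinc ("sombrero"): 2*J1(pi*x) / (pi*x), with jinc(0) = 1
        pix = math.pi * x
        j1_vals = torch.from_numpy(j1(pix.cpu().numpy())).to(x)
        return torch.where(x == 0, torch.ones_like(x), 2.0 * j1_vals / pix)

The 1d case keeps the sinc profile; an exact radially symmetric 2d low-pass would use the jinc profile instead.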

Requirements

Since torch.kaiser_window and torch.i0 were only added in PyTorch 1.7.0, this repository requires torch>=1.7.0 (a usage sketch follows the requirement below).

  • PyTorch >= 1.7.0
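
As a quick illustration of why these APIs matter, here is a rough sketch of a Kaiser-windowed sinc low-pass kernel built with torch.kaiser_window; the function name, the fixed beta, and the normalization are assumptions for illustration, not necessarily how filter.py constructs its kernel:

    import math
    import torch

    def kaiser_sinc_kernel_1d(cutoff: float, kernel_size: int, beta: float = 12.0) -> torch.Tensor:
        """Kaiser-windowed sinc low-pass kernel; cutoff is in cycles/sample (< 0.5)."""
        window = torch.kaiser_window(kernel_size, periodic=False, beta=beta)
        half = (kernel_size - 1) / 2.0
        time = torch.arange(kernel_size, dtype=torch.float32) - half
        # ideal low-pass impulse response 2*cutoff*sinc(2*cutoff*t), tapered by the Kaiser window
        arg = 2 * math.pi * cutoff * time
        sinc = torch.where(time == 0, torch.ones_like(time), torch.sin(arg) / arg)
        kernel = 2 * cutoff * window * sinc
        return kernel / kernel.sum()  # normalize DC gain to 1

torch.kaiser_window itself is defined in terms of the zeroth-order modified Bessel function, which is what torch.i0 computes; a fixed beta is used here only to keep the sketch short.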

TODO

  • 2d sinc filter
  • 2d resample
  • divide 1d and 2d modules
  • pip packaging

Test results 1d

Filter sine / filter noise: [figures: filtersin, filternoise]
Upsample / downsample: [figures: up2, down10, up256, down100]

Test results 2d

Filter L1 norm sine / filter noise: [figures: filter2dsin, filter2dnoise]
Upsample / downsample: [figures: up2d2, downsample2d2, up2d8, downsample2d4]

Activation

[figure: act]

References

  • Alias-Free GAN
  • adefossez/julius
  • A. V. Oppenheim and R. W. Schafer. Discrete-Time Signal Processing. Pearson, International Edition, 3rd edition, 2010

Acknowledgement

This work was done at MINDsLab Inc.

Thanks to teammates at MINDsLab Inc.

Comments
  • Batched resampling for the new implementation

    Hi, thank you very much for the contribution.

    I think the new implementation of resample.Upsample1d and resample.Downsample1d breaks batched resampling: it uses groups=C without expanding the filter to match that shape. Perhaps the implementation should look like the below (something similar probably applies to 2d):

    Upsample1d.forward()

        # x: [B,C,T]
        def forward(self, x):
            B, C, T = x.shape
            x = F.pad(x, (self.pad, self.pad), mode='reflect')
            # TConv with filter expanded to C with C groups for depthwise op
            x = self.ratio * F.conv_transpose1d(
                x, self.filter.expand(C, -1, -1), stride=self.stride, groups=C)
            pad_left = self.pad * self.stride + (self.kernel_size -
                                                 self.stride) // 2
            pad_right = self.pad * self.stride + (self.kernel_size - self.stride +
                                                  1) // 2
            x = x[..., pad_left:-pad_right]
            return x
    

    LowPassFilter1d.forward()

        #input [B,C,T]
        def forward(self, x):
            B, C, T = x.shape
            if self.padding:
                x = F.pad(x, (self.left_pad, self.right_pad),
                          mode=self.padding_mode)
            # Conv with filter expanded to C with C groups for depthwise op
            out = F.conv1d(x, self.filter.expand(C, -1, -1), stride=self.stride, groups=C) # typo 'groupds' btw
            return out
    

    Could you check the correctness? Thanks again for the implementation!
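
    For context, the key point of the suggested change is the expand-plus-groups pattern: expanding the single kernel to C output channels and setting groups=C makes the convolution depthwise, so each channel of a batched [B, C, T] input is filtered independently. A standalone sketch (shapes and names here are illustrative only):

        import torch
        import torch.nn.functional as F

        B, C, T, kernel_size = 4, 8, 128, 12
        x = torch.randn(B, C, T)               # batched multi-channel signal
        filt = torch.randn(1, 1, kernel_size)  # one shared low-pass kernel, shape [1, 1, K]

        # Depthwise convolution: weight shape [C, 1, K] with groups=C applies the same
        # kernel to every channel without mixing channels across C.
        y = F.conv1d(x, filt.expand(C, -1, -1), stride=1, groups=C)
        print(y.shape)  # torch.Size([4, 8, 117]) = [B, C, T - kernel_size + 1], no padding here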

    opened by L0SG 2
  • torch.speical.i1 typo

    https://github.com/junjun3518/alias-free-torch/blob/f1fddd52fdd068ee475e82ae60c92e1bc24ffe02/src/alias_free_torch/filter.py#L22

    At this line I believe you wanted torch.special.i1.

    opened by torridgristle 2
  • "if self.pad / self.padding" in LowPassFilter2d

    https://github.com/junjun3518/alias-free-torch/blob/258551410ff7bf02e06ece7c597466dc970fe5c7/src/alias_free_torch/filter.py#L165 https://github.com/junjun3518/alias-free-torch/blob/258551410ff7bf02e06ece7c597466dc970fe5c7/src/alias_free_torch/filter.py#L173

    In LowPassFilter2d it looks like if self.pad: should change to if self.padding:, or self.padding = padding should change to self.pad = padding to match LowPassFilter1d.
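
    A tiny sketch of the consistent version (signature sketch only; the real class has many more parameters):

        class LowPassFilter2d:
            def __init__(self, padding: bool = True):
                self.padding = padding   # one consistent name, set once in __init__ ...

            def forward(self, x):
                if self.padding:         # ... and checked under the same name here
                    ...                  # pad x here before filtering
                return x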

    opened by torridgristle 1
  • Padding Bool typo

    https://github.com/junjun3518/alias-free-torch/blob/258551410ff7bf02e06ece7c597466dc970fe5c7/src/alias_free_torch/filter.py#L73

    padding: bool: True, should be padding: bool = True,

    I'm not sure if this causes an error with every version of PyTorch, but it does with PyTorch 1.12.0+cu113 on Python 3.7.13

    opened by torridgristle 1
  • 2D Filter Jinc appears to be wrong

    Here is a plot of the generated 1D sinc filter kernel; the sinc kernel looks right. [figure: 1D sinc kernel]

    Here is a plot of the generated 2D jinc filter kernel; the jinc kernel looks wrong. [figure: 2D jinc kernel]

    I'd expect it to look more like a series of rings or ripples, rather than a donut or torus.

    [figure: FFT of randn noise filtered with the 2D jinc kernel]

    The FFT output for randn noise put through the 2D filter doesn't look right either.

    [figure: change jinc to sinc in 2d filter]

    Changing filter_ = 2 * cutoff * window * jinc(2 * cutoff * time) to filter_ = 2 * cutoff * window * sinc(2 * cutoff * time) in kaiser_jinc_filter2d makes a more familiar kernel.

    [figure: change jinc to sinc in 2d filter, FFT output]

    And the FFT output for randn noise put through this 2D filter looks about how I'd expect.
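
    For reference, a rough sketch of a 2d kernel built the way the suggested change describes, i.e. evaluating a windowed sinc on the radial distance from the kernel center; kaiser_sinc_filter2d, the fixed beta, and the outer-product window are illustrative assumptions and may differ from kaiser_jinc_filter2d in filter.py:

        import math
        import torch

        def kaiser_sinc_filter2d(cutoff: float, kernel_size: int, beta: float = 12.0) -> torch.Tensor:
            half = (kernel_size - 1) / 2.0
            coords = torch.arange(kernel_size, dtype=torch.float32) - half
            # radial distance of every kernel tap from the center
            time = torch.sqrt(coords[None, :] ** 2 + coords[:, None] ** 2)
            # 2d Kaiser window as an outer product of two 1d Kaiser windows
            win1d = torch.kaiser_window(kernel_size, periodic=False, beta=beta)
            window = win1d[None, :] * win1d[:, None]
            arg = 2 * math.pi * cutoff * time
            sinc = torch.where(time == 0, torch.ones_like(time), torch.sin(arg) / arg)
            filter_ = 2 * cutoff * window * sinc   # the suggested line, with sinc in place of jinc
            return filter_ / filter_.sum()         # normalize DC gain to 1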

    opened by torridgristle 3
Releases: v0.0.6
Owner
이준혁(Junhyeok Lee)
Audio/Speech Deep Learning Researcher @mindslab-ai