PyTorch wrappers for using your model in Audacity!

Overview

torchaudacity

This package contains utilities for prepping PyTorch audio models for use in Audacity. More specifically, it provides abstract classes for you to wrap your waveform-to-waveform and waveform-to-labels models (see the Deep Learning for Audacity website to learn more about deep learning models for Audacity).


Table of Contents

  • Contributing Models to Audacity
  • Choosing an Effect Type
  • Model Metadata
  • Example - Waveform-to-Waveform model
  • Example - Exporting a Pretrained Asteroid model

Contributing Models to Audacity

Audacity is equipped with a wrapper framework for deep learning models written in PyTorch. Audacity contains two deep learning tools: Deep Learning Effect and Deep Learning Analyzer.
Deep Learning Effect performs waveform-to-waveform processing, and is useful for audio-in-audio-out tasks (such as source separation, voice conversion, style transfer, amplifier emulation, etc.), while Deep Learning Analyzer performs waveform-to-labels processing, and is useful for annotation tasks (such as sound event detection, musical instrument recognition, automatic speech recognition, etc.). torchaudacity contains two abstract classes for serializing these two types of models: WaveformToWaveform and WaveformToLabels, respectively.

Choosing an Effect Type

Waveform to Waveform models

Waveform-to-waveform models receive a single multichannel audio track as input, and may write to a variable number of new audio tracks as output.

Example models for waveform-to-waveform effects include source separation, neural upsampling, guitar amplifier emulation, generative models, etc. Output tensors for waveform-to-waveform models must be multichannel waveform tensors with shape (num_output_channels, num_samples). For every audio waveform in the output tensor, a new audio track is created in the Audacity project.
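For example, a two-source separation model would return a tensor with one row per output track. Here is a shape-only sketch (the channel count and length are placeholders):

import torch

num_output_channels, num_samples = 2, 48000   # e.g. vocals + accompaniment, 1 second at 48 kHz
output = torch.zeros(num_output_channels, num_samples)
# each row of `output` becomes a new track in the Audacity project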

Waveform to Labels models

Waveform-to-labels models receive a single multichannel audio track as input, and write to a single output label track. The waveform-to-labels effect can be used for many audio analysis applications, such as voice activity detection, sound event detection, musical instrument recognition, automatic speech recognition, etc. The output for waveform-to-labels models must be a tuple of two tensors. The first tensor contains the class probabilities for each label present in the waveform, with shape (num_timesteps, num_classes). The second tensor must contain timestamps with start and stop times for each label, with shape (num_timesteps, 2).
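Concretely, a valid output tuple looks like the following sketch (the values are placeholders, and the timestamps are assumed to be in seconds):

import torch

num_timesteps, num_classes = 4, 3
probabilities = torch.rand(num_timesteps, num_classes)  # shape (num_timesteps, num_classes)
timestamps = torch.tensor([[0.0, 0.5],
                           [0.5, 1.0],
                           [1.0, 1.5],
                           [1.5, 2.0]])                  # shape (num_timesteps, 2)
output = (probabilities, timestamps)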

Model Metadata

Certain details about the model, such as its sample rate, tool type (e.g. waveform-to-waveform or waveform-to-labels), list of labels, etc. must be provided by the model contributor in a separate metadata.json file. In order to help users choose the correct model for their required task, model contributors are asked to provide a short and long description of the model, the target domain of the model (e.g. speech, music, environmental, etc.), as well as a list of tags or keywords as part of the metadata. For waveform-to-labels models, the model contributor may include an optional confidence threshold, where predictions with a probability lower than the confidence threshold are labeled as "uncertain".
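To illustrate the thresholding behavior (a sketch only, not Audacity's actual implementation), a prediction is labeled "uncertain" when its highest class probability falls below the threshold:

import torch

confidence_threshold = 0.5                 # hypothetical value from the metadata
labels = ['speech', 'music']               # hypothetical labels from the metadata
probs = torch.tensor([[0.90, 0.10],
                      [0.45, 0.40]])       # (num_timesteps, num_classes)

confidences, class_indices = probs.max(dim=1)
names = [labels[i] if c >= confidence_threshold else 'uncertain'
         for i, c in zip(class_indices.tolist(), confidences.tolist())]
# names == ['speech', 'uncertain']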

Metadata Spec

required fields:

  • sample_rate (int)
    • range (0, 396000)
    • Model sample rate. Input tracks will be resampled to this value.
  • domains (List[str])
    • List of data domains for the model. The list should contain any of the following strings (any others will be ignored): ["music", "speech", "environmental", "other"]
  • short_description (str)
    • max 60 chars
    • Short description of the model. Should contain a brief message about the model's purpose, e.g. "Use me for separating vocals from the background!".
  • long_description (str)
    • max 280 chars
    • long description of the model. Shown in the detailed view of the model UI.
  • tags (List[str])
    • list of tags (to be shown in the detailed view)
    • each tag should be 15 characters max
    • max 5 tags per model.
  • labels (List[str])
    • output labels for the model. Depending on the effect type, this field means different things
    • waveform-to-waveform
      • name of each output source (e.g. drums, bass, vocal). To create the track name for each output source, each one of the labels will be appended to the mixture track's name.
    • waveform-to-labels:
      • labeler models should output a list of class probabilities with shape (n_timesteps, n_class) and a list of start/stop timestamps for each label with shape (n_timesteps, 2). The labeler effect will create new labels by taking the argmax of each timestep's class probabilities and indexing into the metadata's labels.
  • effect_type (str)
    • Target effect for this model. Must be one of ["waveform-to-waveform", "waveform-to-labels"].
  • multichannel (bool)
    • If multichannel is set to true, stereo tracks are passed to the model as multichannel audio tensors, with shape (2, n). Note that this means the input could be either a mono track with shape (1, n) or a stereo track with shape (2, n).
    • If multichannel is set to false, stereo tracks are downmixed, meaning that the input audio tensor will always be shape (1, n).
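For comparison with the waveform-to-waveform example later in this README, here is a sketch of what the metadata for a hypothetical waveform-to-labels model might look like (all field values below are made up for illustration):

metadata = {
    'sample_rate': 16000,
    'domains': ['speech'],
    'short_description': 'Tags speech regions in the input audio.',
    'long_description': 'A hypothetical voice activity detector that labels each region of the input as speech or silence.',
    'tags': ['voice activity'],
    'labels': ['speech', 'silence'],
    'effect_type': 'waveform-to-labels',
    'multichannel': False,
}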

Example - Waveform-to-Waveform model

Here's a minimal example for a model that simply boosts volume by multiplying the incoming audio by a factor of 2.

We can sum up the whole process in 4 steps:

  1. Developing your model
  2. Wrapping your model using torchaudacity
  3. Creating a metadata document
  4. Exporting to HuggingFace

Developing your model

First, we create our model. There are no constraints on the internal model architecture, as long as you can use torch.jit.script or torch.jit.trace to serialize it, and it meets the input-output constraints specified for waveform-to-waveform and waveform-to-labels models.

import torch
import torch.nn as nn

class MyVolumeModel(nn.Module):

    def __init__(self):
        super().__init__()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # do the neural net magic!
        x = x * 2

        return x

Making sure your model is compatible with torchscript

PyTorch makes it really easy to deploy your Python models in C++ by using torchscript, an intermediate representation format for torch models that can be called in C++. Many of Python's built-in functions are supported by torchscript. However, not all Python operations are supported by the torchscript environment, meaning that you are only allowed to use a subset of Python operations in your model code. See the torch.jit docs to learn more about writing torchscript-compatible code.

If your model computes spectrograms (or requires any kind of preprocessing/postprocessing), make sure those operations are compatible with torchscript, like torchaudio's operation set.
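For example, torchaudio's transforms are written to be torchscript-compatible, so a preprocessing module like the sketch below should script without errors (the sample rate and epsilon here are arbitrary):

import torch
import torchaudio

class LogMelSpec(torch.nn.Module):
    def __init__(self, sample_rate: int = 48000):
        super().__init__()
        self.melspec = torchaudio.transforms.MelSpectrogram(sample_rate=sample_rate)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # add a small epsilon to avoid taking log(0)
        return torch.log(self.melspec(x) + 1e-7)

# torch.jit.script will raise an error if any operation is not torchscript-compatible
scripted = torch.jit.script(LogMelSpec())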


Wrapping your model using torchaudacity

Now, we create a wrapper class for our model. Because our model returns an audio waveform as output, we'll use WaveformToWaveform as our parent class. For both WaveformToWaveform and WaveformToLabels, we need to implement the do_forward_pass method with our processing code. See the docstrings for more details.

import torch
from torchaudacity import WaveformToWaveform

class MyVolumeModelWrapper(WaveformToWaveform):

    def __init__(self, model):
        super().__init__()  # initialize the parent module before registering submodules
        model.eval()
        self.model = model
    
    def do_forward_pass(self, x: torch.Tensor) -> torch.Tensor:
        
        # do any preprocessing here! 
        # expect x to be a waveform tensor with shape (n_channels, n_samples)

        output = self.model(x)

        # do any postprocessing here!
        # the return value should be a multichannel waveform tensor with shape (n_channels, n_samples)
    
        return output
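Before serializing, it's a good idea to sanity-check the wrapper on a dummy input (a quick sketch; the channel count and length below are arbitrary):

import torch

model = MyVolumeModel()
wrapper = MyVolumeModelWrapper(model)

x = torch.randn(1, 48000)     # a mono waveform, 1 second at 48 kHz
y = wrapper.do_forward_pass(x)
assert y.shape == x.shape     # same (n_channels, n_samples) layout, doubled amplitude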

Creating a metadata document

Audacity models need a metadata file. See the metadata spec to learn about the required fields.

metadata = {
    'sample_rate': 48000, 
    'domains': ['music', 'speech', 'environmental'],
    'short_description': 'Use me to boost volume by 6dB :).',
    'long_description':  'This model boosts the volume of the incoming audio by a factor of 2 (about 6dB). Works on speech, music, and environmental sounds.',
    'tags': ['volume boost'],
    'labels': ['boosted'],
    'effect_type': 'waveform-to-waveform',
    'multichannel': False,
}

All set! We can now proceed to serialize the model to torchscript and save the model, along with its metadata.

from pathlib import Path

import torch
from torchaudacity import save_model

# create a root dir for our model
root = Path('booster-net')
root.mkdir(exist_ok=True, parents=True)

# get our model
model = MyVolumeModel()

# wrap it
wrapper = MyVolumeModelWrapper(model)

# serialize it
# an alternative is to use torch.jit.trace
serialized_model = torch.jit.script(wrapper)

# save!
save_model(serialized_model, metadata, root)

Exporting to HuggingFace

You should now have a directory structure that looks like this:

/booster-net/
/booster-net/model.pt
/booster-net/metadata.json
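To double-check the export, you can reload the serialized model and metadata from the files above (a quick sanity check, assuming save_model wrote both files):

import json
import torch

loaded = torch.jit.load(str(root / 'model.pt'))
meta = json.loads((root / 'metadata.json').read_text())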

This will be the repository for your Audacity model. Make sure to add a README with the audacity tag in the YAML metadata, so it shows up on the explore tab of Audacity's Deep Learning Tools.

Create a README.md inside booster-net/, and add the following header:

In README.md:

---
tags: audacity
---

Awesome! It's time to push to HuggingFace. See their documentation for adding a model to the HuggingFace model hub.

Example - Exporting a Pretrained Asteroid model

See this example notebook, where we serialize a pretrained ConvTasNet model for speech separation using the Asteroid source separation library.

