Overview

Torch Time Stretch

Time-stretch audio clips quickly with PyTorch (CUDA supported)! Additional utilities for searching efficient transformations are included.

View on PyPI / View Documentation


About

This package includes two main features:

  • Time-stretch audio clips quickly using PyTorch (with CUDA support)
  • Calculate efficient time-stretch targets (useful for augmentation, where speed is more important than precise time-stretches)

Also check out torch-pitch-shift, a sister project for pitch-shifting.

Installation

pip install torch-time-stretch

Usage

Example

Check out example.py to see torch-time-stretch in action!
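
If you would rather skim the API first, here is a minimal sketch assembled from the calls that appear in the issue threads below: time_stretch takes a (batch, channels, samples) tensor, a Fraction stretch ratio, and the sample rate. The input file name is illustrative, and get_fast_stretches is assumed to be the helper behind the "efficient time-stretch targets" feature mentioned above.

import torch
from fractions import Fraction
from scipy.io import wavfile
from torch_time_stretch import time_stretch, get_fast_stretches

# Load a clip and shape it as (batch, channels, samples), which time_stretch expects.
SAMPLE_RATE, audio = wavfile.read("./input.wav")  # illustrative path
dtype = audio.dtype
sample = torch.tensor(audio, dtype=torch.float32)
if sample.ndim == 1:            # mono clip: add a channel axis
    sample = sample[None, :]
else:                           # scipy returns (samples, channels); move channels first
    sample = sample.T
sample = sample[None, ...]      # add a batch axis -> (1, channels, samples)

# Fraction(1, 2) roughly halves the clip length; Fraction(2, 1) roughly doubles it
# (see the ratio discussion in the comments below).
up = time_stretch(sample, Fraction(1, 2), SAMPLE_RATE)
down = time_stretch(sample, Fraction(2, 1), SAMPLE_RATE)

# Assumed helper for searching efficient stretch targets: ratios that are cheap to compute.
fast_ratios = get_fast_stretches(SAMPLE_RATE)

# Write one result back out, converting to the original dtype.
wavfile.write("./stretched_down_2.wav", SAMPLE_RATE, down.cpu()[0].numpy().T.astype(dtype))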

Documentation

See the documentation page for detailed API documentation!

Contributing

Please feel free to submit issues or pull requests!

Comments
  • RuntimeError: The size of tensor a (40264) must match the size of tensor b (173) at non-singleton dimension 1

    I use the same code as in https://github.com/KentoNishi/torch-time-stretch/blob/master/example.py but get the error below:

    (librosa) ➜  torch-time-stretch git:(master) ✗ python example.py 
    Traceback (most recent call last):
      File "/home/jackie/code/github/torch-time-stretch/example.py", line 48, in <module>
        test_time_stretch_2_up()
      File "/home/jackie/code/github/torch-time-stretch/example.py", line 20, in test_time_stretch_2_up
        up = time_stretch(sample, Fraction(1, 2), SAMPLE_RATE)
      File "/home/jackie/code/github/torch-time-stretch/torch_time_stretch/main.py", line 116, in time_stretch
        output = stretcher(output)
      File "/home/jackie/anaconda3/envs/librosa/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
        return forward_call(*input, **kwargs)
      File "/home/jackie/anaconda3/envs/librosa/lib/python3.9/site-packages/torchaudio/transforms/_transforms.py", line 1059, in forward
        return F.phase_vocoder(complex_specgrams, rate, self.phase_advance)
      File "/home/jackie/anaconda3/envs/librosa/lib/python3.9/site-packages/torchaudio/functional/functional.py", line 743, in phase_vocoder
        phase = angle_1 - angle_0 - phase_advance
    RuntimeError: The size of tensor a (40264) must match the size of tensor b (173) at non-singleton dimension 1
    
    opened by Jackiexiao 4
  • Example ratios are reversed.

    Love it, thanks for making this! Tiny thing: in the example, test_time_stretch_2_up should use 1/2 as the ratio, not 2/1, and test_time_stretch_2_down should use 2/1 (it stretches the clip length by 2x).

    opened by hdemmer 1
  • Does it work with mono-channel wav files?

    My audio clip is mono 16 kHz audio, [ 0 0 0 ... 63 100 127], so it throws:

    ---> 15 down = time_stretch(sample, Fraction(2, 1), SAMPLE_RATE)
         16 wavfile.write(
         17     "./stretched_down_2.wav",
         18     SAMPLE_RATE,
         19     np.swapaxes(down.cpu()[0].numpy(), 0, 0).astype(dtype),
         20 )
    
    File /opt/conda/envs/classify-audio/lib/python3.9/site-packages/torch_time_stretch/main.py:108, in time_stretch(input, stretch, sample_rate, n_fft, hop_length)
        106 if not hop_length:
        107     hop_length = n_fft // 32
    --> 108 batch_size, channels, samples = input.shape
        109 # resampler = T.Resample(sample_rate, int(sample_rate / stretch)).to(input.device)
        110 output = input
    
    ValueError: not enough values to unpack (expected 3, got 2)
    
    opened by ti3x 0
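
For the mono-input error above, the main.py snippet in the traceback shows that time_stretch unpacks its input as (batch_size, channels, samples), so a 1-D mono clip needs two leading axes added before the call. A minimal sketch (not from the thread; the file name is illustrative):

import torch
from fractions import Fraction
from scipy.io import wavfile
from torch_time_stretch import time_stretch

SAMPLE_RATE, mono = wavfile.read("./mono_16khz.wav")             # mono: shape (samples,)
sample = torch.tensor(mono, dtype=torch.float32)[None, None, :]  # -> (1, 1, samples)
down = time_stretch(sample, Fraction(2, 1), SAMPLE_RATE)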