Implicit neural differentiable FM synthesizer

Overview

The purpose of this project is to emulate arbitrary sounds with FM synthesis, where the parameters of the FM synth are learned by optimization.

This idea was conceived and implemented during the Neural Audio Synthesis Hackathon 2021. Thanks to Ben Hayes for organizing the workshop and to Mia Chiquier for pointing me towards SIREN!

Architecture

Please refer to FMNet and Envelope in synth.py for the actual architectural details.

This model takes as input a list of time steps t_1, t_2, ..., sampled at some sample rate, and outputs an audio signal at the same sample rate.

Similar to SIREN, it feeds the input time step values through sinusoidal activation functions initialized with specific weights. In this work we initialize the weights to the frequencies of the 127 musical pitches from C#-1 to G9 (MIDI notes 1 through 127). We call this layer the "carrier".
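For concreteness, the initial frequencies can be derived from standard MIDI tuning. A minimal sketch (the actual initialization lives in synth.py; midi_to_hz here is purely illustrative):

```python
import torch

def midi_to_hz(midi_note):
    # Standard MIDI tuning: A4 (MIDI note 69) = 440 Hz, 12 semitones per octave.
    return 440.0 * 2.0 ** ((midi_note - 69) / 12.0)

# MIDI notes 1..127 correspond to the pitches C#-1 through G9.
initial_freqs = midi_to_hz(torch.arange(1, 128, dtype=torch.float32))
```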

We only use a single sinusoidal layer, but we modulate the frequencies of this layer with the summed output of a separate cosine layer with 127 cosine nodes, also initialized to the pitches C#-1 to G9. We refer to this layer as the "modulator".

Each carrier and modulator node has both a frequency and an amplitude component. We learn a global phase in the range (0, 2*pi) that is shared among all carrier and modulator frequencies. This is effectively a global "bias" term to the sinusoidal activation functions.
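Put together, the two layers form a classic two-operator FM structure (implemented, as is common, as phase modulation). The sketch below only illustrates that structure, ignoring the amplitude normalization and envelopes described next; the real model is FMNet in synth.py:

```python
import math
import torch

def fm_forward(t, carrier_freq, carrier_amp, mod_freq, mod_amp, phase):
    # t: (num_samples,) time steps in seconds
    # carrier_freq, carrier_amp: (127,) carrier frequencies (Hz) and amplitudes
    # mod_freq, mod_amp:         (127,) modulator frequencies (Hz) and amplitudes
    # phase: scalar in (0, 2*pi), shared by all carrier and modulator nodes
    t = t[:, None]  # (num_samples, 1) so the 127 nodes broadcast along the last dim

    # Modulator layer: a sum over 127 cosine nodes.
    modulation = (mod_amp * torch.cos(2 * math.pi * mod_freq * t + phase)).sum(-1, keepdim=True)

    # Carrier layer: 127 sine nodes whose phase is shifted by the modulator output.
    carriers = torch.sin(2 * math.pi * carrier_freq * t + modulation + phase)
    return (carrier_amp * carriers).sum(-1)  # (num_samples,) audio signal
```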

The goal of this project is to provide a differentiable emulation of a simple FM synthesizer, which would use only a single carrier frequency and a single modulator frequency. To push the network in that direction, we take a softmax over the amplitudes of both the carrier and the modulator layer.

In addition to carrier and modulator amplitudes we also learn a separate amplitude envelope curve for each carrier and modulator node. The envelope is modeled by the bell-shaped curve 1 / sqrt(1 + (t * slope + offset)^2), with a learned slope and offset per node.
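As a rough sketch of how the amplitude handling from the previous two paragraphs can combine, assuming the bell-curve form given above (the actual implementation is the Envelope module in synth.py):

```python
import torch

def node_amplitudes(amp_logits, env_slope, env_offset, t):
    # amp_logits, env_slope, env_offset: (127,) learned per-node parameters
    # t: (num_samples, 1) time steps
    amps = torch.softmax(amp_logits, dim=0)  # amplitudes sum to 1 across the 127 nodes
    envelope = 1.0 / torch.sqrt(1.0 + (t * env_slope + env_offset) ** 2)  # bell-shaped curve per node
    return amps * envelope  # (num_samples, 127) time-varying amplitude per node
```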

Optimization

This model learns an implicit neural representation of a target audio signal. This means that the network is optimized from scratch for each target signal.

We use the L2 loss between the generated audio signal and the target audio signal as the main loss function.

We also provide optional additional loss terms that maximize the "spikiness" of the carrier and modulator amplitude vectors, in order to push the network towards picking a single carrier and a single modulator frequency. These terms are optional because the network sometimes learns more interesting sounds when several carriers and modulators are active.
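The exact spikiness terms are defined in the repository; purely as an illustrative stand-in, one simple way to reward a near-one-hot amplitude vector is to penalize its entropy:

```python
import torch

def spikiness_penalty(amps, eps=1e-8):
    # amps: (127,) softmaxed amplitude vector; lower entropy means a spikier distribution.
    return -(amps * torch.log(amps + eps)).sum()

# loss = reconstruction_loss + spikiness_weight * (spikiness_penalty(carrier_amps)
#                                                  + spikiness_penalty(modulator_amps))
```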

We use the Adam optimizer with a learning rate of 0.01.
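A condensed sketch of the optimization loop under these settings (model, t, target and num_steps are placeholders, not the repository's API):

```python
import torch

# model: a torch.nn.Module mapping time steps to audio samples (e.g. FMNet from synth.py)
# t: (num_samples,) time steps of the target clip, target: (num_samples,) target audio
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)

for step in range(num_steps):
    optimizer.zero_grad()
    pred = model(t)
    loss = torch.mean((pred - target) ** 2)  # L2 reconstruction loss
    # Optionally add the spikiness terms on the carrier/modulator amplitude vectors here.
    loss.backward()
    optimizer.step()
```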

Inference

Since this is an implicit neural representation, we can generate outputs at arbitrary sample rates and resolutions. This allows for seamless time stretching and upscaling.
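For example, rendering at a different sample rate or with time stretching only changes how the input time steps are generated. A sketch, assuming a trained model as above:

```python
import torch

def render(model, duration_seconds, sample_rate=48000, stretch=1.0):
    # stretch > 1 spreads the same signal over a proportionally longer output (time stretching).
    num_samples = int(duration_seconds * sample_rate * stretch)
    t = torch.arange(num_samples, dtype=torch.float32) / (sample_rate * stretch)
    with torch.no_grad():
        return model(t)  # audio rendered at the requested sample rate
```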

The inference code also supports "stereo detuning" to create musically interesting sounds.
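The details live in the inference code; as a rough sketch of the idea, the two channels can be rendered with slightly scaled time axes, which detunes every frequency by the same ratio:

```python
import torch

def render_stereo(model, duration_seconds, sample_rate=48000, detune_cents=10.0):
    num_samples = int(duration_seconds * sample_rate)
    t = torch.arange(num_samples, dtype=torch.float32) / sample_rate
    ratio = 2.0 ** (detune_cents / 1200.0)  # pitch ratio for the given detune in cents
    with torch.no_grad():
        left = model(t)
        right = model(t * ratio)  # slightly sharper copy for the right channel
    return torch.stack([left, right])  # (2, num_samples) stereo signal
```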

Owner

Andreas Jansson