PyTorch implementation of MSBG hearing loss model and MBSTOI intelligibility metric

Overview

This repository contains a PyTorch implementation of the MSBG hearing loss model and the MBSTOI intelligibility metric. The models are differentiable and can be used as a loss function to train a neural network. Both models follow the Python implementations of MSBG and MBSTOI provided by the organizers of the Clarity Enhancement Challenge. Please see the Clarity challenge repository for more information about the models.

Please note that the differentiable models are approximations of the original models and are intended to be used to train neural networks, not to give exactly the same outputs as the original models.

Requirements and installation

The models rely on parts of the original MSBG and MBSTOI implementations. First, download the Clarity challenge repository and set its location as CLARITY_ROOT. Then install the necessary requirements:

pip install -r requirements.txt
pushd .
cd $CLARITY_ROOT/projects/MSBG/packages/matlab_mldivide
python setup.py install
popd

Additionally, set the paths to the Clarity repository and to this repository in path.sh, and source the path.sh script before using the provided modules.

. path.sh

Tests and example script

The tests directory contains scripts that test the correspondence of the differentiable modules with their original implementations. To run the tests, you need the Clarity data, which can be obtained from the Clarity challenge repository. Please set the paths to the data in the scripts.

MSBG test

The hearing loss tests compare the outputs of functions from the original implementation with those of the differentiable version. The output shows the mean differences of the output signals:

Test measure_rms, mean difference 9.629646580133766e-09
Test src_to_cochlea_filt forward, mean difference 9.830486283616455e-16
Test src_to_cochlea_filt backward, mean difference 6.900756131702976e-15
Test smear, mean difference 0.00019685214410863303
Test gammatone_filterbank, mean difference 5.49958965492409e-07
Test compute_envelope, mean difference 4.379759604381869e-06
Test recruitment, mean difference 3.1055169855373764e-12
Test cochlea, mean difference 2.5698933453410134e-06
Test hearing_loss, mean difference 2.2326804706160673e-06
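
For reference, each of these checks follows the same pattern: the original NumPy function and its differentiable PyTorch counterpart process the same signal, and the mean absolute difference between their outputs is reported. The sketch below only illustrates this pattern; the function handles passed in are hypothetical placeholders, not the actual test code.

# Sketch of the comparison pattern used by the tests. The original NumPy function and
# its differentiable PyTorch counterpart receive the same signal, and the mean absolute
# difference of their outputs is printed. The function handles are placeholders.
import numpy as np
import torch

def report_mean_difference(name, original_fn, torch_fn, signal):
    ref = original_fn(signal)                                   # original NumPy output
    out = torch_fn(torch.from_numpy(signal)).detach().numpy()   # differentiable output
    print(f"Test {name}, mean difference {np.mean(np.abs(ref - out))}")

# Example usage with hypothetical function handles:
# signal = np.random.randn(44100).astype(np.float32)
# report_mean_difference("hearing_loss", original_ear.process, torch_ear, signal)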

MBSTOI test

The intelligibility metric test compares the MBSTOI values obtained with the original and the differentiable model over the development set of the Clarity challenge. The following graph shows the comparison.

[Figure: Correspondence of the MBSTOI metrics.]
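Such a comparison can be produced by scoring every development-set scene with both versions of the metric and plotting the paired values. The sketch below assumes hypothetical callables for the two MBSTOI implementations and for the binaural signals of each scene; it is not the actual evaluation script.

# Sketch of the dev-set comparison: score each scene with the original MBSTOI and the
# differentiable version, then plot the pairs. The MBSTOI callables and the scene data
# are placeholders; MBSTOI is binaural, so each scene provides left/right clean and
# left/right processed signals.
import matplotlib.pyplot as plt
import torch

def compare_over_dev_set(scenes, original_mbstoi, torch_mbstoi):
    """scenes: iterable of (clean_l, clean_r, proc_l, proc_r) NumPy arrays."""
    original, differentiable = [], []
    for clean_l, clean_r, proc_l, proc_r in scenes:
        original.append(original_mbstoi(clean_l, clean_r, proc_l, proc_r))
        with torch.no_grad():
            differentiable.append(float(torch_mbstoi(
                torch.from_numpy(clean_l), torch.from_numpy(clean_r),
                torch.from_numpy(proc_l), torch.from_numpy(proc_r))))
    plt.scatter(original, differentiable)
    plt.xlabel("Original MBSTOI")
    plt.ylabel("Differentiable MBSTOI")
    plt.savefig("mbstoi_correspondence.png")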

Example script

The script example.py shows how to use the provided modules as a loss function for training a neural network. In the script, we use a small model and overfit on one example. The decreasing loss confirms that the provided modules are differentiable.

[Figure: Loss function with MSBG and MBSTOI loss.]
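
A minimal sketch of the overfitting setup is given below, assuming a placeholder perceptual loss standing in for the MSBG/MBSTOI-based loss provided by this repository; see example.py for the actual interface.

# Sketch of the training setup: a deliberately small model is overfitted on a single
# example with a differentiable loss. The perceptual loss is only indicated in comments,
# since its exact name and signature are given by example.py; a simple L1 term stands in
# here so the sketch runs on its own.
import torch

model = torch.nn.Sequential(             # a tiny stand-in enhancement model
    torch.nn.Conv1d(1, 16, 9, padding=4),
    torch.nn.ReLU(),
    torch.nn.Conv1d(16, 1, 9, padding=4),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

noisy = torch.randn(1, 1, 44100)         # single training example (placeholder data)
clean = torch.randn(1, 1, 44100)

for step in range(200):
    optimizer.zero_grad()
    enhanced = model(noisy)
    # With the differentiable MSBG/MBSTOI modules, the loss would be computed here from
    # (enhanced, clean), turning higher intelligibility into lower loss.
    loss = torch.nn.functional.l1_loss(enhanced, clean)   # stand-in loss
    loss.backward()                      # gradients flow through the differentiable loss
    optimizer.step()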

Citation

If you use this work, please cite:

@inproceedings{Zmolikova2021BUT,
  author    = {Zmolikova, Katerina and \v{C}ernock\'{y}, Jan "Honza"},
  title     = {{BUT system for the first Clarity enhancement challenge}},
  year      = {2021},
  booktitle = {The Clarity Workshop on Machine Learning Challenges for Hearing Aids (Clarity-2021)},
}