Simplified diarization pipeline using some pretrained models - audio file to diarized segments in a few lines of code

Overview

simple_diarizer

Open In Colab

Simplified diarization pipeline using some pretrained models.

Made to be as simple as possible to go from an input audio file to diarized segments.

import soundfile as sf
import matplotlib.pyplot as plt

from simple_diarizer.diarizer import Diarizer
from simple_diarizer.utils import combined_waveplot

diar = Diarizer(
    embed_model='xvec',   # 'xvec' and 'ecapa' supported
    cluster_method='sc'   # 'ahc' and 'sc' supported
)

segments = diar.diarize(WAV_FILE, num_speakers=NUM_SPEAKERS)

signal, fs = sf.read(WAV_FILE)
combined_waveplot(signal, fs, segments)
plt.show()
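
The returned segments can be inspected directly. A minimal sketch, assuming each segment is a dict with start, end, and label fields (the plotting utilities suggest this shape, but the exact schema is an assumption):

# Print one line per diarized segment: start/end time in seconds and the
# speaker label assigned by clustering.
for seg in segments:
    print(f"{seg['start']:.1f}s - {seg['end']:.1f}s : speaker {seg['label']}")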

Source Video

"Some Quick Advice from Barack Obama!"

[YouTube thumbnail]

Pre-trained Models

The following pretrained models are used (both fetched from the SpeechBrain model hub; these are the two sources visible in the tracebacks further down):

• speechbrain/spkrec-xvect-voxceleb: x-vector speaker embedding model ('xvec')
• speechbrain/spkrec-ecapa-voxceleb: ECAPA-TDNN speaker embedding model ('ecapa')
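
For reference, a minimal sketch of how one of these models is fetched through SpeechBrain, mirroring the EncoderClassifier.from_hparams call visible in the tracebacks below (the savedir is the relative path shown there):

from speechbrain.pretrained import EncoderClassifier

# Downloads (or reuses a cached copy of) the x-vector speaker-embedding
# model; checkpoints are stored under savedir, relative to the working directory.
embed_model = EncoderClassifier.from_hparams(
    source="speechbrain/spkrec-xvect-voxceleb",
    savedir="pretrained_models/spkrec-xvect-voxceleb",
)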

Demo

Open In Colab

The demo can be tried via the Colab link above: it will attempt to diarize any input YouTube URL, and will use YouTube's auto-generated transcriptions to produce a speaker-labelled transcript.

Hopefully this can be of use as a free, basic tool for producing a diarized transcript of a video or audio recording of interest.

Other References

Planned Features

• Add to PyPI (make pip installable)
• requirements.txt

Comments
  • WIP - Make an installable package

    Description:

    • Include requirements.txt.
    • Add setup.* files to build a package.
    • Create a folder simple_diarizer to store the source code.
    • Create a GitHub Workflow to publish the package.

    How to test:

    • Run command pip install .
    • Outside the project folder, start python and run from simple_diarizer import diarizer

    Notes:

    • Cannot use python 3.10.x yet

    Source code to test:

    from simple_diarizer.utils import (convert_wavfile, download_youtube_wav)
    
    from simple_diarizer.diarizer import Diarizer
    import tempfile
    
    YOUTUBE_ID = "HyKmkLEtQbs"
    
    with tempfile.TemporaryDirectory() as outdir:
        yt_file = download_youtube_wav(YOUTUBE_ID, outdir)
    
        wav_file = convert_wavfile(yt_file, f"{outdir}/{YOUTUBE_ID}_converted.wav")
    
        print(f"wav file: {wav_file}")
    
        diar = Diarizer(
            embed_model='ecapa', # supported types: ['xvec', 'ecapa']
            cluster_method='sc', # supported types: ['ahc', 'sc']
            window=1.5, # size of window to extract embeddings (in seconds)
            period=0.75 # hop of window (in seconds)
        )
    
        NUM_SPEAKERS = 2
    
        segments = diar.diarize(wav_file, 
                                num_speakers=NUM_SPEAKERS,
                                outfile=f"{outdir}/{YOUTUBE_ID}.rttm")
    
        print(segments)     
    
    opened by johnidm 16
  • "[Errno 30] Read-only file system: 'pretrained_models'"

    I am using macOS and I am getting the error "[Errno 30] Read-only file system: 'pretrained_models'". From what I can tell, the pretrained models are fetched if you do not already have them.

    However, the save location resolves to the root directory, which is read-only. I believe the target directory is "./pretrained_model_checkpoints".

    Is there another location that can be used?

    PythonKit/Python.swift:706: Fatal error: 'try!' expression unexpectedly raised an error: Python exception: [Errno 30] Read-only file system: 'pretrained_models'
    Traceback:
      File "/Users/wedwards/Documents/Development/A_PythonKit_Test/A_PythonKit_Test/Simple Diarizer.py", line 42, in <module>
        diar = Diarizer(
      File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/simple_diarizer/diarizer.py", line 48, in __init__
        self.embed_model = EncoderClassifier.from_hparams(
      File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/speechbrain/pretrained/interfaces.py", line 342, in from_hparams
        hparams_local_path = fetch(
      File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/speechbrain/pretrained/fetching.py", line 86, in fetch
        savedir.mkdir(parents=True, exist_ok=True)
      File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/pathlib.py", line 1179, in mkdir
        self.parent.mkdir(parents=True, exist_ok=True)
      File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/pathlib.py", line 1175, in mkdir
        self._accessor.mkdir(self, mode)
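
    Since the traceback shows SpeechBrain creating a relative savedir ("pretrained_models/spkrec-xvect-voxceleb"), one possible workaround, offered as an untested sketch, is to change into a writable working directory before constructing the Diarizer, so the relative checkpoint path resolves somewhere writable:

    import os
    from pathlib import Path

    # Assumption: the checkpoint directory is created relative to the current
    # working directory, so run from somewhere we can write to.
    workdir = Path.home() / ".cache" / "simple_diarizer"
    workdir.mkdir(parents=True, exist_ok=True)
    os.chdir(workdir)

    from simple_diarizer.diarizer import Diarizer

    diar = Diarizer(embed_model='xvec', cluster_method='sc')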

    opened by MrEdwards007 5
  • Latest Python and packages

    The current release prevents use of Python 3.10 and requires specific versions of Beautiful Soup and PyTube.

    I've forked the repo to overcome these version limitations and it's working for me. I haven't made a pull request, however, as your repo doesn't have tests and I don't know whether there is a use case which would be broken by my changes.

    Can you please remove these version limitations if they're not needed?

    Thanks for the repo - it's effective and much easier to use than SpeechBrain.

    opened by andrewmackie 3
  • takes 1 positional argument but 2 were given

    Running the demo on Google Colab, I am getting the following error. Any idea how to resolve this?

    File "/root/anaconda3/envs/simple/lib/python3.8/site-packages/speechbrain/pretrained/fetching.py", line 116, in fetch fetched_file = huggingface_hub.cached_download(url, use_auth_token) TypeError: cached_download() takes 1 positional argument but 2 were given

    opened by SanaullahOfficial 2
  • AttributeError when running Diarizer in simple_diarizer.diarizer

    Hi there!

    When running the following code in Python 3.7, in a fresh conda environment on Ubuntu 22.04,

    from simple_diarizer.diarizer import Diarizer
    
    diar = Diarizer(
                        embed_model='xvec', # 'xvec' and 'ecapa' supported
                        cluster_method='sc' # 'ahc' and 'sc' supported
                    )
    

    I get the following error:

    <ipython-input-3-286690ce0195> in <module>
          1 diar = Diarizer(
          2                     embed_model='xvec', # 'xvec' and 'ecapa' suported
    ----> 3                     cluster_method='sc' # 'ahc' and 'sc' supported
          4                 )
    
    ~/anaconda3/envs/test/lib/python3.7/site-packages/simple_diarizer/diarizer.py in __init__(self, embed_model, cluster_method, window, period)
         44             self.embed_model = EncoderClassifier.from_hparams(source="speechbrain/spkrec-xvect-voxceleb",
         45                                                               savedir="pretrained_models/spkrec-xvect-voxceleb",
    ---> 46                                                               run_opts=self.run_opts)
         47         if embed_model == 'ecapa':
         48             self.embed_model = EncoderClassifier.from_hparams(source="speechbrain/spkrec-ecapa-voxceleb",
    
    ~/anaconda3/envs/test/lib/python3.7/site-packages/speechbrain/pretrained/interfaces.py in from_hparams(cls, source, hparams_file, pymodule_file, overrides, savedir, use_auth_token, **kwargs)
        349         # Load the modules:
        350         with open(hparams_local_path) as fin:
    --> 351             hparams = load_hyperpyyaml(fin, overrides)
        352 
        353         # Pretraining:
    
    ~/anaconda3/envs/test/lib/python3.7/site-packages/hyperpyyaml/core.py in load_hyperpyyaml(yaml_stream, overrides, overrides_must_match)
        187 
        188     # Remove items that start with "__"
    --> 189     removal_keys = [k for k in hparams.keys() if k.startswith("__")]
        190     for key in removal_keys:
        191         del hparams[key]
    
    AttributeError: 'str' object has no attribute 'keys'
    opened by masonhargrave 2
  • Make project installable

    Hi @cvqluu, this project is amazing, thanks for sharing.

    I have some experience in packaging projects in Python.

    What do you think about me taking on these items from your to-do list?

    • Add to PyPI (make pip installable)
    • requirements.txt

    If you authorize me, I will start doing this now and submit pull requests for your review and approval.

    opened by johnidm 1
  • Added ipython dependency

    Tested on local machine using:

    pip install --user git+https://github.com/cvqluu/[email protected]
    

    Fix for https://github.com/cvqluu/simple_diarizer/issues/12

    opened by cvqluu 0
  • Bump ipython from 7.30.1 to 7.31.1

    Bumps ipython from 7.30.1 to 7.31.1.

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.

    dependencies 
    opened by dependabot[bot] 0
  • Undeclared IPython dependency

    The current package (0.0.12 on PyPI) cannot run without IPython, but this is missing from requirements.txt.

    Steps to reproduce (outside of a Jupyter notebook):

    pip install simple-diarizer
    
    # index.py
    from simple_diarizer.diarizer import Diarizer
    

    Output:

    File "[redacted]\index.py", line 1, in <module>
        from simple_diarizer.diarizer import Diarizer
    File "[redacted]\lib\site-packages\simple_diarizer\diarizer.py", line 13, in <module>
        from .utils import check_wav_16khz_mono, convert_wavfile
    File "[redacted]\lib\site-packages\simple_diarizer\utils.py", line 8, in <module>
        from IPython.display import Audio, display
    ModuleNotFoundError: No module named 'IPython'
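
    Until the dependency is declared in the package metadata, manually installing IPython into the same environment works around the import error (the "Added ipython dependency" pull request above addresses this):

    pip install ipython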
    
    opened by DavidRalph 1
  • waveplot_perspeaker causes argument out of range error

    While running through your code example, testing the workflow on a different audio file produced the following output:

    C:\Users\xxx\Miniconda3\envs\simple_diarizer_env\lib\site-packages\IPython\lib\display.py:187: RuntimeWarning: invalid value encountered in divide
      scaled = data / normalization_factor * 32767
    ---------------------------------------------------------------------------
    error                                     Traceback (most recent call last)
    Cell In [18], line 1
    ----> 1 waveplot_perspeaker(signal, fs, segments)
    
    File ~\Miniconda3\envs\simple_diarizer_env\lib\site-packages\simple_diarizer\utils.py:166, in waveplot_perspeaker(signal, fs, segments)
        164 if "words" in seg:
        165     pprint(seg["words"])
    --> 166 display(Audio(speech, rate=fs))
        167 print("=" * 40 + "\n")
    
    File ~\Miniconda3\envs\simple_diarizer_env\lib\site-packages\IPython\lib\display.py:130, in Audio.__init__(self, data, filename, url, embed, rate, autoplay, normalize, element_id)
        128 if rate is None:
        129     raise ValueError("rate must be specified when data is a numpy array or list of audio samples.")
    --> 130 self.data = Audio._make_wav(data, rate, normalize)
    
    File ~\Miniconda3\envs\simple_diarizer_env\lib\site-packages\IPython\lib\display.py:162, in Audio._make_wav(data, rate, normalize)
        160 waveobj.setsampwidth(2)
        161 waveobj.setcomptype('NONE','NONE')
    --> 162 waveobj.writeframes(scaled)
        163 val = fp.getvalue()
        164 waveobj.close()
    
    File ~\Miniconda3\envs\simple_diarizer_env\lib\wave.py:437, in Wave_write.writeframes(self, data)
        436 def writeframes(self, data):
    --> 437     self.writeframesraw(data)
        438     if self._datalength != self._datawritten:
        439         self._patchheader()
    
    File ~\Miniconda3\envs\simple_diarizer_env\lib\wave.py:426, in Wave_write.writeframesraw(self, data)
        424 if not isinstance(data, (bytes, bytearray)):
        425     data = memoryview(data).cast('B')
    --> 426 self._ensure_header_written(len(data))
        427 nframes = len(data) // (self._sampwidth * self._nchannels)
        428 if self._convert:
    
    File ~\Miniconda3\envs\simple_diarizer_env\lib\wave.py:467, in Wave_write._ensure_header_written(self, datasize)
        465 if not self._framerate:
        466     raise Error('sampling rate not specified')
    --> 467 self._write_header(datasize)
    
    File ~\Miniconda3\envs\simple_diarizer_env\lib\wave.py:479, in Wave_write._write_header(self, initlength)
        477 except (AttributeError, OSError):
        478     self._form_length_pos = None
    --> 479 self._file.write(struct.pack('<L4s4sLHHLLHH4s',
        480     36 + self._datalength, b'WAVE', b'fmt ', 16,
        481     WAVE_FORMAT_PCM, self._nchannels, self._framerate,
        482     self._nchannels * self._framerate * self._sampwidth,
        483     self._nchannels * self._sampwidth,
        484     self._sampwidth * 8, b'data'))
        485 if self._form_length_pos is not None:
        486     self._data_length_pos = self._file.tell()
    
    error: argument out of range
    

    Any ideas what the issue could be? It works fine on other audio files, and everything up to this point seems to run without error.
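
    The RuntimeWarning ("invalid value encountered in divide") hints that one segment's audio slice is empty or silent, so IPython's Audio normalization divides by zero and the resulting NaNs break the WAV writer. A possible guard, assuming each segment carries start_sample/end_sample indices (the exact segment schema is an assumption):

    import numpy as np

    # Drop segments whose slice of the signal is empty or all zeros before
    # plotting, so Audio's max-amplitude normalization never divides by zero.
    ok_segments = []
    for seg in segments:
        speech = signal[seg["start_sample"]:seg["end_sample"]]
        if len(speech) > 0 and np.max(np.abs(speech)) > 0:
            ok_segments.append(seg)

    waveplot_perspeaker(signal, fs, ok_segments)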

    opened by dcruiz01 1
Releases: v0.0.13

Owner: Chau (PhD student at the University of Edinburgh, CSTR)
Hierarchical unsupervised and semi-supervised topic models for sparse count data with CorEx

Anchored CorEx: Hierarchical Topic Modeling with Minimal Domain Knowledge Correlation Explanation (CorEx) is a topic model that yields rich topics tha

Greg Ver Steeg 592 Dec 18, 2022
MMDA - multimodal document analysis

MMDA - multimodal document analysis

AI2 75 Jan 04, 2023
Code for "Parallel Instance Query Network for Named Entity Recognition", accepted at ACL 2022.

README Code for Two-stage Identifier: "Parallel Instance Query Network for Named Entity Recognition", accepted at ACL 2022. For details of the model a

Yongliang Shen 45 Nov 29, 2022
SASE : Self-Adaptive noise distribution network for Speech Enhancement with heterogeneous data of Cross-Silo Federated learning

SASE : Self-Adaptive noise distribution network for Speech Enhancement with heterogeneous data of Cross-Silo Federated learning We propose a SASE mode

Tower 1 Nov 20, 2021
A simple chatbot based on chatterbot that you can use for anything has basic features

Chatbotium A simple chatbot based on chatterbot that you can use for anything has basic features. I have some errors Read the paragraph below: Known b

Herman 1 Feb 16, 2022
Beyond the Imitation Game collaborative benchmark for enormous language models

BIG-bench 🪑 The Beyond the Imitation Game Benchmark (BIG-bench) will be a collaborative benchmark intended to probe large language models, and extrap

Google 1.3k Jan 01, 2023
✨Fast Coreference Resolution in spaCy with Neural Networks

✨ NeuralCoref 4.0: Coreference Resolution in spaCy with Neural Networks. NeuralCoref is a pipeline extension for spaCy 2.1+ which annotates and resolv

Hugging Face 2.6k Jan 04, 2023
Open source annotation tool for machine learning practitioners.

doccano doccano is an open source text annotation tool for humans. It provides annotation features for text classification, sequence labeling and sequ

7.1k Jan 01, 2023
TPLinker for NER: Chinese/English named entity recognition

This project draws on the HandshakingTagging idea from TPLinker, adapting TPLinker from its original relation extraction (RE) model into a named entity recognition (NER) model.

GodK 113 Dec 28, 2022
Negative sampling for solving the unlabeled entity problem in NER. ICLR-2021 paper: Empirical Analysis of Unlabeled Entity Problem in Named Entity Recognition.

Negative Sampling for NER Unlabeled entity problem is prevalent in many NER scenarios (e.g., weakly supervised NER). Our paper in ICLR-2021 proposes u

Yangming Li 128 Dec 29, 2022
🧪 Cutting-edge experimental spaCy components and features

spacy-experimental: Cutting-edge experimental spaCy components and features This package includes experimental components and features for spaCy v3.x,

Explosion 65 Dec 30, 2022
Py65 65816 - Add support for the 65C816 to py65

Add support for the 65C816 to py65 Py65 (https://github.com/mnaberez/py65) is a

4 Jan 04, 2023
simpleT5 is built on top of PyTorch-lightning⚡️ and Transformers🤗 that lets you quickly train your T5 models.

Quickly train T5 models in just 3 lines of code + ONNX support simpleT5 is built on top of PyTorch-lightning ⚡️ and Transformers 🤗 that lets you quic

Shivanand Roy 220 Dec 30, 2022
GrammarTagger — A Neural Multilingual Grammar Profiler for Language Learning

GrammarTagger — A Neural Multilingual Grammar Profiler for Language Learning GrammarTagger is an open-source toolkit for grammatical profiling for lan

Octanove Labs 27 Jan 05, 2023
[EMNLP 2021] LM-Critic: Language Models for Unsupervised Grammatical Error Correction

LM-Critic: Language Models for Unsupervised Grammatical Error Correction This repo provides the source code & data of our paper: LM-Critic: Language M

Michihiro Yasunaga 98 Nov 24, 2022
The aim of this task is to predict someone's English proficiency based on a text input.

English_proficiency_prediction_NLP The aim of this task is to predict someone's English proficiency based on a text input. Using the The NICT JLE Corp

1 Dec 13, 2021
PUA Programming Language written in Python.

pua-lang PUA Programming Language written in Python. Installation git clone https://github.com/zhaoyang97/pua-lang.git cd pua-lang pip install . Try

zy 4 Feb 19, 2022
To be a next-generation DL-based phenotype prediction from genome mutations.

[Pipeline diagram: Sequence → 3D_structure → 3D_module → ...]

Eric Alcaide 18 Jan 11, 2022
Paddle2.x version AI-Writer

Paddle2.x version of AI-Writer: generates web fiction with a modified GPT. Tuned GPT for novel generation.

yujun 74 Jan 04, 2023
Data loaders and abstractions for text and NLP

torchtext This repository consists of: torchtext.data: Generic data loaders, abstractions, and iterators for text (including vocabulary and word vecto

3.2k Dec 30, 2022