Larynx

End-to-end text to speech system using gruut and onnx. There are 40 voices available across 8 languages.

$ docker run -it -p 5002:5002 rhasspy/larynx:en-us

Larynx screenshot

Larynx's goals are:

  • "Good enough" synthesis to avoid using a cloud service
  • Faster than realtime performance on a Raspberry Pi 4 (with low quality vocoder)
  • Broad language support (8 languages)
  • Voices trained purely from public datasets

Samples

Listen to voice samples from all of the pre-trained models.


Docker Installation

Pre-built Docker images for each language are available for the following platforms:

  • linux/amd64 - desktop/laptop/server
  • linux/arm64 - Raspberry Pi 64-bit
  • linux/arm/v7 - Raspberry Pi 32-bit

Run the Larynx web server with:

$ docker run -it -p 5002:5002 rhasspy/larynx:<LANG>

where <LANG> is one of:

  • de-de - German
  • en-us - U.S. English
  • es-es - Spanish
  • fr-fr - French
  • it-it - Italian
  • nl - Dutch
  • ru-ru - Russian
  • sv-se - Swedish

Visit http://localhost:5002 for the test page. See http://localhost:5002/openapi/ for HTTP endpoint documentation.

A larger Docker image with all languages is also available as rhasspy/larynx.

Debian Installation

Pre-built Debian packages are available for download.

There are three different kinds of packages, so you can install exactly what you want and no more:

  • larynx-tts_<VERSION>_<ARCH>.deb
    • Base Larynx code and dependencies (always required)
    • ARCH is one of amd64 (most desktops, laptops), armhf (32-bit Raspberry Pi), arm64 (64-bit Raspberry Pi)
  • larynx-tts-lang-<LANGUAGE>_<VERSION>_all.deb
    • Language-specific data files (at least one required)
    • See above for a list of languages
  • larynx-tts-voice-<VOICE>_<VERSION>_all.deb
    • Voice-specific model files (at least one required)
    • See samples to decide which voice(s) to choose

As an example, let's say you want to use the "harvard-glow_tts" voice for English on an amd64 laptop for Larynx version 0.4.0. You would need to download these files:

  1. larynx-tts_0.4.0_amd64.deb
  2. larynx-tts-lang-en-us_0.4.0_all.deb
  3. larynx-tts-voice-en-us-harvard-glow-tts_0.4.0_all.deb

Once downloaded, you can install the packages all at once with:

sudo apt install \
  ./larynx-tts_0.4.0_amd64.deb \
  ./larynx-tts-lang-en-us_0.4.0_all.deb \
  ./larynx-tts-voice-en-us-harvard-glow-tts_0.4.0_all.deb

From there, you may run the larynx command or larynx-server to start the web server.

Python Installation

$ pip install larynx

For Raspberry Pi (ARM), you will first need to manually install phonetisaurus.

For 32-bit ARM systems, a pre-built onnxruntime wheel is available (official 64-bit wheels are available in PyPI).

Language Download

Larynx uses gruut to transform text into phonemes. You must install the appropriate gruut language before using Larynx. U.S. English is included with gruut, but for other languages:

$ python3 -m gruut <LANGUAGE> download

Voice/Vocoder Download

Voices and vocoders are available to download from the release page. They can be extracted anywhere; just reference the directory on the command line (e.g., --voices-dir /path/to/voices).


Web Server

You can run a local web server with:

$ python3 -m larynx.server --voices-dir /path/to/voices

Visit http://localhost:5002 to view the site and try out voices. See http://localhost:5002/openapi/ for documentation on the available HTTP endpoints.

The following options set default values, used when the corresponding parameters are not provided in an API call:

  • --quality - vocoder quality (high/medium/low, default: high)
  • --noise-scale - voice volatility (0-1, default: 0.333)
  • --length-scale - voice speed (<1 is faster, default: 1.0)

You may also set --voices-dir to change where your voices/vocoders are stored. The directory structure should be <voices-dir>/<language>/<voice>.
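The <language>/<voice> layout described above can be sketched with a small helper. This is purely illustrative (Larynx itself handles voice discovery); the function name `list_voices` is hypothetical, not part of the Larynx API:

```python
from pathlib import Path


def list_voices(voices_dir):
    """Return (language, voice) pairs found under a <language>/<voice> layout.

    Illustrative only: mirrors the documented directory structure, e.g.
    /path/to/voices/en-us/harvard-glow_tts -> ("en-us", "harvard-glow_tts").
    """
    voices = []
    for lang_dir in sorted(Path(voices_dir).iterdir()):
        if not lang_dir.is_dir():
            continue  # skip stray files at the top level
        for voice_dir in sorted(lang_dir.iterdir()):
            if voice_dir.is_dir():
                voices.append((lang_dir.name, voice_dir.name))
    return voices
```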

See --help for more options.

MaryTTS Compatible API

To use Larynx as a drop-in replacement for a MaryTTS server (e.g., for use with Home Assistant), run:

$ docker run -it -p 59125:5002 rhasspy/larynx:<LANG>

The /process HTTP endpoint should now work for voices formatted as <language>/<voice>, such as en-us/harvard-glow_tts.

You can specify the vocoder by adding ;<vocoder> to the MaryTTS voice.

For example: en-us/harvard-glow_tts;hifi_gan:vctk_small will use the lowest quality (but fastest) vocoder. This is usually necessary to get decent performance on a Raspberry Pi.

Available vocoders are:

  • hifi_gan:universal_large (best quality, slowest, default)
  • hifi_gan:vctk_medium (medium quality)
  • hifi_gan:vctk_small (lowest quality, fastest)
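The <language>/<voice>;<vocoder> convention above can be split apart with a few lines of Python. This is a minimal sketch of the documented string format, not Larynx's internal parser, and the helper name `parse_marytts_voice` is an assumption:

```python
def parse_marytts_voice(voice_string):
    """Split a MaryTTS-style voice string into (language, voice, vocoder).

    Documented format: <language>/<voice>[;<vocoder>], e.g.
    "en-us/harvard-glow_tts;hifi_gan:vctk_small".
    When the vocoder is omitted, the documented default is
    hifi_gan:universal_large.
    """
    voice_part, _, vocoder = voice_string.partition(";")
    language, _, voice = voice_part.partition("/")
    return language, voice, vocoder or "hifi_gan:universal_large"
```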

Command-Line Example

The command below synthesizes multiple sentences and saves them to a directory. The --csv command-line flag indicates that each sentence is of the form id|text where id will be the name of the WAV file.

$ cat << EOF |
s01|The birch canoe slid on the smooth planks.
s02|Glue the sheet to the dark blue background.
s03|It's easy to tell the depth of a well.
s04|These days a chicken leg is a rare dish.
s05|Rice is often served in round bowls.
s06|The juice of lemons makes fine punch.
s07|The box was thrown beside the parked truck.
s08|The hogs were fed chopped corn and garbage.
s09|Four hours of steady work faced us.
s10|Large size in stockings is hard to sell.
EOF
  larynx \
    --debug \
    --csv \
    --voice harvard-glow_tts \
    --quality high \
    --output-dir wavs \
    --denoiser-strength 0.001

You can use the --interactive flag instead of --output-dir to type sentences and have the audio played immediately using the play command from sox.
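The id|text input format used by --csv can be sketched in a few lines. This is an illustration of the documented format only (the helper name `parse_csv_lines` is hypothetical; Larynx does its own parsing):

```python
def parse_csv_lines(lines):
    """Parse Larynx --csv input: one "id|text" pair per line.

    Returns (id, text) tuples; the id becomes the WAV file name
    (e.g. "s01" -> s01.wav in the --output-dir).
    """
    pairs = []
    for line in lines:
        line = line.strip()
        if not line:
            continue  # ignore blank lines
        utt_id, _, text = line.partition("|")
        pairs.append((utt_id, text))
    return pairs
```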

GlowTTS Settings

The GlowTTS voices support two additional parameters:

  • --noise-scale - determines the speaker volatility during synthesis (0-1, default is 0.333)
  • --length-scale - makes the voice speak slower (> 1) or faster (< 1)

Vocoder Settings

  • --denoiser-strength - runs the denoiser if > 0; a small value like 0.005 is recommended.

List Voices and Vocoders

$ larynx --list

Text to Speech Models

Vocoders

  • Hi-Fi GAN
    • Universal large
    • VCTK "medium"
    • VCTK "small"
  • WaveGlow
    • 256 channel trained on LJ Speech
Comments
  • Dot (.) stops synthesis

    I am new to Larynx, so maybe my question can be answered easily and quickly, but I couldn't find anything to fix it.

    Whenever a dot character is encountered, synthesis ends. I don't even need multiple sentences; if it encounters something like X (feat. Y) it just says X feat. I am using Larynx over opentts in Home Assistant, but this can easily be replicated in the GUI as well. So how exactly can I fix this? And maybe for later, how exactly can I synthesize multiple sentences? Thank you very much in advance, the voices are superb!

    bug 
    opened by chainria 7
  • MaryTTS API interface is not 100% compatible

    Hi Michael,

    congratulations for your Larynx v1.0 release :partying_face: . Great work, as usual :slightly_smiling_face:.

    I've been trying to use Larynx with the new SEPIA v0.24.0 client since it has an option now to use MaryTTS compatible TTS systems directly, but encountered some issues:

    • The /voices endpoint is not delivering information in the same format. The MaryTTS API response is: [voice] [language] [gender] [tech=hmm] but Larynx is giving [language]/[voice]. Since I'm automatically parsing the string, it currently fails to get the right language.
    • The /voices endpoint will show all voices including the ones that haven't been downloaded yet.
    • The Larynx quality parameter is not accessible.

    The last point is not really a MaryTTS compatibility issue, but it would be great to get each voice as 'low, medium, high' variation from the 'voices' endpoint, so the user could actually choose them from the list.

    I believe the Larynx MaryTTS endpoints are mostly for Home-Assistant support and I'm not sure how HA is parsing the voices list (maybe it doesn't parse it at all or just uses the whole string), but it would be great to get the original format from the /voices endpoint. Would you be willing to make these changes? :innocent: :grin:

    opened by fquirin 6
  • Required versions for python and pip

    I have a working setup on a recent linux box (with python 3.8). But now I have to use an older computer (python 3.5, pip 8.1.1) and I run into trouble:

     Using cached https://files.pythonhosted.org/packages/f8/4d/a2.../larynx-0.3.1.tar.gz
     Complete output from command python setup.py egg_info:
     Traceback (most recent call last):
      File "<string>", line 1, in <module>
      File "/tmp/pip-build-sbulg1mn/larynx/setup.py", line 13
        long_description: str = ""
                        ^
     SyntaxError: invalid syntax
    

    What are the minimum versions required by larynx at the moment?

    opened by svenha 6
  • SSL error when downloading new tts

    Steps to reproduce:

    1. Run larynx-server on NixOS with Docker
    2. Attempt to download a tts

    Full error output:

    During handling of the above exception, another exception occurred:
    
    Traceback (most recent call last):
      File "/app/.venv/lib/python3.7/site-packages/quart/app.py", line 1827, in full_dispatch_request
        result = await self.dispatch_request(request_context)
      File "/app/.venv/lib/python3.7/site-packages/quart/app.py", line 1875, in dispatch_request
        return await handler(**request_.view_args)
      File "/app/larynx/server.py", line 667, in api_download
        tts_model_dir = download_voice(voice_name, voices_dirs[0], url)
      File "/app/larynx/utils.py", line 78, in download_voice
        response = urllib.request.urlopen(link)
      File "/usr/lib/python3.7/urllib/request.py", line 222, in urlopen
        return opener.open(url, data, timeout)
      File "/usr/lib/python3.7/urllib/request.py", line 531, in open
        response = meth(req, response)
      File "/usr/lib/python3.7/urllib/request.py", line 641, in http_response
        'http', request, response, code, msg, hdrs)
      File "/usr/lib/python3.7/urllib/request.py", line 563, in error
        result = self._call_chain(*args)
      File "/usr/lib/python3.7/urllib/request.py", line 503, in _call_chain
        result = func(*args)
      File "/usr/lib/python3.7/urllib/request.py", line 755, in http_error_302
        return self.parent.open(new, timeout=req.timeout)
      File "/usr/lib/python3.7/urllib/request.py", line 525, in open
        response = self._open(req, data)
      File "/usr/lib/python3.7/urllib/request.py", line 543, in _open
        '_open', req)
      File "/usr/lib/python3.7/urllib/request.py", line 503, in _call_chain
        result = func(*args)
      File "/usr/lib/python3.7/urllib/request.py", line 1367, in https_open
        context=self._context, check_hostname=self._check_hostname)
      File "/usr/lib/python3.7/urllib/request.py", line 1326, in do_open
        raise URLError(err)
    urllib.error.URLError: <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1056)>
    
    bug 
    opened by 18fadly-anthony 5
  • German voice

    Did you think about using pavoque-data for the German voice? https://github.com/marytts/pavoque-data/releases/tag/v0.2 It has better quality compared with the thorsten audio dataset. It has a restriction on commercial use, but as I see you already have a voice with that restriction.

    enhancement 
    opened by alt131 5
  • How to change port number when running from docker

    I'm using the docker command mentioned in https://github.com/rhasspy/larynx#docker-installation

    However, when I change the port number in the docker run command, it doesn't work the way I want it to. May I know the right way to host larynx on a different port, if there is one?

    opened by thevickypedia 4
  • MaryTTS emulation and Home Assistant

    I'm having trouble setting up the MaryTTS component in Home Assistant to work with Larynx. In particular, there are several parameters that can be defined in yaml. The docs give this example:

    tts:
      - platform: marytts
        host: "localhost"
        port: 59125
        codec: "WAVE_FILE"
        voice: "cmu-slt-hsmm"
        language: "en_US"
        effect:
          Volume: "amount:2.0;"
    

    Larynx is up and running and I can generate speech via localhost:59125. I'd like to use a specific voice and quality setting with Home Assistant's TTS. I tried setting the following:

    ...
        voice: "harvard-glow_tts"
        language: "en_us"
    ...
    

    But Home Assistant's log shows an error saying that "en_us" is not a valid language ("en_US" is, though).

    What are the correct parameters necessary to use a specific voice? And would it be possible to use an effect key to set the voice quality (high, medium, low)?

    enhancement 
    opened by hawkeye217 4
  • New languages need a link

    Thanks @synesthesiam for this excellent tool.

    I followed the 'Python installation' method (on Ubuntu 20.10) and added the language de-de via python3 -m gruut de-de download. Before I could use the new language, I had to add a link in ~/.local/lib/python3.8/site-packages/gruut/data/ to ~/.config/gruut/de-de; otherwise the new language was not found.

    bug 
    opened by svenha 3
  • How to send text to larynx SERVER using BASH script?

    Impressive program!

    Using a bash script, how do I send text to the larynx SERVER?

    When using the larynx CLI, there is a 5-15 second startup delay (which I'm trying to eliminate). So I'd like to start a server, then send the text to the server to avoid this startup delay.

    Unfortunately, I've failed to find documentation or an example on how to connect to the server using a bash script. I've experimented with the "--daemon" option, and studied the larynx-server example, but failed to find a solution. (Please forgive my inexperience with TTS.)

    I'd appreciate an example (or pointer to documentation) showing how to send text to the larynx server using a bash script. Thank you.

    opened by gvimlag 2
  • Version/tag mismatch when downloading voices for 1.0.0 release

    Looks like the Github version tag is 1.0 but the code is looking for 1.0.0. The assets exist on Github with 1.0 in the path, but I'm getting this error when trying to download voices from the web interface:

    larynx.utils.VoiceDownloadError: Failed to download voice en-us_kathleen-glow_tts from http://github.com/rhasspy/larynx/releases/download/v1.0.0/en-us_kathleen-glow_tts.tar.gz: HTTP Error 404: Not Found
    
    opened by hawkeye217 2
  • OpenAPI page broken

    I am getting HTTP 500 returned when I go to http://localhost:5002/openapi/ - The browser page says "Fetch error undefined /openapi/swagger.json"

    On the command line, I tried find /usr/local/python3/ -name '*swagger*' and only got results for the swagger_ui package in site-packages.

    bug 
    opened by polski-g 2
  • Improve performance with caching

    I hope to gain some understanding about how feasible and useful it is to cache certain (intermediate) outputs. If large parts of phrases are re-used often, couldn't they be cached (perhaps on multiple levels) to improve response time? And if so, the cache could be pre-populated with expected outputs by speaking them all once. For example a program that reads the time of day could have a cache for 'The time is' as well as numbers up to 59. The expected reduction in response time would depend on which parts of the process actually take the most time, which I'm not sure about.

    opened by JeroenvdV 0
  • Dates like "1700s" and "1980s" are replaced with the current date

    I was really confused for a moment the first time I encountered this. :-)

    To reproduce:

    1. Run the Larynx server via the latest Docker container.
    2. Open the web interface. Paste a phrase like "It was popular in the distant past of the 1700s." and press "Speak".

    https://user-images.githubusercontent.com/3179832/199944142-5600025a-cc4a-4821-9be1-9bed10e5ecf9.mp4

    opened by dbohdan 0
  • voices-dir option of larynx.server doesn't work

    I'm using larynx==1.1.0

    Even when I specify --voices-dir with a directory that contains correct voice models, it seems not to be used.

    python3 -m larynx.server --voices-dir /var/lib/larynx-model/voices
    
    DEBUG:larynx.utils:Downloading voice/vocoder for en-us_cmu_eey-glow_tts to /root/.local/share/larynx/voices from http://github.com/rhasspy/larynx/releases/download/v1.0/en-us_cmu_eey-glow_tts.tar.gz
    en-us_cmu_eey-glow_tts:  18%|████████████████████▍           
    
    opened by syundo0730 0
  • ImportError: cannot import name 'escape' from 'jinja2'

    The Python installation method results in this error from larynx-server on Ubuntu 22.04.1:

    (larynx_venv) [email protected]:~/Downloads/larynx-master$ larynx-server 
    Traceback (most recent call last):
      File "/home/user/Downloads/larynx-master/larynx_venv/bin/larynx-server", line 5, in <module>
        from larynx.server.__main__ import main
      File "/home/user/Downloads/larynx-master/larynx_venv/lib/python3.10/site-packages/larynx/server.py", line 24, in <module>
        import quart_cors
      File "/home/user/Downloads/larynx-master/larynx_venv/lib/python3.10/site-packages/quart_cors/__init__.py", line 5, in <module>
        from quart import abort, Blueprint, current_app, make_response, Quart, request, Response, websocket
      File "/home/user/Downloads/larynx-master/larynx_venv/lib/python3.10/site-packages/quart/__init__.py", line 3, in <module>
        from jinja2 import escape, Markup
    ImportError: cannot import name 'escape' from 'jinja2' (/home/user/Downloads/larynx-master/larynx_venv/lib/python3.10/site-packages/jinja2/__init__.py)
    
    
    opened by faveoled 1
  • Browser request for favicon.ico returns HTTP 500 error and error on console

    It would be nice to trap the favicon.ico HTTP request and return a simple icon so that the browser does not get a 500 error and the client does not log the error message.

    ERROR:larynx:404 Not Found: The requested URL was not found on the server. If you entered the URL manually please check your spelling and try again.
    Traceback (most recent call last):
      File "/home/larynx/app/.venv/lib/python3.9/site-packages/quart/app.py", line 1490, in full_dispatch_request
        result = await self.dispatch_request(request_context)
      File "/home/larynx/app/.venv/lib/python3.9/site-packages/quart/app.py", line 1530, in dispatch_request
        self.raise_routing_exception(request_)
      File "/home/larynx/app/.venv/lib/python3.9/site-packages/quart/app.py", line 1038, in raise_routing_exception
        raise request.routing_exception
      File "/home/larynx/app/.venv/lib/python3.9/site-packages/quart/ctx.py", line 64, in match_request
        ) = self.url_adapter.match(  # type: ignore
      File "/home/larynx/app/.venv/lib/python3.9/site-packages/werkzeug/routing.py", line 2041, in match
        raise NotFound()
    werkzeug.exceptions.NotFound: 404 Not Found: The requested URL was not found on the server. If you entered the URL manually please check your spelling and try again.
    
    opened by mrdvt92 0
Releases: v1.1

Owner: Rhasspy (offline voice assistant)