PyTorch Implementation of ByteDance's Cross-speaker Emotion Transfer Based on Speaker Condition Layer Normalization and Semi-Supervised Training in Text-To-Speech

Overview

Cross-Speaker-Emotion-Transfer - PyTorch Implementation

PyTorch Implementation of ByteDance's Cross-speaker Emotion Transfer Based on Speaker Condition Layer Normalization and Semi-Supervised Training in Text-To-Speech.

Quickstart

In the following sections, DATASET refers to the name of a dataset, such as RAVDESS.

Dependencies

You can install the Python dependencies with

pip3 install -r requirements.txt

Also, install fairseq (official documentation, GitHub) to use LConvBlock. Please check here to resolve any issues with installing it. Note that a Dockerfile is provided for Docker users, but you still have to install fairseq manually.
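As a quick sanity check after installation, you can confirm that fairseq's lightweight convolution, which LConvBlock builds on, is importable. A minimal sketch, assuming the fairseq 0.10.x module layout:

    # Sanity-check sketch: confirm fairseq's lightweight convolution is importable.
    # The module path below matches fairseq 0.10.x and may differ in other versions.
    from fairseq.modules import LightweightConv

    print(LightweightConv)  # an ImportError here means fairseq is not set up correctly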

Inference

You have to download the pretrained models and put them in output/ckpt/DATASET/.

To extract soft emotion tokens from a reference audio, run

python3 synthesize.py --text "YOUR_DESIRED_TEXT" --speaker_id SPEAKER_ID --ref_audio REF_AUDIO_PATH --restore_step RESTORE_STEP --mode single --dataset DATASET

Or, to use hard emotion tokens from an emotion ID, run

python3 synthesize.py --text "YOUR_DESIRED_TEXT" --speaker_id SPEAKER_ID --emotion_id EMOTION_ID --restore_step RESTORE_STEP --mode single --dataset DATASET
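For example, with the pretrained RAVDESS checkpoint (argument values taken from a user report in the comments below):

python3 synthesize.py --text "Hello world" --speaker_id Actor_22 --emotion_id sad --restore_step 450000 --mode single --dataset RAVDESS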

The dictionary of learned speakers can be found at preprocessed_data/DATASET/speakers.json, and the generated utterances will be put in output/result/.
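To see which SPEAKER_ID values are valid, you can list the keys of that dictionary. A minimal sketch, assuming speakers.json maps speaker names to integer IDs as in other FastSpeech2-style codebases:

    # Sketch: list valid SPEAKER_ID values for a dataset (RAVDESS here).
    import json

    with open("preprocessed_data/RAVDESS/speakers.json") as f:
        speakers = json.load(f)
    print(sorted(speakers))  # e.g., Actor_01 ... Actor_24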

Batch Inference

Batch inference is also supported; try

python3 synthesize.py --source preprocessed_data/DATASET/val.txt --restore_step RESTORE_STEP --mode batch --dataset DATASET

to synthesize all utterances in preprocessed_data/DATASET/val.txt. Please note that only hard emotion tokens from a given emotion ID are supported in this mode.

Training

Datasets

The supported datasets are

  • RAVDESS: The speech portion of RAVDESS contains 1440 files: 60 trials per actor × 24 actors = 1440. RAVDESS features 24 professional actors (12 female, 12 male) vocalizing two lexically matched statements in a neutral North American accent. Speech emotions include calm, happy, sad, angry, fearful, surprise, and disgust expressions. Each expression is produced at two levels of emotional intensity (normal, strong), with an additional neutral expression.
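These attributes are encoded in RAVDESS's 7-field file names (e.g., 03-01-06-01-02-01-12.wav). A small sketch for decoding the emotion label, independent of this repository's preprocessing code:

    # Sketch: decode the emotion from a RAVDESS file name.
    # Field order: modality-vocalchannel-emotion-intensity-statement-repetition-actor.
    EMOTIONS = {
        "01": "neutral", "02": "calm", "03": "happy", "04": "sad",
        "05": "angry", "06": "fearful", "07": "disgust", "08": "surprised",
    }

    def ravdess_emotion(filename):
        fields = filename.split(".")[0].split("-")
        return EMOTIONS[fields[2]]

    print(ravdess_emotion("03-01-06-01-02-01-12.wav"))  # fearful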

You can adapt the codebase to your own language and dataset by following the instructions here.

Preprocessing

  • For a multi-speaker TTS with an external speaker embedder, download the ResCNN Softmax+Triplet pretrained model from philipperemy's DeepSpeaker for the speaker embedding and place it in ./deepspeaker/pretrained_models/.

  • Run

    python3 prepare_align.py --dataset DATASET
    

    for some preparation steps.

    For forced alignment, Montreal Forced Aligner (MFA) is used to obtain the alignments between the utterances and the phoneme sequences. Pre-extracted alignments for the datasets are provided here. You have to unzip the files into preprocessed_data/DATASET/TextGrid/. Alternatively, you can run the aligner yourself.

    After that, run the preprocessing script by

    python3 preprocess.py --dataset DATASET
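
After preprocessing, you can sanity-check that the expected layout is in place before training. A minimal sketch using only paths mentioned in this README:

    # Sketch: verify the preprocessed layout for a dataset (paths from this README).
    import os

    dataset = "RAVDESS"
    root = os.path.join("preprocessed_data", dataset)
    for rel in ["TextGrid", "speakers.json", "val.txt"]:
        path = os.path.join(root, rel)
        print(path, "found" if os.path.exists(path) else "MISSING")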
    

Training

Train your model with

python3 train.py --dataset DATASET

Useful options:

  • To use Automatic Mixed Precision, append the --use_amp argument to the above command.
  • The trainer assumes single-node multi-GPU training. To use specific GPUs, prepend CUDA_VISIBLE_DEVICES=<GPU_IDs> to the above command, as in the example below.
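For example, to train on two specific GPUs with AMP enabled:

CUDA_VISIBLE_DEVICES=0,1 python3 train.py --dataset DATASET --use_amp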

TensorBoard

Use

tensorboard --logdir output/log

to serve TensorBoard on your localhost. The loss curves, synthesized mel-spectrograms, and audio samples are shown.

Notes

  • The current implementation is not trained in a semi-supervised way due to the small dataset size. However, semi-supervised training can easily be enabled by specifying target speakers and passing no emotion ID for them, so that no emotion classifier loss is applied.
  • In the decoder, a 15 × 1 LConv block is used instead of 17 × 1 due to memory issues.
  • There are two options for speaker embedding in the multi-speaker TTS setting: training a speaker embedder from scratch or using philipperemy's pre-trained DeepSpeaker model (as STYLER did). You can toggle between them in the config ('none' vs. 'DeepSpeaker'); see the sketch after this list.
  • DeepSpeaker on the RAVDESS dataset shows clear identification among speakers. The following figure shows the t-SNE plot of the extracted speaker embeddings.

  • For the vocoder, HiFi-GAN and MelGAN are supported.
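A minimal sketch of the speaker-embedder toggle mentioned above, assuming a FastSpeech2-style YAML config; the actual file and key names in this repository may differ:

    # Hypothetical config snippet: choose the speaker embedding source.
    preprocessing:
      speaker_embedder: "DeepSpeaker"  # or "none" to train a speaker embedder from scratch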

Citation

Please cite this repository via the "Cite this repository" link in the About section (top right of the main page).

Comments
  • loading state dict: size mismatch

    I have a problem when I use your pre-trained model for synthesis; the following error happens:

    RuntimeError: Error(s) in loading state_dict for XSpkEmoTrans:
    	size mismatch for duratin_predictor.lconv_stack.0.conv_layer.weight: copying a param with shape torch.Size([2, 3]) from checkpoint, the shape in current model is torch.Size([2, 1, 3]).
    	size mismatch for decoder.lconv_stack.0.conv_layer.weight: copying a param with shape torch.Size([8, 15]) from checkpoint, the shape in current model is torch.Size([8, 1, 15]).
    	size mismatch for decoder.lconv_stack.1.conv_layer.weight: copying a param with shape torch.Size([8, 15]) from checkpoint, the shape in current model is torch.Size([8, 1, 15]).
    	size mismatch for decoder.lconv_stack.2.conv_layer.weight: copying a param with shape torch.Size([8, 15]) from checkpoint, the shape in current model is torch.Size([8, 1, 15]).
    	size mismatch for decoder.lconv_stack.3.conv_layer.weight: copying a param with shape torch.Size([8, 15]) from checkpoint, the shape in current model is torch.Size([8, 1, 15]).
    	size mismatch for decoder.lconv_stack.4.conv_layer.weight: copying a param with shape torch.Size([8, 15]) from checkpoint, the shape in current model is torch.Size([8, 1, 15]).
    	size mismatch for decoder.lconv_stack.5.conv_layer.weight: copying a param with shape torch.Size([8, 15]) from checkpoint, the shape in current model is torch.Size([8, 1, 15]).

    opened by cythc 2
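
    A possible workaround for the mismatch above (an editor's sketch, not a maintainer-confirmed fix): the pattern suggests that the installed fairseq disagrees with the training-time version on whether LConv weights carry a singleton middle dimension, so reshaping the checkpoint tensors before loading may reconcile them. The checkpoint path and the "model" key below are assumptions based on this repository's conventions:

    # Sketch: insert the singleton dimension the current model expects into
    # the 2-D LConv weights stored in the checkpoint.
    import torch

    ckpt_path = "output/ckpt/RAVDESS/450000.pth.tar"   # assumed checkpoint location
    ckpt = torch.load(ckpt_path, map_location="cpu")
    state = ckpt["model"]                              # assumed state-dict key
    for name, weight in list(state.items()):
        if name.endswith("conv_layer.weight") and weight.dim() == 2:
            state[name] = weight.unsqueeze(1)          # [H, K] -> [H, 1, K]
    torch.save(ckpt, "output/ckpt/RAVDESS/450000_fixed.pth.tar")

    In the opposite case (3-D weights in the checkpoint but a 2-D shape expected by the model, as in a later report below), squeeze the same dimension instead.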
  • Closed Issue

    Hi, I synthesized some samples with the provided pretrained models and the speaker embedding from philipperemy's DeepSpeaker repo. However, the sampled results were bad in that all of the words were garbled and I could not hear any words.

    I am not sure if I am doing anything wrong, since I just cloned your repository, downloaded the RAVDESS data, and did everything listed in the README.md. Based on how I was able to generate samples, I do not think I am doing anything wrong, but was anyone able to synthesize good speech? And to the author of this repo, @keonlee9420, do you mind uploading some samples generated from the pretrained models in the README.md?

    Thanks in advance.

    opened by jinny1208 0
  • The generated wav is not good

    Hi, thank you for open-sourcing this wonderful work! I followed your instructions: 1) installed lightconv_cuda, 2) downloaded the checkpoint, 3) downloaded the speaker embedding npy. However, the generated result is not good.

    Below is the command I ran:

    python3 synthesize.py \
      --text "Hello world" \
      --speaker_id Actor_22 \
      --emotion_id sad \
      --restore_step 450000 \
      --mode single \
      --dataset RAVDESS
    
    # sh run.sh 
    2022-11-30 13:45:22.626404: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudart.so.11.0
    Device of XSpkEmoTrans: cuda
    Removing weight norm...
    Raw Text Sequence: Hello world
    Phoneme Sequence: {HH AH0 L OW1 W ER1 L D}
    

    ENV

    python 3.6.8
    fairseq                 0.10.2
    torch                   1.7.0+cu110
    CUDA 11.0
    

    Hello world_Actor_22_sad

    Hello world_Actor_22_sad.wav.zip

    opened by pangtouyuqqq 1
  • Synthesis with a speaker outside of RAVDESS

    Hello author, firstly, thank you for this repo; it is really nice. I have a question:

    1. I downloaded CMU data for a single speaker with 100 audio files, generated the speaker embedding vector, and synthesized with it, but the performance is not good. I cannot make out any words.
    2. Do we need to fine-tune the deep-speaker model to generate speaker embeddings for my data?

    Thank you

    opened by hathubkhn 5
  • Error using the pretrained model

    I'm trying to run synthesis with the pretrained model, like so:

    python3 synthesize.py --text "This sentence is a test" --speaker_id Actor_01 --emotion_id neutral --restore_step 450000  --dataset RAVDESS --mode single
    

    but I get an error about layer sizes:

    Traceback (most recent call last):
      File "synthesize.py", line 206, in <module>
        model = get_model(args, configs, device, train=False,
      File "/home/jrings/diviai/installs/Cross-Speaker-Emotion-Transfer/utils/model.py", line 27, in get_model
        model.load_state_dict(model_dict, strict=False)
      File "<...>/torch/nn/modules/module.py", line 1604, in load_state_dict
        raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
    RuntimeError: Error(s) in loading state_dict for XSpkEmoTrans:
    	size mismatch for emotion_emb.etl.embed: copying a param with shape torch.Size([8, 64]) from checkpoint, the shape in current model is torch.Size([9, 64]).
    	size mismatch for duratin_predictor.lconv_stack.0.conv_layer.weight: copying a param with shape torch.Size([2, 1, 3]) from checkpoint, the shape in current model is torch.Size([2, 3]).
    	size mismatch for decoder.lconv_stack.0.conv_layer.weight: copying a param with shape torch.Size([8, 1, 15]) from checkpoint, the shape in current model is torch.Size([8, 15]).
    	size mismatch for decoder.lconv_stack.1.conv_layer.weight: copying a param with shape torch.Size([8, 1, 15]) from checkpoint, the shape in current model is torch.Size([8, 15]).
    	size mismatch for decoder.lconv_stack.2.conv_layer.weight: copying a param with shape torch.Size([8, 1, 15]) from checkpoint, the shape in current model is torch.Size([8, 15]).
    	size mismatch for decoder.lconv_stack.3.conv_layer.weight: copying a param with shape torch.Size([8, 1, 15]) from checkpoint, the shape in current model is torch.Size([8, 15]).
    	size mismatch for decoder.lconv_stack.4.conv_layer.weight: copying a param with shape torch.Size([8, 1, 15]) from checkpoint, the shape in current model is torch.Size([8, 15]).
    	size mismatch for decoder.lconv_stack.5.conv_layer.weight: copying a param with shape torch.Size([8, 1, 15]) from checkpoint, the shape in current model is torch.Size([8, 15]).
    
    opened by jrings 1
  • speaker embedding npy file not found

    Hi,

    I am facing the following issue while synthesizing using the pretrained model.

    Removing weight norm...
    Traceback (most recent call last):
      File "synthesize.py", line 234, in <module>
        )) if load_spker_embed else None
      File "/home/sagar/tts/Cross-Speaker-Emotion-Transfer/venv/lib/python3.7/site-packages/numpy/lib/npyio.py", line 417, in load
        fid = stack.enter_context(open(os_fspath(file), "rb"))
    FileNotFoundError: [Errno 2] No such file or directory: './preprocessed_data/RAVDESS/spker_embed/Actor_19-spker_embed.npy'

    Please suggest a way out. Thanks in advance. -Sagar

    opened by raikarsagar 4
Releases: v0.2.0

Owner: Keon Lee
Expressive Speech Synthesis | Conversational AI | Open-domain Dialog | NLP | Generative Models | Empathic Computing | HCI