VQMIVC - Vector Quantization and Mutual Information-Based Unsupervised Speech Representation Disentanglement for One-shot Voice Conversion

Overview

VQMIVC: Vector Quantization and Mutual Information-Based Unsupervised Speech Representation Disentanglement for One-shot Voice Conversion (Interspeech 2021)


Run VQMIVC on Replicate

Integrated into Hugging Face Spaces with Gradio. See the Gradio Web Demo.

Pre-trained models: google-drive or here | Paper demo

This paper proposes a speech representation disentanglement framework for one-shot/any-to-any voice conversion, which performs conversion across arbitrary speakers with only a single target-speaker utterance as reference. Vector quantization with contrastive predictive coding (VQCPC) is used for content encoding, and mutual information (MI) is introduced as the correlation metric during training to achieve proper disentanglement of content, speaker and pitch representations by reducing their inter-dependencies in an unsupervised manner.
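
The MI term follows the CLUB estimator credited in the acknowledgements below; the following is a minimal, illustrative sketch of such an estimator (tensor shapes and network sizes are assumptions, not this repo's exact implementation):

    import torch
    import torch.nn as nn

    class CLUBSketch(nn.Module):
        """Illustrative CLUB-style MI upper bound between two batches of representation vectors."""

        def __init__(self, x_dim, y_dim, hidden=256):
            super().__init__()
            # Variational approximation q(y|x) as a diagonal Gaussian predicted from x.
            self.mu = nn.Sequential(nn.Linear(x_dim, hidden), nn.ReLU(), nn.Linear(hidden, y_dim))
            self.logvar = nn.Sequential(nn.Linear(x_dim, hidden), nn.ReLU(), nn.Linear(hidden, y_dim))

        def mi_upper_bound(self, x, y):  # x: [B, x_dim], y: [B, y_dim]
            mu, logvar = self.mu(x), self.logvar(x)
            positive = (-((y - mu) ** 2) / logvar.exp()).sum(-1)  # log q(y|x) on matched pairs
            # log q(y'|x) averaged over mismatched pairs (approximates the marginal term)
            negative = (-((y.unsqueeze(0) - mu.unsqueeze(1)) ** 2) / logvar.exp().unsqueeze(1)).sum(-1).mean(1)
            return (positive - negative).mean()  # upper bound on I(x; y), minimised w.r.t. the encoders

        def learning_loss(self, x, y):
            # The estimator itself is fit by maximising log q(y|x) on matched pairs.
            mu, logvar = self.mu(x), self.logvar(x)
            return (((y - mu) ** 2) / logvar.exp() + logvar).sum(-1).mean()

During training one would alternate between fitting q(y|x) via learning_loss and minimising the returned bound with respect to the content, speaker and pitch encoders.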

📢 Update

Many thanks to ericguizzo & AK391!

  1. A Replicate demo is provided online, so you can try our pre-trained models there. Have fun!
  2. VQMIVC can be trained and tested inside a Docker environment via Cog now.
  3. A Gradio Web Demo is available as another online option!

TODO

  • Add more details on how to use Cog for development

Requirements

Python 3.6 is used. Install apex to speed up training (optional); other requirements are listed in 'requirements.txt':

pip install -r requirements.txt

Quick start with pre-trained models

ParallelWaveGAN is used as the vocoder, so please install ParallelWaveGAN first to try the pre-trained models.
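
Assuming the standard PyPI package from the ParallelWaveGAN repository, the install is typically:

pip install parallel_wavegan

With the vocoder installed, run the conversion script: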

python convert_example.py -s {source-wav} -r {reference-wav} -c {converted-wavs-save-path} -m {model-path} 

For example:

python convert_example.py -s test_wavs/p225_038.wav -r test_wavs/p334_047.wav -c converted -m checkpoints/useCSMITrue_useCPMITrue_usePSMITrue_useAmpTrue/VQMIVC-model.ckpt-500.pt 

The converted wav is saved in the 'converted' directory.

Training and inference:

  • Step1. Data preparation & preprocessing
  1. Put VCTK corpus under directory: 'Dataset/'

  2. Training/testing speaker split & feature (mel+lf0) extraction (a minimal feature-extraction sketch is given after this list):

     python preprocess.py
    
  • Step2. Model training:
  1. Training with mutual information minimization (MIM):

     python train.py use_CSMI=True use_CPMI=True use_PSMI=True
    
  2. Training without MIM:

     python train.py use_CSMI=False use_CPMI=False use_PSMI=False 
    
  • Step3. Model testing:
  1. Put PWG vocoder under directory: 'vocoder/'

  2. Inference with model trained with MIM:

     python convert.py checkpoint=checkpoints/useCSMITrue_useCPMITrue_usePSMITrue_useAmpTrue/model.ckpt-500.pt
    
  3. Inference with model trained without MIM:

     python convert.py checkpoint=checkpoints/useCSMIFalse_useCPMIFalse_usePSMIFalse_useAmpTrue/model.ckpt-500.pt
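
As referenced in Step 1, the sketch below illustrates the kind of mel + lf0 extraction the preprocessing performs. It is a minimal example assuming 16 kHz audio and common frame settings, not the exact parameters of preprocess.py (which is based on espnet and Pyworld, see Acknowledgements):

    import numpy as np
    import librosa
    import pyworld

    def extract_mel_lf0(wav_path, sr=16000, n_fft=1024, hop=200, n_mels=80):
        """Minimal mel-spectrogram + log-F0 extraction sketch (frame settings are assumptions)."""
        wav, _ = librosa.load(wav_path, sr=sr)
        # log-mel spectrogram: [frames, n_mels]
        mel = librosa.feature.melspectrogram(y=wav, sr=sr, n_fft=n_fft, hop_length=hop, n_mels=n_mels)
        logmel = np.log(np.maximum(mel, 1e-10)).T
        # F0 via WORLD (dio + stonemask), then log-F0 with unvoiced frames kept at 0
        wav64 = wav.astype(np.float64)
        f0, t = pyworld.dio(wav64, sr, frame_period=hop / sr * 1000)
        f0 = pyworld.stonemask(wav64, f0, t, sr)
        lf0 = np.zeros_like(f0)
        lf0[f0 > 0] = np.log(f0[f0 > 0])
        return logmel, lf0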
    

Citation

If the code is used in your research, please Star our repo and cite our paper:

@inproceedings{wang21n_interspeech,
  author={Disong Wang and Liqun Deng and Yu Ting Yeung and Xiao Chen and Xunying Liu and Helen Meng},
  title={{VQMIVC: Vector Quantization and Mutual Information-Based Unsupervised Speech Representation Disentanglement for One-Shot Voice Conversion}},
  year=2021,
  booktitle={Proc. Interspeech 2021},
  pages={1344--1348},
  doi={10.21437/Interspeech.2021-283}
}

Acknowledgements:

  • The content encoder is borrowed from VectorQuantizedCPC, which also inspires the within-utterance negative sampling for CPC;
  • The speaker encoder is borrowed from AdaIN-VC;
  • The decoder is modified from AutoVC;
  • Estimation of mutual information is modified from CLUB;
  • Speech feature extraction is based on espnet and Pyworld.
Comments
  • The issue of vocoder in the inference process


    Hi,

    Thank you for sharing your work.

    Now I meet an issue with inference, as shown below:

    Traceback (most recent call last):
      File "convert.py", line 201, in <module>
        convert(config)
      File "convert.py", line 194, in convert
        subprocess.call(cmd)
      File "/home/tts/xxxx/softWare/miniConda/miniconda3/envs/ft_tts/lib/python3.6/subprocess.py", line 287, in call
        with Popen(*popenargs, **kwargs) as p:
      File "/home/tts/xxxx/softWare/miniConda/miniconda3/envs/ft_tts/lib/python3.6/subprocess.py", line 729, in __init__
        restore_signals, start_new_session)
      File "/home/tts/xxxx/softWare/miniConda/miniconda3/envs/ft_tts/lib/python3.6/subprocess.py", line 1364, in _execute_child
        raise child_exception_type(errno_num, err_msg, err_filename)
    PermissionError: [Errno 13] Permission denied: 'parallel-wavegan-decode'

    What can I do to solve this problem? The pre-trained vocoder has already been put in the vocoder directory:

    (tts) [[email protected] VQMIVC]$ ll vocoder/
    total 4
    lrwxrwxrwx 1 xxxx xxxx 53 Jun 25 10:50 checkpoint-3000000steps.pkl -> ../pretrain_model/vocoder/checkpoint-3000000steps.pkl
    lrwxrwxrwx 1 xxxx xxxx 36 Jun 25 10:50 config.yml -> ../pretrain_model/vocoder/config.yml
    -rw-r--r-- 1 xxxx xxxx 39 Jun 24 17:53 README.md
    lrwxrwxrwx 1 xxxx xxxx 34 Jun 25 10:50 stats.h5 -> ../pretrain_model/vocoder/stats.h5

    opened by TaoTaoFu 14
  • How to solve this problem?


    Dear Dr. Wang: When I run convert.py, I meet this problem and cannot solve it. Can you give me some suggestions? Thank you very much! Error:

    Traceback (most recent call last):
      File "convert.py", line 168, in convert
        '--feats-scp', f'{str(out_dir)}/feats.1.scp', '--outdir', str(out_dir)])
      File "/home/liyp/anaconda3/envs/xll/lib/python3.6/subprocess.py", line 287, in call
        with Popen(*popenargs, **kwargs) as p:
      File "/home/liyp/anaconda3/envs/xll/lib/python3.6/subprocess.py", line 729, in __init__
        restore_signals, start_new_session)
      File "/home/liyp/anaconda3/envs/xll/lib/python3.6/subprocess.py", line 1364, in _execute_child
        raise child_exception_type(errno_num, err_msg, err_filename)
    FileNotFoundError: [Errno 2] No such file or directory: 'parallel-wavegan-decode': 'parallel-wavegan-decode'
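
    Both this error and the PermissionError in the issue above point at the parallel-wavegan-decode console script installed by the ParallelWaveGAN package; a quick diagnostic sketch to check whether it is visible from the active environment:

        import shutil

        # None means the console script is not installed (or not on PATH) in this
        # environment, so subprocess cannot launch it.
        print(shutil.which("parallel-wavegan-decode"))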

    opened by Hu-chengyang 9
  • Preprocess issue


    After downloading the VCTK corpus and copying the files under 'Dataset/' (and creating a directory 'Dataset/VCTK-Corpus/' that includes the file speaker-info.txt), I run preprocess.py and get the following result. How can I fix this?

    (voice-clone) C:\Python\VQMIVC>python preprocess.py
    all_spks: ['257', '294', '304', '297', '226', '282', '247', '330', '361', '252', '293', '306', '340', '231', '268', '283', '243', '334', '315', '269', '285', '310', '230', '311', '374', '307', '286', '323', '245', '227', '239', '240', '363', '284', '251', '318', '246', '265', '244', '228', '333', '276', '255', '225', '308', '260', '339', '312', '336', '347', '345', '258', '335', '270', '376', '237', '316', '326', '364', '273', '263', '259', '267', '292', '232', '229', '254', '264', '287', '278', '236', '317', '272', '233', '234', '248', '249', '305', '299', '281', '302', '329', '262', '351', '288', '298', '250', '343', '256', '300', '275', '341', '279', '277', '271', '241', '303', '274', '313', '266', '301', '253', '261', '314', '295', '360', '362', '238']
    len(spk_wavs): 0
    len(spk_wavs): 0
    len(spk_wavs): 0
    ...
    len(spk_wavs): 0
    len(spk_wavs): 0
    len(spk_wavs): 0
    0 0 0
    extract log-mel...
    0it [00:00, ?it/s]
    normalize log-mel...
    Traceback (most recent call last):
      File "preprocess.py", line 141, in <module>
        mels = np.concatenate(mels, 0)
      File "<__array_function__ internals>", line 6, in concatenate
    ValueError: need at least one array to concatenate
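
    len(spk_wavs): 0 for every speaker means the preprocessing glob matched no audio files; another issue below notes that preprocess.py appears to expect the silence-trimmed *.flac release of VCTK. A hedged sanity check (paths and extensions are assumptions; adapt them to the actual glob in preprocess.py):

        from glob import glob

        # Count candidate audio files under the dataset root; zero matches for the
        # extension preprocess.py expects would explain the empty spk_wavs lists.
        print(len(glob('Dataset/**/*.flac', recursive=True)))
        print(len(glob('Dataset/**/*.wav', recursive=True)))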

    opened by Chuk101 8
  • Question About Batch Size, number of Epochs and Learning Rate


    Hi @Wendison, I have already trained some models (with VCTK subsets and external speakers) and noticed that a bigger batch size doesn't necessarily result in better audio quality for the same 500 epochs; in some cases, audio quality can even be worse (for male references). My question is:

    Do you have any report or experiments with different batch sizes, numbers of epochs (why 500 and not 600 or more), and learning rates for different batch sizes?

    If not, what advice could you provide regarding the batch size and the number of epochs? Is bigger always better?

    For complex data like this there should be an improvement with bigger batches, but the learning rate or number of epochs would need to be tuned.

    Thank You.

    opened by jlmarrugom 6
  • A question about the model


    I tried to train the model again after I finished the process. I used the model I trained myself for voice conversion, but I got nothing. Could you give me some advice? I have done everything following the README.

    opened by Mike66666 4
  • Add Docker environment & web demo


    Hey @Wendison! 👋

    I really liked your implementation and it works very well with any kind of voice! Really funny :)

    This pull request makes it possible to run your model inside a Docker environment, which makes it easier for other people to run it. We're using an open source tool called Cog to make this process easier.

    This also means we can make a web page where other people can try out your model! View it here: https://replicate.ai/wendison/vqmivc

    Claim your page here so you can edit it, and we'll feature it on our website and tweet about it too.

    In case you're wondering who I am, I'm from Replicate, where we're trying to make machine learning reproducible. We got frustrated that we couldn't run all the really interesting ML work being done. So, we're going round implementing models we like. 😊

    opened by ericguizzo 4
  • The CPCLoss


    I read the related papers, but I still do not understand the CPC loss computation.

        labels = torch.zeros(
            self.n_speakers_per_batch * self.n_utterances_per_speaker, length,
            dtype=torch.long, device=z.device
        )
    
        loss = F.cross_entropy(f, labels)
    

    Can someone explain it for me? Why are all-zero labels and cross_entropy used here?
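
    A hedged reading of this snippet: InfoNCE-style CPC losses typically arrange the score tensor so that the positive (true future frame) sits at candidate index 0 and the remaining entries are negatives; cross-entropy against all-zero labels then simply pushes the positive score above the negatives. A self-contained sketch with made-up shapes (not the repo's actual tensors):

        import torch
        import torch.nn.functional as F

        batch, n_candidates, length = 4, 17, 128            # 1 positive + 16 negatives per step (assumed numbers)
        scores = torch.randn(batch, n_candidates, length)   # prediction-vs-candidate similarity scores

        # With the positive always at candidate index 0, the InfoNCE objective reduces
        # to cross-entropy with all-zero targets over the candidate dimension.
        labels = torch.zeros(batch, length, dtype=torch.long)
        loss = F.cross_entropy(scores, labels)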

    opened by Liujingxiu23 4
  • NameError: name 'amp' is not defined


    I am getting the error below.

    File "train.py", line 407, in train_model optimizer, optimizer_cs_mi_net, optimizer_ps_mi_net, optimizer_cp_mi_net, scheduler, amp, epoch, checkpoint_dir, cfg) NameError: name 'amp' is not defined

    opened by geni120 3
  • Improper converted audio when source = reference


    Hi, I tried using python convert_example.py -s test_wavs/jane3.wav -r test_wavs/jane3.wav -c converted -m checkpoints/useCSMITrue_useCPMITrue_usePSMITrue_useAmpTrue/VQMIVC-model.ckpt-500.pt to check how the results look when the source audio and reference audio are the same. But the output is mostly silent. Am I missing something? To reproduce the results, the audio files and vocoder are uploaded here:

    Source and reference: https://drive.google.com/file/d/1bPAQ9UaKJF1gNNCtkeDmySxLv_uXW1HN/view?usp=sharing Converted: https://drive.google.com/file/d/1TmxjpHx3WY3nKRwy5lz04LWfKAo69qwW/view?usp=sharing CC: @Wendison

    opened by vishalbhavani 3
  • What is the "parallel-wavegan-decode" in cmd = ['parallel-wavegan-decode', '--checkpoint', ...]? Is it a folder?

    What is the "parallel-wavegan-decode" in cmd = ['parallel-wavegan-decode', '--checkpoint',...] ,it is a folder???

    Thanks for your code, but I have a problem. In the code cmd = ['parallel-wavegan-decode', '--checkpoint', ...], is 'parallel-wavegan-decode' a folder? If so, what does this folder contain? My system told me it couldn't be found.

    opened by DIO385 2
  • Where can I get the silence-trimmed VCTK corpus?


    Hi,

    Thank you for sharing your code! I wonder where I can get the silence-trimmed VCTK corpus? The VCTK dataset I have only contains *.wav files, while your preprocess.py script seems to expect all audio files in *.flac format, so I cannot run the script.

    opened by Aria-K-Alethia 2
  • Voice conversion does not happen after fine-tuning with the pre-trained model


    Hi @Wendison

    Thank you so much for this great work.

    I fine-tuned (resumed) the pre-trained model (use_CSMI=True use_CPMI=True use_PSMI=True) with the IndicTTS dataset (20 speakers, each having 1 hour of audio).

    The model was trained for 1000 epochs.

    Quality gets better for the target speaker, but the source speaker's modulation is not converted.

    Can you please give your suggestions?

    Thanks

    opened by MuruganR96 0
  • Training for Indian Multi-Speaker/Multi-lingual VC


    Hi @Wendison, thank you so much for your excellent work. Very nice paper.

    Seeing your replies on the issues below motivated me to go further.

    https://github.com/Wendison/VQMIVC/issues/14#issuecomment-937900528

    https://github.com/Wendison/VQMIVC/issues/17#issuecomment-971136691

    I am trying multi-speaker training with Common Voice Indian English and VCTK. I need a few suggestions from you.

    Steps:

    1. I add Common Voice Indian English multi-speaker data (40 speakers, each having 30 minutes of audio) along with the 109 VCTK speakers, and start training with use_CSMI=True use_CPMI=True use_PSMI=True.

    2. After the model is trained with good accuracy, I will fine-tune it with other Indian regional languages from Common Voice (Tamil, Hindi, Urdu, etc.).

    Is this approach good?

    @Wendison, I kindly request your suggestions. Thanks.

    opened by MuruganR96 0
  • What do z_dim and c_dim stand for?


    Dear Dr. Wang: Could you tell me what z_dim: 64 and c_dim: 256 in config/model/default stand for? And what does n_embeddings: 512 in config/model/default stand for? Thank you very much.

    opened by Hu-chengyang 4
  • Training Loss Abnormal


    @andreasjansson @Wendison Hello, sorry to interrupt you! I'm a rookie with voice models. I trained the model on the VCTK-Corpus-0.92.zip dataset with "python3 train.py use_CSMI=True use_CPMI=True use_PSMI=True" on an NVIDIA V100S. But after 65 epochs, the training losses are as follows (training-loss image attached). Could you give me some advice? Thank you very much!

    opened by Haoyanlong 3
  • lf0 question about convert phase


    Hi, I wonder why you normalize the f0 series before feeding it to the f0 encoder in convert.py, while this kind of f0 normalization isn't used in the preprocessing phase.

    opened by powei-C 3
  • How to solve this problem?


    Dear Dr. Wang: I am trying to train a vocoder. I have installed ParallelWaveGAN and run the command run.sh, but it fails with this traceback:

    Traceback (most recent call last):
      File "/home/liyp/anaconda3/envs/xll/bin/parallel-wavegan-preprocess", line 11, in <module>
        load_entry_point('parallel-wavegan', 'console_scripts', 'parallel-wavegan-preprocess')()
      File "/data2/hcy/VQMIVC-main/vocoder/ParallelWaveGAN/parallel_wavegan/bin/preprocess.py", line 186, in main
        ), f"{utt_id} seems to have a different sampling rate."

    I find that the sampling rate is 24000 Hz, while the sampling rate of VQMIVC is 16000 Hz. Could you tell me how to modify the sampling rate?

    opened by Hu-chengyang 3
Owner

Disong Wang, PhD student @ CUHK, focusing on voice conversion & speech synthesis.