# LibriSpeech-Finetuning for VALL-E
Included is a dataset I've prepared for training with [my fork of a VALL-E implementation](https://git.ecker.tech/mrq/vall-e), sourced from [LibriSpeech-Finetuning](https://dl.fbaipublicfiles.com/librilight/data/librispeech_finetuning.tgz).
> What makes this different?

I've trimmed the source clips down to train against them more effectively, since overly long pieces of audio drastically increase VRAM use:
* I re-transcribed the audio using [m-bain/WhisperX](https://github.com/m-bain/whisperX/)'s `large-v2` model with its VAD filter, to get near-perfect segment timestamps.
* I then bias each segment's start by -0.05 seconds and its end by +0.05 seconds.
* Very short segments are merged with the preceding one to avoid fragmenting the audio too much (a rough sketch of these three steps follows below).
* The source audio is then sliced according to each segment, and each segment's text is phonemized using [bootphon/phonemizer](https://github.com/bootphon/phonemizer/) with the espeak backend (see the second sketch below).
* Finally, each audio slice is quantized with EnCodec into the codes VALL-E consumes (see the last sketch below).
This helps keep the default `max_phoneme` length from silently dropping a large chunk of the dataset, and distributes segment lengths relatively evenly.
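
For reference, here's a minimal sketch of the transcription, biasing, and merging steps. The exact WhisperX API varies between releases, and the `MIN_DURATION` threshold and file paths here are illustrative assumptions rather than the exact settings used:

```python
import whisperx  # https://github.com/m-bain/whisperX

BIAS = 0.05          # seconds to widen each segment by, per the list above
MIN_DURATION = 1.0   # hypothetical threshold below which a segment gets merged

# load the large-v2 model; WhisperX applies its VAD filter during transcription
model = whisperx.load_model("large-v2", device="cuda", compute_type="float16")
audio = whisperx.load_audio("speaker/utterance.flac")
result = model.transcribe(audio)

# bias each segment's start back and its end forward by 0.05 seconds
segments = [
    {
        "start": max(seg["start"] - BIAS, 0.0),
        "end": seg["end"] + BIAS,
        "text": seg["text"].strip(),
    }
    for seg in result["segments"]
]

# merge very short segments into the preceding one
merged = []
for seg in segments:
    if merged and seg["end"] - seg["start"] < MIN_DURATION:
        merged[-1]["end"] = seg["end"]
        merged[-1]["text"] += " " + seg["text"]
    else:
        merged.append(seg)
```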
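
Slicing and phonemizing then look roughly like this, continuing from the `merged` list above. The output filenames are placeholders; the point is the `phonemize` call with the espeak backend:

```python
import soundfile as sf
from phonemizer import phonemize

# load the full source utterance (path is a placeholder)
wav, sample_rate = sf.read("speaker/utterance.flac")

for i, seg in enumerate(merged):
    # slice the waveform to the (biased) segment boundaries
    sliced = wav[int(seg["start"] * sample_rate):int(seg["end"] * sample_rate)]
    sf.write(f"slices/utterance_{i:04d}.wav", sliced, sample_rate)

    # phonemize the segment's transcription with the espeak backend
    phones = phonemize(
        seg["text"],
        language="en-us",
        backend="espeak",
        strip=True,
        preserve_punctuation=True,
    )
    with open(f"slices/utterance_{i:04d}.phn.txt", "w") as f:
        f.write(phones)
```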
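
Finally, quantization follows the documented usage from [facebookresearch/encodec](https://github.com/facebookresearch/encodec). Whether the fork uses the 24 kHz model, this exact bandwidth, and this output filename is an assumption on this sketch's part:

```python
import torch
import torchaudio
from encodec import EncodecModel
from encodec.utils import convert_audio

# 24 kHz EnCodec model (assumed here to match the VALL-E fork's setup)
model = EncodecModel.encodec_model_24khz()
model.set_target_bandwidth(6.0)

# load a slice and resample/remix it to what the model expects
wav, sr = torchaudio.load("slices/utterance_0000.wav")
wav = convert_audio(wav, sr, model.sample_rate, model.channels)

with torch.no_grad():
    encoded_frames = model.encode(wav.unsqueeze(0))

# concatenate the codebook indices: shape (batch, n_quantizers, time)
codes = torch.cat([frame[0] for frame in encoded_frames], dim=-1)
torch.save(codes, "slices/utterance_0000.qnt.pt")
```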