A (Heavily Documented) TensorFlow Implementation of Tacotron: A Fully End-to-End Text-To-Speech Synthesis Model

Requirements

Data

<img src="https://upload.wikimedia.org/wikipedia/commons/7/72/World_English_Bible_Cover.jpg" height="200" align="right"> <img src="https://upload.wikimedia.org/wikipedia/commons/thumb/f/f6/Nick_Offerman_at_UMBC_%28cropped%29.jpg/440px-Nick_Offerman_at_UMBC_%28cropped%29.jpg" height="200" align="right"> <img src="https://image.shutterstock.com/z/stock-vector-lj-letters-four-colors-in-abstract-background-logo-design-identity-in-circle-alphabet-letter-418687846.jpg" height="200" align="right">

We train the model on three different speech datasets.

  1. LJ Speech Dataset
  2. Nick Offerman's Audiobooks
  3. The World English Bible

The LJ Speech Dataset has recently become a widely used benchmark for the TTS task because it is publicly available. It contains 24 hours of reasonable-quality samples. Nick's audiobooks are additionally used to see whether the model can learn even from less, and more variable, speech data. They total 18 hours. The World English Bible is a public domain update of the American Standard Version of 1901 into modern English. Its original audio recordings are freely available here. Kyubyong manually split each chapter by verse and aligned the segmented audio clips to the text. They total 72 hours. You can download all of them from Kaggle Datasets.
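All three corpora boil down to (audio clip, transcript) pairs. As a minimal sketch of how such pairs can be loaded, the snippet below parses an LJ Speech-style `metadata.csv` (pipe-delimited: clip id, raw transcription, normalized transcription); the function name is our own, not part of this repo.

```python
import csv
import os


def load_lj_metadata(data_dir):
    """Parse an LJ Speech-style metadata.csv into (wav_path, text) pairs.

    LJ Speech ships a pipe-delimited metadata.csv whose columns are
    clip id, raw transcription, and normalized transcription; the wav
    files live in a sibling "wavs" directory named <clip id>.wav.
    """
    pairs = []
    path = os.path.join(data_dir, "metadata.csv")
    with open(path, encoding="utf-8") as f:
        # QUOTE_NONE: transcripts may contain quote characters
        for row in csv.reader(f, delimiter="|", quoting=csv.QUOTE_NONE):
            clip_id, _raw, normalized = row[0], row[1], row[2]
            wav_path = os.path.join(data_dir, "wavs", clip_id + ".wav")
            pairs.append((wav_path, normalized))
    return pairs
```

The normalized transcription (numbers and abbreviations spelled out) is usually the one fed to a character-level model like Tacotron.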

Training

Sample Synthesis

We generate speech samples from the Harvard Sentences, as in the original paper. The sentence list is already included in the repo.
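At synthesis time the Harvard Sentences file just needs to be read into a list of plain strings to feed the model. A minimal sketch, assuming one sentence per line with an optional leading "1."-style index (the file name and helper are hypothetical, not this repo's exact code):

```python
import re


def load_harvard_sentences(path):
    """Read a Harvard Sentences text file into a list of sentences.

    Assumes one sentence per line; a leading numeric index such as
    "1. " is stripped, and blank lines are skipped.
    """
    sentences = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            # drop a leading "1." / "12." style index, if present
            line = re.sub(r"^\d+\.\s*", "", line)
            sentences.append(line)
    return sentences
```

Each returned string can then be converted to character indices and run through the trained model one sentence at a time.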

Training Curve

<img src="fig/training_curve.png">

Attention Plot

<img src="fig/attention.gif">

Generated Samples

Pretrained Files

Notes

Differences from the original paper

Papers that referenced this repo

Jan. 2018, Kyubyong Park & Tommy Mulc