SpeechT5

Unified-modal speech-text pre-training for spoken language processing:

SpeechT5 (ACL 2022): SpeechT5: Unified-Modal Encoder-Decoder Pre-training for Spoken Language Processing

Speech2C (INTERSPEECH 2022): Pre-Training Transformer Decoder for End-to-End ASR Model with Unpaired Speech Data

YiTrans (IWSLT 2022): The YiTrans End-to-End Speech Translation System for IWSLT 2022 Offline Shared Task

SpeechUT (EMNLP 2022): SpeechUT: Bridging Speech and Text with Hidden-Unit for Encoder-Decoder Based Speech-Text Pre-training

SpeechLM (IEEE/ACM TASLP): SpeechLM: Enhanced Speech Pre-Training with Unpaired Textual Data

Speech2S (ICASSP 2023): Joint Pre-Training with Speech and Bilingual Text for Direct Speech to Speech Translation

Prosody-SpeechT5 (ICASSP 2023): Prosody-aware SpeechT5 for Expressive Neural TTS

VATLM (IEEE Transactions on Multimedia): VATLM: Visual-Audio-Text Pre-Training with Unified Masked Prediction for Speech Representation Learning

VALL-E X (arXiv 2023): Speak Foreign Languages with Your Own Voice: Cross-Lingual Neural Codec Language Modeling

VioLA (arXiv 2023): VioLA: Unified Codec Language Models for Speech Recognition, Synthesis, and Translation

WavLLM (arXiv 2024): WavLLM: Towards Robust and Adaptive Speech Large Language Model

<!-- Model introductions, evaluation results, and model inference instructions are located in the corresponding folders. The source code is [https://github.com/microsoft/SpeechT5/tree/main/ModelName]. -->

Pre-Trained Models

| Model | Pre-training Dataset | Fine-tuning Dataset | Checkpoint |
| --- | --- | --- | --- |
| SpeechT5 Base | 960 hrs LibriSpeech + LibriSpeech LM Dataset | - | HuggingFace<br /> Google Drive |
| SpeechT5 Base | 960 hrs LibriSpeech + LibriSpeech LM Dataset | 100 hrs LibriSpeech | HuggingFace<br /> Google Drive |
| SpeechT5 Large | 60k hrs Libri-Light + LibriSpeech LM Dataset | - | Google Drive |
| Speech2C | 960 hrs LibriSpeech | - | Google Drive |
| Speech2C | 960 hrs LibriSpeech | 10 hrs LibriSpeech | Google Drive |
| Speech2C | 960 hrs LibriSpeech | 100 hrs LibriSpeech | Google Drive |
| SpeechLM-P Base | 960 hrs LibriSpeech + 40M Text | - | Google Drive |
| SpeechLM-P Base | 960 hrs LibriSpeech + 40M Text | 100 hrs LibriSpeech | Google Drive |
| SpeechLM-H Base | 960 hrs LibriSpeech + 40M Text | - | Google Drive |
| SpeechLM-H Base | 960 hrs LibriSpeech + 40M Text | 100 hrs LibriSpeech | Google Drive |
| SpeechLM-P Base | 960 hrs LibriSpeech + 40M Text | En-De CoVoST-2 | Azure Storage |
| SpeechLM-P Base | 960 hrs LibriSpeech + 40M Text | En-Ca CoVoST-2 | Azure Storage |
| SpeechLM-P Base | 960 hrs LibriSpeech + 40M Text | En-Ar CoVoST-2 | Azure Storage |
| SpeechLM-P Base | 960 hrs LibriSpeech + 40M Text | En-Tr CoVoST-2 | Azure Storage |
| SpeechLM-P Large | 60k hrs Libri-Light + 40M Text | - | Google Drive |
| SpeechLM-P Large | 60k hrs Libri-Light + 40M Text | 960 hrs LibriSpeech | Google Drive |
| SpeechLM-P Large | 60k hrs Libri-Light + 40M Text | En-De CoVoST-2 | Google Drive |
| SpeechLM-P Large | 60k hrs Libri-Light + 40M Text | En-Ca CoVoST-2 | Google Drive |
| SpeechLM-P Large | 60k hrs Libri-Light + 40M Text | En-Ar CoVoST-2 | Google Drive |
| SpeechLM-P Large | 60k hrs Libri-Light + 40M Text | En-Tr CoVoST-2 | Google Drive |
| SpeechUT Base (ASR) | 960 hrs LibriSpeech + 40M Text | - | Azure Storage |
| SpeechUT Base (ASR) | 960 hrs LibriSpeech + 40M Text | 100 hrs LibriSpeech | Azure Storage |
| SpeechUT Large (ASR) | 60k hrs Libri-Light + 40M Text | - | Azure Storage |
| SpeechUT Large (ASR) | 60k hrs Libri-Light + 40M Text | 960 hrs LibriSpeech | Azure Storage |
| SpeechUT Base (En-De) | 960 hrs LibriSpeech + 408 hrs MuST-C v1 + 4.6M Text | - | Azure Storage |
| SpeechUT Base (En-De) | 960 hrs LibriSpeech + 408 hrs MuST-C v1 + 4.6M Text | En-De MuST-C v1 | Azure Storage |
| SpeechUT Base (En-Es) | 960 hrs LibriSpeech + 504 hrs MuST-C v1 + 15M Text | - | Azure Storage |
| SpeechUT Base (En-Es) | 960 hrs LibriSpeech + 504 hrs MuST-C v1 + 15M Text | En-Es MuST-C v1 | Azure Storage |
| SpeechUT Base (En-Fr) | 960 hrs LibriSpeech + 492 hrs MuST-C v1 + 40M Text | - | Azure Storage |
| SpeechUT Base (En-Fr) | 960 hrs LibriSpeech + 492 hrs MuST-C v1 + 40M Text | En-Fr MuST-C v1 | Azure Storage |

SpeechT5 Introduction

Motivated by the success of T5 (Text-To-Text Transfer Transformer) in pre-trained natural language processing models, we propose a unified-modal SpeechT5 framework that explores the encoder-decoder pre-training for self-supervised speech/text representation learning. The SpeechT5 framework consists of a shared encoder-decoder network and six modal-specific (speech/text) pre/post-nets. After preprocessing the input speech/text through the pre-nets, the shared encoder-decoder network models the sequence-to-sequence transformation, and then the post-nets generate the output in the speech/text modality based on the output of the decoder.
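The routing described above — modality-specific pre-nets into a shared encoder-decoder, then modality-specific post-nets — can be sketched in plain Python. Everything below (function names, the toy transforms) is illustrative, not the repository's actual API; the shared Transformer is reduced to a placeholder.

```python
# Illustrative sketch of SpeechT5's modality routing (not the repo's real API).
# Pre-nets map each modality into a shared hidden space; a single shared
# encoder-decoder models the sequence transformation; post-nets map the
# decoder output back to the target modality.

def speech_prenet(waveform):          # stands in for the conv feature extractor
    return [x * 0.5 for x in waveform]

def text_prenet(tokens):              # stands in for the embedding lookup
    return [float(t) for t in tokens]

def shared_encoder_decoder(hidden):   # placeholder for the shared Transformer
    return [h + 1.0 for h in hidden]

def speech_postnet(hidden):           # stands in for the mel-spectrogram decoder
    return [round(h, 3) for h in hidden]

def text_postnet(hidden):             # stands in for the vocabulary projection
    return [int(h) for h in hidden]

def speecht5_forward(inputs, in_modality, out_modality):
    prenet = speech_prenet if in_modality == "speech" else text_prenet
    postnet = speech_postnet if out_modality == "speech" else text_postnet
    return postnet(shared_encoder_decoder(prenet(inputs)))

# ASR-style call: speech in, text out
print(speecht5_forward([0.2, 0.4], "speech", "text"))  # → [1, 1]
```

The point of the shared network is that the same parameters serve ASR, TTS, and the other tasks; only the pre/post-nets change per modality pair.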

<img src="SpeechT5/speecht5_framework.png" alt="se" width="1000" />

Leveraging large-scale unlabeled speech and text data, we pre-train SpeechT5 to learn a unified-modal representation, hoping to improve the modeling capability for both speech and text. To align the textual and speech information into this unified semantic space, we propose a cross-modal vector quantization approach that randomly mixes up speech/text states with latent units as the interface between encoder and decoder. Extensive evaluations show the superiority of the proposed SpeechT5 framework on a wide variety of spoken language processing tasks, including automatic speech recognition, speech synthesis, speech translation, voice conversion, speech enhancement, and speaker identification.
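As a toy illustration of the cross-modal quantization idea (not the paper's exact mix-up schedule), the sketch below snaps each encoder state to its nearest codebook unit and randomly substitutes a fraction of states with those shared latent units; all names and values are made up.

```python
import random

# Toy sketch of cross-modal vector quantization: speech and text states are
# snapped to a shared discrete codebook, and a random subset of states is
# replaced by those units so both modalities pass through a common space.

def nearest_unit(state, codebook):
    # Squared Euclidean distance to each codebook unit; return the closest.
    return min(codebook, key=lambda u: sum((s - c) ** 2 for s, c in zip(state, u)))

def mix_with_units(states, codebook, p=0.5, seed=0):
    # Replace each state with its quantized unit with probability p.
    rng = random.Random(seed)
    return [nearest_unit(s, codebook) if rng.random() < p else s for s in states]

codebook = [(0.0, 0.0), (1.0, 1.0)]
states = [(0.1, 0.2), (0.9, 0.8), (0.4, 0.6)]
print(mix_with_units(states, codebook, p=0.5))
```

With `p=1.0` every state is fully quantized; with `p=0.0` the continuous states pass through unchanged, so `p` controls how strongly the two modalities are pulled toward the shared units.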

<!-- Model introductions, evaluation results, and model inference instructions are located in the corresponding folders. The source code is here [https://github.com/microsoft/SpeechT5/tree/main/SpeechT5]. -->

SpeechT5 Downstream Task Performance

We evaluate our models on typical spoken language processing tasks, including automatic speech recognition, text to speech, speech to text translation, voice conversion, speech enhancement, and speaker identification.

Automatic Speech Recognition

Evaluation on LibriSpeech (WER, %)

| Model | LM | dev-clean | dev-other | test-clean | test-other |
| --- | --- | --- | --- | --- | --- |
| wav2vec 2.0 Base | - | 6.1 | 13.5 | 6.1 | 13.3 |
| HuBERT Base | - | 5.5 | 13.1 | 5.8 | 13.3 |
| Baseline (w/o CTC) | - | 5.8 | 12.3 | 6.2 | 12.3 |
| Baseline | - | 4.9 | 11.7 | 5.0 | 11.9 |
| SpeechT5 (w/o CTC) | - | 5.4 | 10.7 | 5.8 | 10.7 |
| SpeechT5 | - | 4.3 | 10.3 | 4.4 | 10.4 |
| DiscreteBERT | 4-gram | 4.0 | 10.9 | 4.5 | 12.1 |
| wav2vec 2.0 Base | 4-gram | 2.7 | 7.9 | 3.4 | 8.0 |
| HuBERT Base | 4-gram | 2.7 | 7.8 | 3.4 | 8.1 |
| wav2vec 2.0 Base | Transf. | 2.2 | 6.3 | 2.6 | 6.3 |
| Baseline | Transf. | 2.3 | 6.3 | 2.5 | 6.3 |
| SpeechT5 | Transf. | 2.1 | 5.5 | 2.4 | 5.8 |
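For reference, the WER figures above are standard word error rates. A minimal implementation via word-level edit distance is shown below; it is illustrative, not the evaluation script used in the paper.

```python
def word_error_rate(reference, hypothesis):
    """WER = (substitutions + deletions + insertions) / reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming (Levenshtein) edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution / match
    return d[len(ref)][len(hyp)] / len(ref)

print(word_error_rate("the cat sat", "the cat sat down"))  # one insertion over 3 words
```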

Text-to-Speech

Evaluation on LibriTTS

| Model | Naturalness | MOS | CMOS |
| --- | --- | --- | --- |
| Ground Truth | - | 3.87 | - |
| Baseline | 2.76 | 3.56 | 0 |
| SpeechT5 | 2.91 | 3.65 | +0.29 |
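For readers unfamiliar with the metrics: MOS is the mean of absolute listener ratings, and CMOS is the mean of paired comparative ratings against a reference system (here the baseline, fixed at 0). The toy scores below are illustrative, not the paper's listening-test data.

```python
# MOS: mean of absolute listener ratings (typically on a 1-5 scale).
def mean_opinion_score(ratings):
    return sum(ratings) / len(ratings)

# CMOS: mean of per-listener preferences for system B over system A
# (e.g. on a -3..+3 scale; positive favors B).
def comparative_mos(paired_scores):
    return sum(paired_scores) / len(paired_scores)

print(mean_opinion_score([4, 3, 4, 5]))  # → 4.0
print(comparative_mos([1, 0, -1, 2]))    # → 0.5
```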

Speech Translation

Evaluation on MuST-C v1 (BLEU)

| Model | EN-DE | EN-FR |
| --- | --- | --- |
| Fairseq ST | 22.70 | 32.90 |
| ESPnet ST | 22.91 | 32.69 |
| Adapter Tuning | 24.63 | 34.98 |
| Baseline | 23.43 | 33.76 |
| SpeechT5 (w/o initializing decoder) | 24.44 | 34.5 |
| SpeechT5 | 25.18 | 35.30 |

Voice Conversion

Evaluation on CMU Arctic

| Model | WER (bdl to slt) | WER (clb to slt) | MCD (bdl to slt) | MCD (clb to slt) |
| --- | --- | --- | --- | --- |
| VTN w/ ASR | 11.1 | 10.9 | 6.5 | 6.11 |
| VTN w/ TTS | 7.6 | 9.1 | 6.33 | 13.3 |
| Many-to-many VTN | - | - | 6.13 | 5.97 |
| Baseline | 21.5 | 10.8 | 6.26 | 6.16 |
| SpeechT5 | 7.8 | 6.4 | 5.93 | 5.87 |
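The MCD columns measure mel-cepstral distortion between converted and target speech (lower is better). A common per-frame formulation, assuming time-aligned mel-cepstra with the 0th (energy) coefficient excluded, is sketched below; the paper's exact evaluation pipeline (alignment, coefficient range) may differ.

```python
import math

# Per-frame mel-cepstral distortion in dB:
#   MCD = (10 / ln 10) * sqrt(2 * sum_d (c_d - c'_d)^2)
# over mel-cepstral coefficients c_1..c_D (c_0 excluded).
def mel_cepstral_distortion(c_ref, c_conv):
    sq = sum((a - b) ** 2 for a, b in zip(c_ref, c_conv))
    return (10.0 / math.log(10)) * math.sqrt(2.0 * sq)

frame_ref = [1.0, 0.5, -0.2]   # toy mel-cepstral coefficients c1..c3
frame_conv = [0.9, 0.4, -0.1]
print(round(mel_cepstral_distortion(frame_ref, frame_conv), 3))
```

In practice the per-frame values are averaged over dynamic-time-warped frame pairs of the converted and target utterances.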

Speech Enhancement

Evaluation on the WSJ0 Hipster Ambient Mixtures (WHAM!) dataset (WER, %)

| Model | WER |
| --- | --- |
| Ground Truth Speech | 3.2 |
| Noisy Speech | 76.1 |
| Baseline | 10.9 |
| SpeechT5 | 8.9 |

Speaker Identification

Evaluation on VoxCeleb1 (accuracy)

| Model | Acc |
| --- | --- |
| SUPERB, wav2vec 2.0 Base | 75.18% |
| SUPERB, HuBERT Base | 81.42% |
| SUPERB, HuBERT Large | 90.33% |
| SpeechNet, single task | 86.00% |
| SpeechNet, multi-task with TTS | 87.90% |
| Thin ResNet-34 | 89.00% |
| Baseline | 91.92% |
| SpeechT5 | 96.49% |

License

This project is licensed under the license found in the LICENSE file in the root directory of this source tree. Portions of the source code are based on the FAIRSEQ and ESPnet projects.

Microsoft Open Source Code of Conduct

Reference

If you find our work useful in your research, please cite the following papers:

@article{Ao2021SpeechT5,
  title={SpeechT5: Unified-Modal Encoder-Decoder Pre-training for Spoken Language Processing},
  author={Junyi Ao and Rui Wang and Long Zhou and Chengyi Wang and Shuo Ren and Yu Wu and Shujie Liu and Tom Ko and Qing Li and Yu Zhang and Zhihua Wei and Yao Qian and Jinyu Li and Furu Wei},
  eprint={2110.07205},
  archivePrefix={arXiv},
  primaryClass={eess.AS},
  year={2021}
}
@article{Ao2022Speech2C,
  title={Pre-Training Transformer Decoder for End-to-End ASR Model with Unpaired Speech Data},
  author={Junyi Ao and Ziqiang Zhang and Long Zhou and Shujie Liu and Haizhou Li and Tom Ko and Lirong Dai and Jinyu Li and Yao Qian and Furu Wei},
  eprint={2203.17113},
  archivePrefix={arXiv},
  primaryClass={cs.SD},
  year={2022}
}
@article{Zhang2022Yitrans,
  title={The YiTrans End-to-End Speech Translation System for IWSLT 2022 Offline Shared Task},
  author={Zhang, Ziqiang and Ao, Junyi and Zhou, Long and Liu, Shujie and Wei, Furu and Li, Jinyu},
  eprint={2206.05777},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  year={2022}
}
@article{zhang2022speechut,
  title={SpeechUT: Bridging Speech and Text with Hidden-Unit for Encoder-Decoder Based Speech-Text Pre-training},
  author={Zhang, Ziqiang and Zhou, Long and Ao, Junyi and Liu, Shujie and Dai, Lirong and Li, Jinyu and Wei, Furu},
  eprint={2210.03730},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  year={2022}
}
@article{zhang2022speechlm,
  title={SpeechLM: Enhanced Speech Pre-Training with Unpaired Textual Data},
  author={Zhang, Ziqiang and Chen, Sanyuan and Zhou, Long and Wu, Yu and Ren, Shuo and Liu, Shujie and Yao, Zhuoyuan and Gong, Xun and Dai, Lirong and Li, Jinyu and Wei, Furu},
  eprint={2209.15329},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  year={2022}
}

Contact Information

For help or issues using SpeechT5 models, please submit a GitHub issue.

For other communications related to SpeechT5, please contact Long Zhou (lozhou@microsoft.com).