F5-TTS-ONNX
Run F5-TTS using ONNX Runtime for efficient and flexible text-to-speech processing.
Updates
- 2024/12/22 Update: The code has been updated to support the latest version of SWivid/F5-TTS, enabling successful export to ONNX format. If you encountered errors with previous versions, please download the latest code and try again.
- The latest version accepts audio in int16 format (short) and also outputs in int16 format. The previous version supported the float format, but it is no longer supported in the current Inference.py.
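Because Inference.py now works in int16 end to end, float audio must be converted before it is passed in and the int16 output converted back for float-based pipelines. The sketch below is illustrative only (the function names are ours, not part of this repo) and assumes the usual 16-bit PCM scaling:

```python
import numpy as np

def float_to_int16(audio: np.ndarray) -> np.ndarray:
    """Convert float audio in [-1.0, 1.0] to 16-bit PCM (short)."""
    audio = np.clip(audio, -1.0, 1.0)
    return (audio * 32767.0).astype(np.int16)

def int16_to_float(audio: np.ndarray) -> np.ndarray:
    """Convert 16-bit PCM back to float32 in [-1.0, 1.0]."""
    return audio.astype(np.float32) / 32767.0
```

The round trip is lossy only at the 16-bit quantization level, which is inaudible for typical TTS output.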
Features
- AMD GPU + Windows OS:
  - Easy solution using ONNX-DirectML for AMD GPUs on Windows.
  - Install ONNX Runtime DirectML: `pip install onnxruntime-directml --upgrade`
- Simple GUI Version:
  - Try the easy-to-use GUI version: F5-TTS-ONNX GUI
- NVIDIA TensorRT Support:
  - For NVIDIA GPU optimization with TensorRT, visit: F5-TTS-TRT
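Once onnxruntime-directml is installed, a session is created by requesting the DirectML execution provider. The helper below is a minimal sketch (the `pick_providers` name and `model.onnx` path are illustrative, not from this repo); it takes a list shaped like the one returned by `onnxruntime.get_available_providers()` and falls back to CPU when DirectML is absent:

```python
def pick_providers(available):
    """Prefer the DirectML execution provider when present,
    falling back to the CPU provider otherwise.

    `available` is expected to look like the list returned by
    onnxruntime.get_available_providers().
    """
    preferred = ["DmlExecutionProvider", "CPUExecutionProvider"]
    return [p for p in preferred if p in available]

# Typical usage (assuming onnxruntime-directml is installed):
#   import onnxruntime as ort
#   session = ort.InferenceSession(
#       "model.onnx",
#       providers=pick_providers(ort.get_available_providers()),
#   )
```

Listing the CPU provider after DirectML lets ONNX Runtime fall back per-operator when a node is not supported on the GPU.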
Learn More
- Explore more related projects and resources: Project Overview