Faster Whisper Server

faster-whisper-server is an OpenAI API-compatible transcription server that uses faster-whisper as its backend. Features:

- OpenAI API compatibility (transcriptions and translations endpoints)
- GPU and CPU support, with a separate Docker image for each
- Deployable via Docker Compose, Docker, or Kubernetes
- Configurable through environment variables (e.g. WHISPER__MODEL)
- Streaming support (partial transcriptions are returned as the audio is processed)
- Live transcription over WebSocket

Please create an issue if you find a bug, have a question, or have a feature suggestion.

OpenAI API Compatibility

See OpenAI API reference for more information.
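
Any OpenAI SDK client pointed at the server should work for the supported endpoints. A minimal sketch, assuming the server also exposes an OpenAI-style /v1/models listing endpoint:

from openai import OpenAI

client = OpenAI(api_key="cant-be-empty", base_url="http://localhost:8000/v1/")

# print every model the server reports (assumes a /v1/models endpoint is available)
for model in client.models.list():
    print(model.id)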

Quick Start

Hugging Face Space

You can try the server out in your browser via the Hugging Face Space.

Using Docker Compose (Recommended)

NOTE: I'm using newer Docker Compose features. If you are on an older version of Docker Compose, you may need to update (check with `docker compose version`).

# download the base compose file (referenced by the GPU and CPU variants below)
curl --silent --remote-name https://raw.githubusercontent.com/fedirz/faster-whisper-server/master/compose.yaml

# for GPU support
curl --silent --remote-name https://raw.githubusercontent.com/fedirz/faster-whisper-server/master/compose.cuda.yaml
docker compose --file compose.cuda.yaml up --detach
# for CPU only (use this if you don't have a GPU, as the image is much smaller)
curl --silent --remote-name https://raw.githubusercontent.com/fedirz/faster-whisper-server/master/compose.cpu.yaml
docker compose --file compose.cpu.yaml up --detach

Using Docker

# for GPU support
docker run --gpus=all --publish 8000:8000 --volume ~/.cache/huggingface:/root/.cache/huggingface --detach fedirz/faster-whisper-server:latest-cuda
# for CPU only (use this if you don't have a GPU, as the image is much smaller)
docker run --publish 8000:8000 --volume ~/.cache/huggingface:/root/.cache/huggingface --env WHISPER__MODEL=Systran/faster-whisper-small --detach fedirz/faster-whisper-server:latest-cpu

Using Kubernetes

Follow this tutorial.
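
Whichever deployment method you use, you can check that the server is ready before sending audio. A minimal sketch, assuming the server listens on localhost:8000 and serves /v1/models (downloading the model on first startup can take a while):

import time
import urllib.error
import urllib.request

# poll until the server responds, giving up after roughly 60 seconds
for _ in range(60):
    try:
        urllib.request.urlopen("http://localhost:8000/v1/models", timeout=2)
        print("server is up")
        break
    except (urllib.error.URLError, OSError):
        time.sleep(1)
else:
    print("server did not come up")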

Usage

If you are looking for a step-by-step walkthrough, check out this YouTube video.

OpenAI API CLI

export OPENAI_API_KEY="cant-be-empty"
export OPENAI_BASE_URL=http://localhost:8000/v1/
openai api audio.transcriptions.create -m Systran/faster-distil-whisper-large-v3 -f audio.wav --response-format text

openai api audio.translations.create -m Systran/faster-distil-whisper-large-v3 -f audio.wav --response-format verbose_json

OpenAI API Python SDK

from openai import OpenAI

client = OpenAI(api_key="cant-be-empty", base_url="http://localhost:8000/v1/")

# use a context manager so the file handle is closed after the request
with open("audio.wav", "rb") as audio_file:
    transcript = client.audio.transcriptions.create(
        model="Systran/faster-distil-whisper-large-v3", file=audio_file
    )
print(transcript.text)
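
Translations work the same way through the SDK, and you can pass language to a transcription request to skip language detection. A minimal sketch reusing the client from above:

# translate speech in any supported language to English
with open("audio.wav", "rb") as audio_file:
    translation = client.audio.translations.create(
        model="Systran/faster-distil-whisper-large-v3", file=audio_file
    )
print(translation.text)

# transcribe with the language set explicitly
with open("audio.wav", "rb") as audio_file:
    transcript = client.audio.transcriptions.create(
        model="Systran/faster-distil-whisper-large-v3",
        file=audio_file,
        language="en",  # skipping language detection reduces transcription time
    )
print(transcript.text)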

cURL

# If `model` isn't specified, the default model is used
curl http://localhost:8000/v1/audio/transcriptions -F "file=@audio.wav"
curl http://localhost:8000/v1/audio/transcriptions -F "file=@audio.mp3"
# partial results can be streamed back as they're produced (see the Python sketch after this section)
curl http://localhost:8000/v1/audio/transcriptions -F "file=@audio.wav" -F "stream=true"
curl http://localhost:8000/v1/audio/transcriptions -F "file=@audio.wav" -F "model=Systran/faster-distil-whisper-large-v3"
# It's recommended to always specify the language, as skipping language detection reduces the transcription time
curl http://localhost:8000/v1/audio/transcriptions -F "file=@audio.wav" -F "language=en"

curl http://localhost:8000/v1/audio/translations -F "file=@audio.wav"
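
With stream=true the server returns partial results as the audio is processed instead of a single final response. A minimal sketch for consuming the stream from Python, assuming server-sent-event framing (data: ... lines); requires `pip install requests`:

import requests

with open("audio.wav", "rb") as audio_file:
    response = requests.post(
        "http://localhost:8000/v1/audio/transcriptions",
        files={"file": audio_file},
        data={"stream": "true"},
        stream=True,
    )
    # print each partial transcription as it arrives (assumed SSE framing)
    for line in response.iter_lines():
        if line.startswith(b"data: "):
            print(line.removeprefix(b"data: ").decode())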

Live Transcription (using WebSocket)

Demo video from the live-audio example: https://github.com/fedirz/faster-whisper-server/assets/76551385/e334c124-af61-41d4-839c-874be150598f

Installing websocat is required. The following command live-transcribes audio captured from your microphone:

# capture 16 kHz mono PCM from the default ALSA microphone (Linux) and pipe it to the WebSocket endpoint
ffmpeg -loglevel quiet -f alsa -i default -ac 1 -ar 16000 -f s16le - | websocat --binary ws://localhost:8000/v1/audio/transcriptions
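
If you'd rather not depend on websocat, the same pipeline can be driven from Python. A minimal sketch, assuming the endpoint accepts raw 16 kHz mono s16le PCM and sends transcriptions back as text messages; requires `pip install websockets`, with audio piped in from the ffmpeg command above (the script name is hypothetical):

# usage: ffmpeg ... -f s16le - | python live_transcribe.py
import asyncio
import sys

import websockets

async def main() -> None:
    async with websockets.connect("ws://localhost:8000/v1/audio/transcriptions") as ws:

        async def send_audio() -> None:
            # forward raw PCM chunks from stdin to the server
            loop = asyncio.get_running_loop()
            while chunk := await loop.run_in_executor(None, sys.stdin.buffer.read, 4000):
                await ws.send(chunk)
            await ws.close()  # end of audio; lets the receive loop finish

        async def print_transcriptions() -> None:
            # print each transcription message as the server sends it
            async for message in ws:
                print(message)

        await asyncio.gather(send_audio(), print_transcriptions())

asyncio.run(main())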