
<div align="center"> <a href="https://github.com/argmaxinc/WhisperKit#gh-light-mode-only"> <img src="https://github.com/argmaxinc/WhisperKit/assets/1981179/6ac3360b-2f5c-4392-a71a-05c5dda71093" alt="WhisperKit" width="20%" /> </a> <a href="https://github.com/argmaxinc/WhisperKit#gh-dark-mode-only"> <img src="https://github.com/argmaxinc/WhisperKit/assets/1981179/a682ce21-80e0-4a98-a99f-836663538a4f" alt="WhisperKit" width="20%" /> </a>

whisperkittools

</div>

Python tools for WhisperKit


Installation

```shell
conda create -n whisperkit python=3.11 -y && conda activate whisperkit
cd WHISPERKIT_ROOT_DIR && pip install -e .
```

Model Generation

Convert Hugging Face Whisper Models (PyTorch) to WhisperKit (Core ML) format:

```shell
whisperkit-generate-model --model-version <model-version> --output-dir <output-dir>
```

For optional arguments related to model optimizations, please see the help menu with `-h`.

<a name="publish-custom-model"></a> Publishing Models

We host several popular Whisper model versions here. These hosted models are automatically over-the-air deployable to apps integrating WhisperKit such as our example app WhisperAX on TestFlight. If you would like to publish custom Whisper versions that are not already published, you can do so as follows:

```shell
huggingface-cli whoami
```

If you don't have a write token yet, you can generate it here.

```shell
MODEL_REPO_ID=my-org/my-whisper-repo-name whisperkit-generate-model --model-version distil-whisper/distil-small.en --output-dir <output-dir>
```

If the above command is successfully executed, your model will have been published to hf.co/my-org/my-whisper-repo-name/distil-whisper_distil-small.en!

<a name="evaluate"></a> Model Evaluation

Evaluate (Argmax- or developer-published) models on speech recognition datasets:

```shell
whisperkit-evaluate-model --model-version <model-version> --output-dir <output-dir> --evaluation-dataset {librispeech-debug,librispeech,earnings22}
```

By default, this command uses the latest main branch commits from WhisperKit and searches within Argmax-published model repositories. For optional arguments related to code and model versioning, please see the help menu with `-h`.

We continually publish the evaluation results of Argmax-hosted models here as part of our continuous integration tests.

<a name="evaluate-on-custom-dataset"></a> Model Evaluation on Custom Dataset

If you would like to evaluate WhisperKit models on your own dataset:

```shell
export CUSTOM_EVAL_DATASET="my-dataset-name-on-hub"
export DATASET_REPO_OWNER="my-user-or-org-name-on-hub"
export MODEL_REPO_ID="my-org/my-whisper-repo-name"  # if evaluating self-published models
whisperkit-evaluate-model --model-version <model-version> --output-dir <output-dir> --evaluation-dataset my-dataset-name-on-hub
```

Python Inference

Use the unified Python wrapper for several Whisper frameworks:

```python
from whisperkit.pipelines import WhisperKit, WhisperCpp, WhisperMLX

pipe = WhisperKit(whisper_version="openai/whisper-large-v3", out_dir="/path/to/out/dir")
print(pipe("audio.{wav,flac,mp3}"))
```

Note: Using `WhisperCpp` requires `ffmpeg` to be installed. The recommended installation method is `brew install ffmpeg`.

Example SwiftUI App

TestFlight

Source Code (MIT License)

This app serves two purposes:

Note that the app is in beta and we are actively seeking feedback to improve it before widely distributing it.

<a name="qoi"></a> WhisperKit Evaluation Results

Dataset: librispeech

Short-form Audio (<30s/clip) - 5 hours of English audiobook clips

|                              | WER (↓) | QoI (↑) | File Size (MB) | Code Commit |
|------------------------------|---------|---------|----------------|-------------|
| large-v2 (WhisperOpenAIAPI)  | 2.35    | 100     | 3100           | N/A         |
| large-v2                     | 2.77    | 96.6    | 3100           | Link        |
| large-v2_949MB               | 2.4     | 94.6    | 949            | Link        |
| large-v2_turbo               | 2.76    | 96.6    | 3100           | Link        |
| large-v2_turbo_955MB         | 2.41    | 94.6    | 955            | Link        |
| large-v3                     | 2.04    | 95.2    | 3100           | Link        |
| large-v3_947MB               | 2.46    | 93.9    | 947            | Link        |
| large-v3_turbo               | 2.03    | 95.4    | 3100           | Link        |
| large-v3_turbo_954MB         | 2.47    | 93.9    | 954            | Link        |
| distil-large-v3              | 2.47    | 89.7    | 1510           | Link        |
| distil-large-v3_594MB        | 2.96    | 85.4    | 594            | Link        |
| distil-large-v3_turbo        | 2.47    | 89.7    | 1510           | Link        |
| distil-large-v3_turbo_600MB  | 2.78    | 86.2    | 600            | Link        |
| small.en                     | 3.12    | 85.8    | 483            | Link        |
| small                        | 3.45    | 83      | 483            | Link        |
| base.en                      | 3.98    | 75.3    | 145            | Link        |
| base                         | 4.97    | 67.2    | 145            | Link        |
| tiny.en                      | 5.61    | 63.9    | 66             | Link        |
| tiny                         | 7.47    | 52.5    | 66             | Link        |

Dataset: earnings22

Long-Form Audio (>1hr/clip) - 120 hours of earnings call recordings in English with various accents

|                              | WER (↓) | QoI (↑) | File Size (MB) | Code Commit |
|------------------------------|---------|---------|----------------|-------------|
| large-v2 (WhisperOpenAIAPI)  | 16.27   | 100     | 3100           | N/A         |
| large-v3                     | 15.17   | 58.5    | 3100           | Link        |
| base.en                      | 23.49   | 6.5     | 145            | Link        |
| tiny.en                      | 28.64   | 5.7     | 66             | Link        |

Explanation

We believe that rigorously measuring the quality of inference is necessary for developers and enterprises to make informed decisions when opting to use optimized or compressed variants of any machine learning model in production. To contextualize WhisperKit, we take the following Whisper implementations and benchmark them using a consistent evaluation harness:

Server-side: WhisperOpenAIAPI ($0.36 per hour of audio as of 02/29/24, 25MB file size limit per request)

On-device: WhisperKit, WhisperCpp, WhisperMLX (all on-device implementations are available for free under MIT license as of 03/19/2024)

WhisperOpenAIAPI sets the reference and we assume that it is using the equivalent of openai/whisper-large-v2 in float16 precision along with additional undisclosed optimizations from OpenAI. In all measurements, we care primarily about per-example no-regressions (quantified as QoI below), which is a stricter metric than the dataset-average Word Error Rate (WER). A 100% QoI preserves perfect backwards-compatibility on the test distribution and avoids "perceived regressions", the phenomenon where per-example known behavior changes after a code/model update and causes divergence in downstream code or breaks the user experience itself (even if dataset averages stay flat across updates). Pseudocode for QoI:

```python
qoi = []
for example in dataset:
    no_regression = wer(optimized_model(example)) <= wer(reference_model(example))
    qoi.append(no_regression)
qoi = (sum(qoi) / len(qoi)) * 100.
```
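To make the metric concrete, here is a self-contained toy sketch of the QoI loop above, using a simple word-level WER. The transcripts and model outputs below are invented for illustration; they are not from our benchmarks.

```python
# Toy QoI computation. WER here is word-level edit distance
# normalized by reference length; all transcripts are made up.

def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # Standard dynamic-programming edit distance over words
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1, d[i - 1][j - 1] + cost)
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

# (ground truth, reference model output, optimized model output)
dataset = [
    ("the quick brown fox", "the quick brown fox", "the quick brown fox"),          # tie
    ("jumped over the lazy dog", "jumped over the lazy dog", "jumped over a lazy dog"),  # regression
    ("hello world again", "hello word again", "hello world again"),                 # improvement (counts as no-regression)
    ("four score and seven", "four score and seven", "four score and seven"),       # tie
]

qoi = []
for truth, reference_out, optimized_out in dataset:
    no_regression = wer(truth, optimized_out) <= wer(truth, reference_out)
    qoi.append(no_regression)
qoi = (sum(qoi) / len(qoi)) * 100.0
print(qoi)  # 3 of 4 examples show no regression -> 75.0
```

Note that the per-example improvement contributes exactly as much as a tie: QoI only penalizes regressions.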

Note that the ordering of models with respect to WER does not necessarily match the ordering with respect to QoI. This is because the reference model gets assigned a QoI of 100% by definition. Any per-example regression by other implementations gets penalized while per-example improvements are not rewarded. QoI (higher is better) matters where the production behavior is established by the reference results and the goal is not to regress when switching to an optimized or compressed model. On the other hand, WER (lower is better) matters when there is no established production behavior and one is picking the best quality-versus-model-size trade-off point.

We anticipate developers who use Whisper (or similar models) in production to have their own Quality Assurance test sets, and whisperkittools offers the tooling necessary to run the same measurements on such custom test sets. Please see Model Evaluation on Custom Dataset above for details.

Why are there so many Whisper versions?

WhisperKit is an SDK for building speech-to-text features in apps across a wide range of Apple devices. We are working towards abstracting away the model versioning from the developer so WhisperKit "just works" by deploying the highest-quality model version that a particular device can execute. In the interim, we leave the choice to the developer by providing quality and size trade-offs.
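As a toy illustration of that trade-off, the librispeech figures from the table above can be filtered by a file-size budget. The model names and numbers are copied from the table; the selection helper itself is just an illustration, not part of whisperkittools:

```python
# (model, WER on librispeech, file size in MB), a subset of the table above
MODELS = [
    ("large-v3_turbo", 2.03, 3100),
    ("large-v2_949MB", 2.40, 949),
    ("distil-large-v3_594MB", 2.96, 594),
    ("small.en", 3.12, 483),
    ("base.en", 3.98, 145),
    ("tiny.en", 5.61, 66),
]

def best_model_under(budget_mb: float):
    """Return the lowest-WER model whose file size fits the budget, or None."""
    candidates = [m for m in MODELS if m[2] <= budget_mb]
    return min(candidates, key=lambda m: m[1]) if candidates else None

print(best_model_under(500)[0])   # small.en
print(best_model_under(1000)[0])  # large-v2_949MB
```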

Datasets

Reproducing Results

Benchmark results on this page were automatically generated by whisperkittools using our cluster of Apple Silicon Macs as self-hosted runners on GitHub Actions. We periodically recompute these benchmarks as part of our CI pipeline. Due to security concerns, we are unable to open up the cluster to the public. However, any Apple Silicon Mac (even with 8GB RAM) can be used to run identical evaluation jobs locally. For reference, our M2 Ultra devices complete a librispeech + openai/whisper-large-v3 evaluation in under 1 hour regardless of the Whisper implementation. The oldest Apple Silicon Macs should take less than 1 day to complete the same evaluation.

Glossary

FAQ

Q1: `xcrun: error: unable to find utility "coremlcompiler", not a developer tool or in PATH`

A1: Ensure Xcode is installed on your Mac and run `sudo xcode-select --switch /Applications/Xcode.app/Contents/Developer`.

Citation

If you use WhisperKit for something cool or just find it useful, please drop us a note at info@takeargmax.com!

If you use WhisperKit for academic work, here is the BibTeX:

```bibtex
@misc{whisperkit-argmax,
   title = {WhisperKit},
   author = {Argmax, Inc.},
   year = {2024},
   URL = {https://github.com/argmaxinc/WhisperKit}
}
```