CTranslate2

CTranslate2 is a C++ and Python library for efficient inference with Transformer models.

The project implements a custom runtime that applies many performance optimization techniques such as weights quantization, layers fusion, batch reordering, etc., to accelerate and reduce the memory usage of Transformer models on CPU and GPU.
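To illustrate one of these techniques, here is a minimal sketch of symmetric int8 weight quantization in plain Python. This is not CTranslate2's actual implementation (which operates on packed tensors in C++); it only shows the idea: each weight row is scaled so its largest absolute value maps to 127, stored as 8-bit integers, and dequantized back on the fly during inference.

```python
def quantize_int8(row):
    """Symmetric per-row int8 quantization: map max |w| to 127."""
    # Fall back to scale 1.0 for an all-zero row to avoid division by zero.
    scale = max(abs(w) for w in row) / 127.0 or 1.0
    return [round(w / scale) for w in row], scale

def dequantize_int8(qrow, scale):
    """Recover an approximation of the original float weights."""
    return [q * scale for q in qrow]

row = [0.5, -1.27, 0.03, 0.9]
qrow, scale = quantize_int8(row)   # qrow holds small integers in [-127, 127]
approx = dequantize_int8(qrow, scale)
```

Storing `qrow` as int8 uses 4x less memory than float32 weights, at the cost of a small rounding error in `approx`, which is why the int8 rows in the benchmark tables below trade a fraction of a BLEU point for speed and memory.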

The following model types are currently supported:

Compatible models should be first converted into an optimized model format. The library includes converters for multiple frameworks:

The project is production-oriented and comes with backward compatibility guarantees, but it also includes experimental features related to model compression and inference acceleration.

Key features

Some of these features are difficult to achieve with standard deep learning frameworks and are the motivation for this project.

Installation and usage

CTranslate2 can be installed with pip:

pip install ctranslate2

The Python module is used to convert models and can translate or generate text in a few lines of code:

import ctranslate2

# "tokens" and "start_tokens" are batches of tokenized text
# (lists of token lists).
translator = ctranslate2.Translator(translation_model_path)
translator.translate_batch(tokens)

generator = ctranslate2.Generator(generation_model_path)
generator.generate_batch(start_tokens)

See the documentation for more information and examples.

Benchmarks

We translate the En->De test set newstest2014 with multiple models:

The benchmark reports the number of target tokens generated per second (higher is better). The results are aggregated over multiple runs. See the benchmark scripts for more details and to reproduce these numbers.
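For reference, a tokens-per-second figure like those below can be computed as in this minimal sketch. The run values and the choice of the median as the aggregation statistic are assumptions for illustration; the actual aggregation used lives in the benchmark scripts.

```python
import statistics

def tokens_per_second(runs):
    """Aggregate throughput over (num_tokens, elapsed_seconds) runs.

    Returns the median tokens/second across runs, so a single
    unusually slow or fast run does not skew the reported number.
    """
    return statistics.median(tokens / seconds for tokens, seconds in runs)

# Hypothetical runs: (generated target tokens, wall-clock seconds).
runs = [(63000, 95.6), (63000, 96.1), (63000, 94.9)]
rate = tokens_per_second(runs)
```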

Please note that the results presented below are only valid for the configuration used during this benchmark: absolute and relative performance may change with different settings.

CPU

| | Tokens per second | Max. memory | BLEU |
| --- | --- | --- | --- |
| **OpenNMT-tf WMT14 model** | | | |
| OpenNMT-tf 2.31.0 (with TensorFlow 2.11.0) | 209.2 | 2653MB | 26.93 |
| **OpenNMT-py WMT14 model** | | | |
| OpenNMT-py 3.0.4 (with PyTorch 1.13.1) | 275.8 | 2012MB | 26.77 |
| - int8 | 323.3 | 1359MB | 26.72 |
| CTranslate2 3.6.0 | 658.8 | 849MB | 26.77 |
| - int16 | 733.0 | 672MB | 26.82 |
| - int8 | 860.2 | 529MB | 26.78 |
| - int8 + vmap | 1126.2 | 598MB | 26.64 |
| **OPUS-MT model** | | | |
| Transformers 4.26.1 (with PyTorch 1.13.1) | 147.3 | 2332MB | 27.90 |
| Marian 1.11.0 | 344.5 | 7605MB | 27.93 |
| - int16 | 330.2 | 5901MB | 27.65 |
| - int8 | 355.8 | 4763MB | 27.27 |
| CTranslate2 3.6.0 | 525.0 | 721MB | 27.92 |
| - int16 | 596.1 | 660MB | 27.53 |
| - int8 | 696.1 | 516MB | 27.65 |

Executed with 4 threads on a c5.2xlarge Amazon EC2 instance equipped with an Intel(R) Xeon(R) Platinum 8275CL CPU.

GPU

| | Tokens per second | Max. GPU memory | Max. CPU memory | BLEU |
| --- | --- | --- | --- | --- |
| **OpenNMT-tf WMT14 model** | | | | |
| OpenNMT-tf 2.31.0 (with TensorFlow 2.11.0) | 1483.5 | 3031MB | 3122MB | 26.94 |
| **OpenNMT-py WMT14 model** | | | | |
| OpenNMT-py 3.0.4 (with PyTorch 1.13.1) | 1795.2 | 2973MB | 3099MB | 26.77 |
| FasterTransformer 5.3 | 6979.0 | 2402MB | 1131MB | 26.77 |
| - float16 | 8592.5 | 1360MB | 1135MB | 26.80 |
| CTranslate2 3.6.0 | 6634.7 | 1261MB | 953MB | 26.77 |
| - int8 | 8567.2 | 1005MB | 807MB | 26.85 |
| - float16 | 10990.7 | 941MB | 807MB | 26.77 |
| - int8 + float16 | 8725.4 | 813MB | 800MB | 26.83 |
| **OPUS-MT model** | | | | |
| Transformers 4.26.1 (with PyTorch 1.13.1) | 1022.9 | 4097MB | 2109MB | 27.90 |
| Marian 1.11.0 | 3241.0 | 3381MB | 2156MB | 27.92 |
| - float16 | 3962.4 | 3239MB | 1976MB | 27.94 |
| CTranslate2 3.6.0 | 5876.4 | 1197MB | 754MB | 27.92 |
| - int8 | 7521.9 | 1005MB | 792MB | 27.79 |
| - float16 | 9296.7 | 909MB | 814MB | 27.90 |
| - int8 + float16 | 8362.7 | 813MB | 766MB | 27.90 |

Executed with CUDA 11 on a g5.xlarge Amazon EC2 instance equipped with an NVIDIA A10G GPU (driver version: 510.47.03).

Additional resources