# Zarr Benchmarks
This repository contains benchmarks of Zarr V3 implementations.
> [!NOTE]
> Contributions are welcome for additional benchmarks, more implementations, or otherwise cleaning up this repository.
> Also consider restarting development of the official Zarr benchmark repository: https://github.com/zarr-developers/zarr-benchmark
## Implementations Benchmarked
- [LDeakin/zarrs](https://github.com/LDeakin/zarrs) via [LDeakin/zarrs_tools](https://github.com/LDeakin/zarrs_tools)
  - Read executable: `zarrs_benchmark_read_sync`
  - Round trip executable: `zarrs_reencode`
- Python (v3.12.7):
  - [google/tensorstore](https://github.com/google/tensorstore)
  - [zarr-developers/zarr-python](https://github.com/zarr-developers/zarr-python)
    - With and without the `ZarrsCodecPipeline` from [ilan-gold/zarrs-python](https://github.com/ilan-gold/zarrs-python)
    - With and without `dask`
Benchmark scripts are in the `scripts` folder, and implementation versions are listed in the benchmark charts.
> [!WARNING]
> Python benchmarks are subject to the overheads of Python and may not be using an optimal API/parameters.
> Please open a PR if you can improve these benchmarks.
## `make` Targets

- `pydeps`: install Python dependencies (recommended to activate a venv first)
- `zarrs_tools`: install `zarrs_tools` (set `CARGO_HOME` to override the installation dir)
- `generate_data`: generate benchmark data
- `benchmark_read_all`: run the read all benchmark
- `benchmark_read_chunks`: run the chunk-by-chunk read benchmark
- `benchmark_roundtrip`: run the round trip benchmark
- `benchmark_all`: run all benchmarks
## Benchmark Data
All datasets are $1024 \times 2048 \times 2048$ `uint16` arrays.
| Name | Chunk Shape | Shard Shape | Compression | Size |
|---|---|---|---|---|
| Uncompressed | $256^3$ | None | None | 8.0 GB |
| Compressed | $256^3$ | None | blosclz 9 + bitshuffling | 377 MB |
| Compressed + Sharded | $32^3$ | $256^3$ | blosclz 9 + bitshuffling | 1.1 GB |
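The uncompressed size and chunk counts follow directly from the array shape and dtype; a quick sanity check (the table's "8.0 GB" matches the binary GiB value):

```python
# Sanity-check the uncompressed dataset size from shape and dtype.
shape = (1024, 2048, 2048)
itemsize = 2  # uint16 is 2 bytes per element

n_elements = shape[0] * shape[1] * shape[2]
size_bytes = n_elements * itemsize
print(size_bytes / 2**30)  # 8.0 (GiB)

# Chunk grid: 256^3 chunks tile the array exactly.
chunks_per_dim = [s // 256 for s in shape]
print(chunks_per_dim)  # [4, 8, 8] -> 256 chunks in total

# Sharded layout: each 256^3 shard holds (256/32)^3 = 512 inner 32^3 chunks.
print((256 // 32) ** 3)  # 512
```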
## Benchmark System
- AMD Ryzen 5900X
- 64GB DDR4 3600MHz (16-19-19-39)
- 2TB Samsung 990 Pro
- Ubuntu 22.04 (in Windows 11 WSL2, swap disabled, 32GB available memory)
## Round Trip Benchmark
This benchmark measures time and peak memory usage to "round trip" a dataset (potentially chunk-by-chunk).
- The disk cache is cleared between each measurement
- These are best of 3 measurements
Table of raw measurements: `benchmarks_roundtrip.md`
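The methodology (wall-clock time plus peak resident memory, best of N runs) can be sketched with the Python standard library. This is an illustrative harness only, not the repository's actual benchmark code, which runs each measurement in a separate process and clears the disk cache in between:

```python
import resource
import time

def best_of(n: int, fn) -> tuple[float, int]:
    """Run fn() n times; return (fastest wall-clock seconds, peak RSS).

    Note: ru_maxrss is cumulative for the whole process, so a real
    benchmark isolates each measurement in a fresh subprocess.  Its
    unit is KiB on Linux (bytes on macOS).
    """
    times = []
    for _ in range(n):
        start = time.perf_counter()
        fn()
        times.append(time.perf_counter() - start)
    peak = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    return min(times), peak

# Example: a trivial stand-in workload instead of a dataset round trip.
elapsed, peak = best_of(3, lambda: sum(range(1_000_000)))
```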
### Standalone

### Dask
## Read Chunk-By-Chunk Benchmark

This benchmark measures the minimum time and peak memory usage to read a dataset chunk-by-chunk into memory.
- The disk cache is cleared between each measurement
- These are best of 1 measurements
Table of raw measurements: `benchmarks_read_chunks.md`
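Reading chunk-by-chunk means iterating the chunk grid and requesting one chunk-sized region at a time. A minimal sketch of that access pattern, using the benchmark datasets' shape and a hypothetical `array` that supports slice indexing (e.g. a zarr or tensorstore array):

```python
import itertools

# The benchmark datasets: a 1024x2048x2048 array with 256^3 chunks.
shape = (1024, 2048, 2048)
chunk = (256, 256, 256)

def chunk_slices(shape, chunk):
    """Yield a tuple of slices covering every chunk in the grid, in order."""
    grid = [range(0, s, c) for s, c in zip(shape, chunk)]
    for origin in itertools.product(*grid):
        yield tuple(slice(o, min(o + c, s))
                    for o, c, s in zip(origin, chunk, shape))

slabs = list(chunk_slices(shape, chunk))
# A chunk-by-chunk read is then: for sl in slabs: data = array[sl]
```

Iterating in chunk order keeps each request aligned to exactly one stored chunk, so no chunk is decoded more than once.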
### Standalone
> [!NOTE]
> `zarr-python` benchmarks with sharding are not visible in this plot.
### Dask
## Read All Benchmark

This benchmark measures the minimum time and peak memory usage to read an entire dataset into memory.
- The disk cache is cleared between each measurement
- These are best of 3 measurements
Table of raw measurements: `benchmarks_read_all.md`