<!-- markdownlint-disable --> <p align="center"> <img src="https://github.com/litestar-org/branding/blob/473f54621e55cde9acbb6fcab7fc03036173eb3d/assets/Branding%20-%20SVG%20-%20Transparent/Logo%20-%20Banner%20-%20Inline%20-%20Light.svg#gh-light-mode-only" alt="Litestar Logo - Light" width="100%" height="auto" /> <img src="https://github.com/litestar-org/branding/blob/473f54621e55cde9acbb6fcab7fc03036173eb3d/assets/Branding%20-%20SVG%20-%20Transparent/Logo%20-%20Banner%20-%20Inline%20-%20Dark.svg#gh-dark-mode-only" alt="Litestar Logo - Dark" width="100%" height="auto" /> </p> <!-- markdownlint-restore --> <div align="center"> <!-- prettier-ignore-start -->
<!-- prettier-ignore-end --> </div>

# api-performance-tests

> [!IMPORTANT]
> Starlite has been renamed to Litestar

This is an API performance test comparing:

  1. Litestar
  2. Starlite v1.5x
  3. Starlette
  4. FastAPI
  5. Sanic
  6. BlackSheep

The benchmarks are run using the [bombardier](https://github.com/codesenberg/bombardier) HTTP benchmarking tool.
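As a rough illustration of how a single bombardier run against one endpoint could be assembled (a sketch, not the suite's actual invocation: `-c`, `-d`, and the trailing URL are real bombardier CLI conventions, but the URL, connection count, and helper name here are assumptions):

```python
# Hypothetical sketch of one bombardier invocation against a single endpoint.
# -c (connections) and -d (duration) are real bombardier flags; the URL and
# default values below are illustrative assumptions, not the suite's settings.

def bombardier_cmd(url: str, duration: str = "15s", connections: int = 125) -> list[str]:
    return ["bombardier", "-c", str(connections), "-d", duration, url]

print(" ".join(bombardier_cmd("http://localhost:8000/plaintext-sync")))
```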

## Test Setup

Setup is identical for all frameworks.

### Tests

All tests are run in both sync and async variants.

- Serialization and data sending
  - Plaintext
  - JSON (serializing a dictionary into JSON)
  - Serialization (only supported by Litestar, Starlite, and FastAPI)
  - Files
- Path and query parameter handling (all responses return "No Content")
- Dependency injection (not supported by Starlette)
- Modifying responses (all responses return "No Content")
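To make the test categories above concrete, here is a stdlib-only sketch (handler names and payloads are illustrative, not taken from the suite): each test exists as a sync and an async variant, the JSON test serializes a dictionary, and the serialization test serializes a typed model.

```python
import asyncio
import json
from dataclasses import asdict, dataclass

# Illustrative sync/async pair for the plaintext test.
def plaintext_sync() -> bytes:
    return b"Hello, world!"

async def plaintext_async() -> bytes:
    return b"Hello, world!"

# JSON test: serializing a dictionary into JSON.
def json_dict() -> bytes:
    return json.dumps({"message": "Hello, world!"}).encode()

# Serialization test: serializing a typed model (a stdlib dataclass here;
# the real suite uses each framework's own model serialization).
@dataclass
class Person:
    name: str
    age: int

def json_model(person: Person) -> bytes:
    return json.dumps(asdict(person)).encode()

if __name__ == "__main__":
    assert plaintext_sync() == asyncio.run(plaintext_async())
    print(json_model(Person(name="Alice", age=30)).decode())
```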

## Running the tests

### Prerequisites

- Python with [Poetry](https://python-poetry.org/) (the suite is run via `poetry run bench`)
- Docker (the benchmark targets run in docker images)

### Running tests

  1. Clone this repo
  2. Run `poetry install`
  3. Run the tests with `poetry run bench run --rps --latency`

After the run, the results will be stored in `results/run_<run_number>.json`
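The schema of the results file isn't documented here, so the sketch below only demonstrates loading a run file the way you would load a real `results/run_<run_number>.json`; the file name and sample data are stand-ins invented for the demo.

```python
import json
import tempfile
from pathlib import Path

# Hypothetical: write a stand-in run file, then load it back. The nested
# schema below is made up for illustration; inspect a real run file to see
# the actual structure.
with tempfile.TemporaryDirectory() as tmp:
    run_file = Path(tmp) / "run_1.json"
    run_file.write_text(json.dumps({"litestar": {"json:json-1K": {"rps": 0}}}))
    data = json.loads(run_file.read_text())
    print(sorted(data))
```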

### Selecting which frameworks to test

To select a framework, simply pass its name to the run command:

`bench run --rps litestar starlite starlette fastapi`

### Selecting a framework version

### Running a specific test

You can run a single test by specifying its full name and category:

`bench run --rps litestar -t json:json-1K`

### Test Settings

- `-r, --rebuild`: rebuild docker images
- `-L, --latency`: run latency tests
- `-R, --rps`: run RPS tests
- `-w, --warmup`: duration of the warmup period (default: 5s)
- `-e, --endpoint-mode [sync|async]`: endpoint types to select (default: sync, async)
- `-c, --endpoint-category [plaintext|json|files|params|dynamic-response|dependency-injection|serialization|post-json|post-body]`: test types to select (default: all)
- `-d, --duration`: duration of the RPS benchmarks (default: 15s)
- `-l, --limit`: max requests per second for latency benchmarks (default: 20)
- `-r, --requests`: total number of requests for latency benchmarks (default: 1000)
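From the defaults above, a latency benchmark issues 1000 requests capped at 20 requests per second, so a single run takes roughly 1000 / 20 = 50 seconds, plus the 5-second warmup. A quick sanity check of that arithmetic:

```python
# Rough duration of one latency run, derived from the defaults listed above.
requests_total = 1000   # -r, --requests default
rate_limit_rps = 20     # -l, --limit default
warmup_seconds = 5      # -w, --warmup default

run_seconds = requests_total / rate_limit_rps
print(run_seconds + warmup_seconds)  # roughly 55 seconds per endpoint variant
```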

## Analyzing the results

## Contributing

PRs are welcome.

Please make sure to install [pre-commit](https://pre-commit.com/) on your system, then run `pre-commit install` in the repository root; this will ensure the pre-commit hooks are in place.

After doing this, open a PR with your changes and a clear description of what they do.