SQLFlow: DuckDB for streaming data.

SQLFlow enables SQL-based stream processing, powered by DuckDB. With SQLFlow, Kafka stream-processing logic is written in pure SQL.

SQLFlow supports:

SQLFlow is currently not a good fit for:

SQLFlow is a Kafka consumer that embeds SQL for stream transformation:

<img width="754" alt="Screenshot 2023-11-26 at 8 16 47 PM" src="https://github.com/turbolytics/sql-flow/assets/151242797/419d8688-1d08-45ce-b245-1c2c886a3157">

Getting Started

Docker

Docker is the easiest way to get started.

Pull the image and invoke an example pipeline against the bundled test fixtures:

docker pull turbolytics/sql-flow:latest
docker run -v $(PWD)/dev:/tmp/conf -v /tmp/sqlflow:/tmp/sqlflow turbolytics/sql-flow:latest dev invoke /tmp/conf/config/examples/basic.agg.yml /tmp/conf/fixtures/simple.json

['{"city":"New York","city_count":28672}', '{"city":"Baltimore","city_count":28672}']

To run against Kafka, start a local broker, publish test data, tail the output topic, and start the SQLFlow consumer:

docker-compose -f dev/kafka-single.yml up -d
python3 cmd/publish-test-data.py --num-messages=10000 --topic="topic-local-docker"
docker exec -it kafka1 kafka-console-consumer --bootstrap-server=kafka1:9092 --topic=output-local-docker
docker run -v $(PWD)/dev:/tmp/conf -v /tmp/sqlflow:/tmp/sqlflow turbolytics/sql-flow:latest run /tmp/conf/config/local.docker.yml

The output topic fills with aggregated records:

...
{"city":"San Francisco504","city_count":1}
{"city":"San Francisco735","city_count":1}
{"city":"San Francisco533","city_count":1}
{"city":"San Francisco556","city_count":1}

The dev invoke command tests a SQLFlow pipeline configuration against a batch of test data, enabling a fast local feedback loop before launching a SQLFlow consumer that reads from Kafka.

Configuration

The heart of SQLFlow is the pipeline configuration file. Each configuration file specifies:

<img width="1021" alt="Screenshot 2023-11-26 at 8 10 44 PM" src="https://github.com/turbolytics/sql-flow/assets/151242797/4f286fdc-ac2b-4809-acdb-1dc4d239f883">

Every SQLFlow instance requires a pipeline configuration file.

Recipes

Coming soon. Until then, check out:

Running multiple SQLFlow instances on the same filesystem

Verifying a configuration locally

Development

Install the dependencies:

pip install -r requirements.txt
pip install -r requirements.dev.txt

On macOS, confluent-kafka may need to be pointed at the Homebrew librdkafka headers and libraries:

C_INCLUDE_PATH=/opt/homebrew/Cellar/librdkafka/2.3.0/include LIBRARY_PATH=/opt/homebrew/Cellar/librdkafka/2.3.0/lib pip install confluent-kafka

Run the tests:

pytest tests

Benchmarks

Methodology

Each test loads 1 million records into Kafka, then runs the sql-flow consumer until every message is processed. Each test captures the maximum resident memory during the benchmark and the average message-ingestion throughput.

System

Hardware:
    Hardware Overview:
      Model Name: MacBook Pro
      Model Identifier: MacBookPro18,3
      Model Number: Z15G001X2LL/A
      Chip: Apple M1 Pro
      Total Number of Cores: 10 (8 performance and 2 efficiency)
      Memory: 32 GB
      Activation Lock Status: Enabled
| Name | Throughput | Max RSS Memory | Peak Memory Usage |
|------|------------|----------------|-------------------|
| Simple Aggregation Memory | 45,000 msgs/sec | 230 MiB | 130 MiB |
| Simple Aggregation Disk | 36,000 msgs/sec | 256 MiB | 102 MiB |
| Enrichment | 13,000 msgs/sec | 368 MiB | 124 MiB |
| CSV Disk Join | 11,500 msgs/sec | 312 MiB | 152 MiB |
| CSV Memory Join | 33,200 msgs/sec | 300 MiB | 107 MiB |

Simple Aggregate - Disk / Mem

Performs a simple aggregate. Output is significantly smaller than input.

./benchmark/simple-agg-disk.sh
./benchmark/simple-agg-mem.sh

Enrichment

Performs an enrichment. Output records map 1:1 to input records, but each output record is enhanced with additional information.

./benchmark/enrichment.sh
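An enrichment of this shape is typically a join that adds columns while preserving row count. A minimal sketch, assuming hypothetical `batch` and `users` relations (not the benchmark's actual query):

```sql
-- Enrich each message with user attributes. The LEFT JOIN preserves the
-- 1:1 input-to-output record count even when no match exists.
-- "batch", "users", and all column names are illustrative assumptions.
SELECT
    b.*,
    u.plan,
    u.signup_date
FROM batch AS b
LEFT JOIN users AS u
  ON u.user_id = b.user_id;
```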

CSV Disk Join

./benchmark/csv-disk-join.sh
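A CSV join leans on DuckDB's built-in read_csv_auto function to scan a file from disk during the query. A minimal sketch, assuming a hypothetical `batch` relation and a cities.csv fixture (the path, file, and join key are illustrative assumptions):

```sql
-- Join each micro-batch against a CSV dimension file on disk.
-- read_csv_auto is DuckDB's CSV reader; the file path, "batch" relation,
-- and column names here are illustrative assumptions.
SELECT
    b.*,
    c.region
FROM batch AS b
JOIN read_csv_auto('/tmp/conf/fixtures/cities.csv') AS c
  ON c.city = b.city;
```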

CSV Memory Join

./benchmark/csv-mem-join.sh

Like SQLFlow? Use SQLFlow? Feature Requests? Please let us know! danny@turbolytics.io