tectonicdb

Crates: tectonicdb, tdb-core, tdb-server-core, tdb-cli (published on crates.io, with API documentation on docs.rs).

tectonicdb is a fast, highly compressed standalone database and streaming protocol for order book ticks.

Installation

There are several ways to install tectonicdb.

  1. Binaries

Binaries are available for download. Make sure to add the binary's location to your PATH. Currently the only prebuilt binary targets Linux x86_64.

  2. Crates
cargo install tectonicdb

This command downloads the tdb, tdb-server, and dtftools sources from crates.io and builds the binaries locally.

  3. GitHub

To contribute you will need a copy of the source code on your local machine.

git clone https://github.com/0b01/tectonicdb
cd tectonicdb
cargo build --release
cargo run --release --bin tdb-server

The binaries can be found under target/release.

How to use

It's very easy to set up.

./tdb-server --help

For example:

./tdb-server -vv -a -i 10000
# run the server at INFO verbosity,
# with autoflush enabled, flushing every 10000 inserts per orderbook

Configuration

The database server is configured through the following environment variables:

Variable Name      | Default | Description
TDB_HOST           | 0.0.0.0 | The host to which the database will bind
TDB_PORT           | 9001    | The port that the database will listen on
TDB_DTF_FOLDER     | db      | Name of the directory in which DTF files will be stored
TDB_AUTOFLUSH      | false   | If true, recorded orderbook data will automatically be flushed to DTF files every TDB_FLUSH_INTERVAL inserts
TDB_FLUSH_INTERVAL | 1000    | Every interval inserts, if autoflush is enabled, DTF files will be written from memory to disk
TDB_GRANULARITY    | 0       | Record history granularity level
TDB_LOG_FILE_NAME  | tdb.log | Filename of the log file for the database
TDB_Q_CAPACITY     | 300     | Capacity of the circular queue for recording history
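
For example, to run a locally built server that binds to localhost and flushes every 10000 inserts (the values here are illustrative):

TDB_HOST=127.0.0.1 TDB_PORT=9001 TDB_AUTOFLUSH=true TDB_FLUSH_INTERVAL=10000 ./tdb-server -vv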

Client API

Command                  | Description
HELP                     | Prints help
PING                     | Responds PONG
INFO                     | Returns info about table schemas
PERF                     | Returns the count of items over time
LOAD [orderbook]         | Load orderbook from disk to memory
USE [orderbook]          | Switch the current orderbook
CREATE [orderbook]       | Create orderbook
GET [n] FROM [orderbook] | Returns n items from orderbook
GET [n]                  | Returns n items from current orderbook
COUNT                    | Count of items in current orderbook
COUNT ALL                | Returns total count from all orderbooks
CLEAR                    | Deletes everything in current orderbook
CLEAR ALL                | Drops everything in memory
FLUSH                    | Flush current orderbook to disk
FLUSH ALL                | Flush everything from memory to disk
EXISTS [orderbook]       | Checks if orderbook exists
SUBSCRIBE [orderbook]    | Subscribe to updates from orderbook

Data commands

USE [dbname]
ADD [ts], [seq], [is_trade], [is_bid], [price], [size];
INSERT 1505177459.685, 139010, t, f, 0.0703620, 7.65064240; INTO dbname
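
Because the server speaks this plain text protocol over TCP, any language with sockets can talk to it directly. Below is a minimal Rust sketch that creates an orderbook and inserts one tick; the address and the newline-terminated framing are assumptions based on the defaults above, and the server's response format is left unparsed.

use std::io::{Read, Write};
use std::net::TcpStream;

fn main() -> std::io::Result<()> {
    // Assumed default address; see TDB_HOST / TDB_PORT above.
    let mut stream = TcpStream::connect("127.0.0.1:9001")?;

    // Create an orderbook, switch to it, then insert one tick
    // using the same fields as the ADD command above.
    stream.write_all(b"CREATE dbname\n")?;
    stream.write_all(b"USE dbname\n")?;
    stream.write_all(b"ADD 1505177459.685, 139010, t, f, 0.0703620, 7.65064240;\n")?;
    stream.write_all(b"COUNT\n")?;

    // Read back whatever the server sends (framing not parsed here).
    let mut buf = [0u8; 1024];
    let n = stream.read(&mut buf)?;
    println!("server replied with {} bytes", n);
    Ok(())
}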

Monitoring

TectonicDB supports monitoring/alerting by periodically sending its usage info to an InfluxDB instance:

    --influx-db <influx_db>                        influxdb db
    --influx-host <influx_host>                    influxdb host
    --influx-log-interval <influx_log_interval>    influxdb log interval in seconds (default is 60)

As a concrete example,

...
$ influx
> CREATE DATABASE market_data;
> ^D
$ tdb-server --influx-db market_data --influx-host http://localhost:8086 --influx-log-interval 20
...

TectonicDB will send field values disk={COUNT_DISK},size={COUNT_MEM}, tagged with ob={ORDERBOOK}, to the market_data measurement (the measurement name matches the database name passed to --influx-db).
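
Once records are flowing, the reported values can be inspected with an ordinary InfluxQL query (the ob value below is illustrative):

$ influx -database market_data -execute "SELECT disk, size FROM market_data WHERE ob = 'dbname'"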

Additionally, you can query usage information directly with INFO and PERF commands:

  1. INFO reports the current tick count in memory and on disk.

  2. PERF returns recorded tick count history whose granularity can be configured.

Logging

The log file defaults to tdb.log (configurable via TDB_LOG_FILE_NAME above).

Testing

export RUST_TEST_THREADS=1
cargo test

Tests must be run sequentially because some tests depend on DTF files that other tests generate.

Benchmark

The tdb client comes with a benchmark mode. The following command inserts 1M records into the database:

tdb -b 1000000

Using dtf files

Tectonic comes with a command-line tool, dtfcat, that dumps a DTF file's metadata and all of its stored events as either JSON or CSV.

Options:

USAGE:
    dtfcat [FLAGS] --input <INPUT>

FLAGS:
    -c, --csv         output csv
    -h, --help        Prints help information
    -m, --metadata    read only the metadata
    -V, --version     Prints version information

OPTIONS:
    -i, --input <INPUT>    file to read
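
For example, to inspect only a file's metadata, or to dump it to CSV (the file path here is illustrative):

dtfcat -i db/dbname.dtf -m
dtfcat -i db/dbname.dtf -c > dbname.csv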

As a library

It is possible to use the Dense Tick Format (DTF) streaming protocol / file format in a different application. It works nicely with any buffer implementing the Write trait.
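
As a rough sketch of what that can look like, the code below encodes a single tick into an in-memory buffer. The Update struct mirrors the fields of the ADD command, but the module paths and the encode_buffer helper are assumptions about the tdb-core API, not its actual names; consult docs.rs/tdb-core for the real interface.

// Hypothetical sketch: the use paths and encode_buffer below are
// assumed for illustration, not taken from the actual tdb-core API.
use tdb_core::dtf::update::Update;             // assumed path
use tdb_core::dtf::file_format::encode_buffer; // assumed helper over impl Write

fn main() -> std::io::Result<()> {
    // One tick, with the same fields as the ADD command above.
    let ticks = vec![Update {
        ts: 1505177459685, // timestamp in milliseconds
        seq: 139010,
        is_trade: true,
        is_bid: false,
        price: 0.0703620,
        size: 7.65064240,
    }];

    // Any buffer implementing Write works; here an in-memory Vec<u8>.
    let mut buf: Vec<u8> = Vec::new();
    encode_buffer(&mut buf, "dbname", &ticks)?;
    println!("encoded {} bytes of DTF", buf.len());
    Ok(())
}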

Requirements

TectonicDB is a standalone service.

Language bindings:

Additional Features

Changelog