Marmot

What & Why?

Marmot is a distributed SQLite replicator offering leaderless replication and eventual consistency. It lets you build robust replication between your nodes on top of fault-tolerant NATS JetStream.

So if you are running a read-heavy website based on SQLite, you should be able to scale it out easily by adding more replicated SQLite nodes. SQLite is probably the most ubiquitous database, found almost everywhere; Marmot aims to make it even more ubiquitous for server-side applications by building a replication layer on top of it.

Quick Start

Download the latest Marmot release and extract the package:

```shell
tar vxzf marmot-v*.tar.gz
```

From the extracted directory, run examples/run-cluster.sh. Then make a change in /tmp/marmot-1.db:

```shell
bash> sqlite3 /tmp/marmot-1.db
sqlite3> INSERT INTO Books (title, author, publication_year) VALUES ('Pride and Prejudice', 'Jane Austen', 1813);
```

Now observe the change propagate to the other database, /tmp/marmot-2.db:

```shell
bash> sqlite3 /tmp/marmot-2.db
sqlite3> SELECT * FROM Books;
```

You should be able to make changes on either node interchangeably and see them propagate to the other.

Out in the wild

Here are some official and community demos/usages showing Marmot out in the wild:

What is the difference from others?

Marmot is essentially a CDC (Change Data Capture) and replication pipeline running on top of NATS. It can automatically configure the appropriate JetStreams, making sure those streams evenly distribute load over the shards, so scaling simply boils down to adding more nodes and rebalancing the JetStreams (auto-rebalancing is not implemented yet).
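
To give an intuition for how rows are spread over stream shards, here is a minimal sketch of a deterministic row-to-shard mapping. The function name, hash choice, and shard count are illustrative assumptions, not Marmot's actual implementation:

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// shardFor deterministically maps a table/row-key pair to one of n
// JetStream shards. Illustrative only; Marmot's real mapping differs.
func shardFor(table, rowKey string, shards uint32) uint32 {
	h := fnv.New32a()
	h.Write([]byte(table))
	h.Write([]byte{0}) // separator so ("ab","c") and ("a","bc") differ
	h.Write([]byte(rowKey))
	return h.Sum32() % shards
}

func main() {
	// Every node computes the same shard for the same row, so all
	// changes to that row funnel through a single JetStream.
	fmt.Println(shardFor("Books", "1", 4))
}
```

Because the mapping is a pure function of the row identity, no coordination is needed to decide which stream a change belongs to.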

There are a few similar solutions, such as rqlite, dqlite, and LiteFS. All of them either are layers on top of SQLite (e.g. rqlite, dqlite) that have to sit in the middle with a network layer in order to provide replication, or intercept physical page-level writes to stream them off to replicas. In both cases they require a single primary node that all writes must go through, with the changes then applied to multiple read-only replicas.

Marmot, on the other hand, is born different. It is built to act as a sidecar to your existing processes:

Making these choices has multiple benefits:

What happens when there is a race condition?

In Marmot every row is uniquely mapped to a JetStream. This guarantees that any node publishing changes for a row has to go through the same JetStream as everyone else. If two nodes change the same row in parallel, both nodes compete to publish their change to the JetStream cluster. Due to the RAFT quorum constraint, only one writer will get its change published first. As these changes are applied (even the publisher applies its own changes to the database), the last writer always wins. This means there is NO serializability guarantee for a transaction spanning multiple tables. This is a deliberate design choice that avoids any sort of global locking in favor of performance.
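
The last-writer-wins behavior above can be sketched with a simplified change record (Marmot's real change-log format carries more fields). Since all changes to a row arrive in a single total order from its JetStream, replaying them in arrival order means the last published change sticks:

```go
package main

import "fmt"

// change is a simplified change-log entry for illustration only.
type change struct {
	rowKey string
	value  string
}

// applyAll replays changes in the total order established by the
// row's JetStream. Overwriting in order makes the last writer win.
func applyAll(changes []change) map[string]string {
	state := map[string]string{}
	for _, c := range changes {
		state[c.rowKey] = c.value
	}
	return state
}

func main() {
	// Two nodes raced to update the same row; the JetStream cluster
	// ordered node B's publish after node A's.
	state := applyAll([]change{
		{"Books:1", "written by node A"},
		{"Books:1", "written by node B"},
	})
	fmt.Println(state["Books:1"]) // node B's write wins on every node
}
```

Every node replays the same ordered stream, so all replicas converge on the same final row state even though no node holds a lock.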

Stargazers over time


Limitations

Right now the current solution has a few limitations:

Features

- Eventually consistent
- Leaderless replication
- Fault tolerant
- Built on NATS

Dependencies

Starting with 0.8+, Marmot comes with an embedded nats-server with JetStream support. This not only reduces the dependencies/processes one has to spin up, but also provides out-of-the-box tooling like the NATS CLI. Thanks to standard client library support, you can also build additional tooling and scripts with existing libraries. Here is one example using Deno:

```shell
deno run --allow-net https://gist.githubusercontent.com/maxpert/d50a49dfb2f307b30b7cae841c9607e1/raw/6d30803c140b0ba602545c1c0878d3394be548c3/watch-marmot-change-logs.ts -u <nats_username> -p <nats_password> -s <comma_separated_server_list>
```

The output will look something like this: [screenshot]

Production status

CLI Documentation

Marmot deliberately picks simplicity and fewer configuration knobs. Here are the command-line options you can use to configure Marmot:

For more details on Marmot's internal workings, see these docs.

FAQs & Community

Our sponsor

Last but not least, we would like to thank our sponsors, who have been supporting the development of this project.

<img src="https://resources.jetbrains.com/storage/products/company/brand/logos/GoLand_icon.png" alt="GoLand logo." height="64" /> <img src="https://resources.jetbrains.com/storage/products/company/brand/logos/jb_beam.png" alt="JetBrains Logo (Main) logo." height="64">