Equinox

Equinox is a set of low-dependency libraries that enable event-sourced processing against stream-based stores, handling:

Not a framework; you compose the libraries into an architecture that fits your apps' evolving needs.

It does not and will not handle projections and subscriptions. See Propulsion for that.

Table of Contents

Getting Started

Design Motivation

Equinox's design is informed by discussions, talks and countless hours of hard and thoughtful work invested into many previous systems, frameworks, samples, forks of samples, the outstanding continuous work of the EventStore founders and team and the wider DDD-CQRS-ES community. It would be unfair to single out even a small number of people despite the immense credit that is due. Some aspects of the implementation are distilled from Jet.com systems dating all the way back to 2013.

An event sourcing system usually needs to address the following concerns:

  1. Storing events with good performance and debugging capabilities
  2. Transaction processing
    • Optimistic concurrency (handle loading conflicting events and retrying if another transaction overlaps on the same stream)
    • Folding events into a State, updating it as new events are added (see the sketch after this list)
  3. Decoding events using codecs and formats
  4. Framework and application integration
  5. Projections and Reactions
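
To make concern 2 concrete, here's a minimal sketch of such a fold (the Event/State/initial/evolve/fold naming follows the conventions used throughout the Equinox samples; the domain itself is illustrative):

    type Event =
        | Added of item: string
        | Removed of item: string

    type State = string list
    let initial: State = []

    // evolve: pure function applying a single Event to the State
    let evolve (state: State) = function
        | Added item -> item :: state
        | Removed item -> state |> List.filter (fun x -> x <> item)

    // fold: replays a batch of Events over a given State
    let fold: State -> Event seq -> State = Seq.fold evolve

Concern 2's optimistic concurrency then amounts to re-running the decide/fold cycle against the latest State whenever a competing write to the same stream wins the race.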

Designing something that supports all of these as a single integrated solution results in an inflexible, difficult-to-use framework. Thus, Equinox focuses on two central aspects of event sourcing: items 1 and 2 on the list above.

Of course, the other concerns can't be ignored; thus, they are supported via other libraries that focus on them:

Integration with other frameworks (e.g., Equinox wiring into ASP.NET Core) is something that is intentionally avoided; as you build your application, the nature of how you integrate things will naturally evolve.

We believe the fact Equinox is a library is critical:

If you're looking to learn more about and/or discuss Event Sourcing and its myriad benefits, trade-offs and pitfalls as you apply it to your Domain, look no further than the thriving 4000+ member community on the DDD-CQRS-ES Discord; you'll get patient and impartial world-class advice 24x7 (there are #equinox, #eventstore and #sql-stream-store channels for questions or feedback). (invite link)

Features

Currently Supported Data Stores

Components

The components within this repository are delivered as multi-targeted NuGet packages supporting net6.0 (F# >= 6) profiles; each of the constituent elements is designed to be easily swappable as dictated by the task at hand. Each of the components can be inlined or customized easily:

Core library

Serialization support

Data Store libraries

Projection libraries

Equinox does not focus on projection logic - each store brings its own strengths, needs, opportunities and idiosyncrasies. Here's a list of relevant libraries from sibling projects that are typically used in this context:

dotnet tool provisioning / benchmarking tool

Starter Project Templates and Sample Applications

Overview

The Propulsion Perspective

Equinox and Propulsion have a yin-and-yang relationship; the use cases for both naturally interlock and overlap. It can be relevant to peruse the Propulsion Documentation's Overview Diagrams for the complementary perspective (TL;DR it's largely the same topology, with elements that are central here de-emphasized over there, and vice versa)

C4 Context diagram

Equinox focuses on the Consistent Processing element of building an event-sourced system, offering tailored components that interact with a specific Consistent Event Store, as laid out in this C4 System Context Diagram:

Equinox c4model.com Context Diagram

:point_up: Propulsion elements (which we consider External to Equinox) support the building of complementary facilities as part of an overall Application:

C4 Container diagram

The relevant pieces of the above break down as follows, when we emphasize the Containers aspects relevant to Equinox:

Equinox c4model.com Container Diagram

See Overview section in DOCUMENTATION.md for further drill down

TEMPLATES

The best place to start, sample-wise, is with the QuickStart, which walks you through sample code, tuned for approachability, from dotnet new templates stored in a dedicated repo.

SAMPLES

The samples/ folder contains various further examples (some of the templates are derived from these), with the complementary goals of:

<a name="TodoBackend"></a>

TODOBACKEND, see samples/TodoBackend

The repo contains a vanilla ASP.NET Core implementation of the well-known TodoBackend Spec. NB the implementation is largely dictated by spec; no architectural guidance expressed or implied ;). It can be run via:

& dotnet run --project samples/Web -S es # run against eventstore, omit `es` to use in-memory store, or see PROVISIONING EVENTSTORE
start https://www.todobackend.com/specs/index.html?https://localhost:5001/todos # for low-level debugging / validation of hosting arrangements
start https://www.todobackend.com/client/index.html?https://localhost:5001/todos # standard JavaScript UI
start http://localhost:5341/#/events # see logs triggered by `-S` above in https://getseq.net        

STORE, see /samples/Store

The core sample in this repo is the Store sample, which contains code and tests extracted from real implementations (with minor simplifications in some cases).

These facts mean that:

While these things can of course be perfected through PRs, this is definitely not top of the work list for the purposes of this repo. (We'd be delighted to place links to other samples, including cleanups / rewrites of these samples written with different testing platforms, web platforms, or DDD/CQRS/ES design flavors right here).

m-r port, see samples/Store/Domain/InventoryItem.fs

For fun, there's a direct translation of the InventoryItem Aggregate and Command Handler from Greg Young's m-r demo project as one could write it in F# using Equinox. NB any typical presentation of this example includes copious provisos and caveats about it being a toy example written almost a decade ago.

samples/Tutorial (in this repo): Annotated .fsx files with sample aggregate implementations

@ameier38's Tutorial

Andrew Meier has written a very complete tutorial modeling a business domain using Equinox and EventStoreDB; includes Dockerized Suave API, test suite using Expecto, build automation using FAKE, and CI using Codefresh; see the repo and its overview blog post.

QuickStart

Spin up a TodoBackend .fsproj app (storing in Equinox.MemoryStore Simulator)

  1. Make a scratch area

    mkdir ExampleApp
    cd ExampleApp 
    
  2. Use a dotnet new template to get fresh code in your repo

    dotnet new -i Equinox.Templates # see source in https://github.com/jet/dotnet-templates
    dotnet new eqxweb -t # -t for todos, defaults to memory store (-m) # use --help to see options regarding storage subsystem configuration etc
    
  3. Run the TodoBackend:

    dotnet run --project Web
    
  4. Run the standard TodoMvc frontend against your locally-hosted, fresh backend (See generated README.md for more details)

Spin up a TodoBackend .csproj ... with C# code

While Equinox is implemented in F#, and F# is a great fit for writing event-sourced domain models, the APIs are not F#-specific; there's a C# edition of the template. The instructions are identical to the rest, but you need to use the eqxwebcs template instead of eqxweb.

Store data in EventStore

  1. install EventStore locally (requires admin privilege)

    • For Windows, install with Chocolatey:

      cinst eventstore-oss -y # where cinst is an invocation of the Chocolatey Package Installer on Windows
      
    • For OSX, install with brew cask install eventstore

  2. start the local EventStore instance on any OS:

    • Check out the github.com/jet/equinox repo
    • docker compose up

    For more complete instructions, follow https://developers.eventstore.com/server/v21.10/installation.html#use-docker-compose

  3. generate sample app with EventStore wiring from template and start

    dotnet new eqxweb -t -e # -t for todos, -e for eventstore
    dotnet run --project Web
    
  4. browse writes at http://localhost:2113/web/index.html#/streams

Store data in Azure CosmosDB

  1. export 3x env vars (see provisioning instructions)

    $env:EQUINOX_COSMOS_CONNECTION="AccountEndpoint=https://....;AccountKey=....=;"
    $env:EQUINOX_COSMOS_DATABASE="equinox-test"
    $env:EQUINOX_COSMOS_CONTAINER="equinox-test"
    
  2. use the eqx tool to initialize the database and/or container (using preceding env vars)

    dotnet tool uninstall Equinox.Tool -g
    dotnet tool install Equinox.Tool -g --prerelease
    eqx init -ru 400 cosmos # generates a database+container, adds optimized indexes
    
  3. generate sample app from template, with CosmosDB wiring

    dotnet new eqxweb -t -c # -t for todos, -c for cosmos
    dotnet run --project Web
    
  4. Use the eqx tool to dump stats relating to the contents of the CosmosDB store

    # run queries to determine how many streams, docs, events there are in the container
    eqx -V stats -P cosmos # -P to run in parallel # -V to show underlying query being used
    
  5. Use the eqx tool to query streams and/or snapshots in a CosmosDB store

    <a name="eqx-query"></a>

    # Add indexing of the `u`nfolds borne by Tip Items: 1) `c` for the case name 2) `d` for fields of uncompressed unfolds 
    eqx init -m serverless --indexunfolds cosmos -d db -c $EQUINOX_COSMOS_VIEWS
    
    # query all streams LIKE "$User-%" with `Snapshotted2` unfolds. Batches of up to 100,000 events
    eqx query -cn '$User' -un Snapshotted2 cosmos -d db -c $EQUINOX_COSMOS_VIEWS -b 100000
    
    # use a wild card (LIKE) for the stream name 
    eqx query -cl '$Us%' -un Snapshotted cosmos -d db -c $EQUINOX_COSMOS_VIEWS -b 100000
    # > Querying Default: SELECT c.u, c.p, c._etag FROM c WHERE c.p LIKE "$Us%" AND EXISTS (SELECT VALUE u FROM u IN c.u WHERE u.c = "Snapshotted") {}
    # > Page 7166s, 7166u, 0e 320.58RU 3.9s {}
    # > Page 1608s, 1608u, 0e 68.59RU 0.9s {}
    # > TOTALS 1c, 8774s, 389.17RU 4.7s {}   
    
    # Skip loading the _etag to simulate a query where you will only render the result (not `Transact` against it)
    eqx query -cn '$User' -m readonly -un Snapshotted cosmos -d db -c $EQUINOX_COSMOS_VIEWS -b 100000
    # > Querying ReadOnly: SELECT c.u FROM c WHERE c.p LIKE "$User-%" AND EXISTS (SELECT VALUE u FROM u IN c.u WHERE u.c = "Snapshotted") {}
    # > Page 8774s, 8774u, 0e 342.33RU 3.8s {}
    # > TOTALS 0c, 8774s, 342.33RU 3.8s {} # 👈 cheaper and only one batch as no .p or ._etag 
    
    # add criteria filtering based on an Uncompressed Unfold
    eqx query -cn '$User' -un EmailIndex -uc 'u.d.email = "a@b.com"' cosmos -d db -c $EQUINOX_COSMOS_VIEWS -b 100000
    # > Querying Default: SELECT c.u, c.p, c._etag FROM c WHERE c.p LIKE "$User-%" AND EXISTS (SELECT VALUE u FROM u IN c.u WHERE u.c = "EmailIndex" AND u.d.email = "a@b.com") {}
    # > Page 0s, 0u, 0e 2.8RU 0.7s {}
    # > TOTALS 0c, 0s, 2.80RU 0.7s {} # 👈 only 2.8RU if nothing is returned
    
    # DUMP ONE STREAM TO A FILE (equivalent to queries performed by CosmosStore.AccessStrategy.Unoptimized)
    # Can be imported into another store via `propulsion sync cosmos from json`
    eqx query -sn 'user-f28fb6feea00550e93ca77b6f29899cd' -o dump-user.json cosmos -d db -c $EQUINOX_COSMOS_CONTAINER -b 9999
    # > Dumping Raw content to ./dump-user.json {}
    # > Querying Raw: SELECT * FROM c WHERE c.p = "user-f28fb6feea00550e93ca77b6f29899cd" AND 1=1 {}
    # > Page 9s, 1u, 10e 3.23RU 0.5s 0.0MiB age 0002.10:04:13 {} # 👈 2.80 if no results, adds per KiB charge if there are results 
    # > TOTALS 1c, 9s, 3.23RU R/W 0.0/0.0MiB 3.9s {}
    
    # DUMP FULL CONTENT OF THE CONTAINER TO A FILE
    # Can be imported into another store via `propulsion sync cosmos from json`
    eqx query -o ../dump-240216.json cosmos -d db -c $EQUINOX_COSMOS_CONTAINER -b 9999                             
    # > Dumping Raw content to ~/dumps/dump-240216.json {}
    # > No StreamName or CategoryName/CategoryLike specified - Unfold Criteria better be unambiguous {}
    # > Querying Raw: SELECT * FROM c WHERE 1=1 AND 1=1 {}
    # > Page 2972s, 748u, 3112e 108.9RU 3.8s 4.0MiB age 0212.18:00:45 {}
    # > Page 3211s, 777u, 3161e 112.29RU 3.3s 4.0MiB age 0212.09:06:02 {}
    # > Page 3003s, 663u, 3172e 110.33RU 3.4s 4.0MiB age 0211.04:09:12 {}
    # <chop>
    # > Page 2768s, 498u, 3153e 107.46RU 3.0s 4.0MiB age 0016.13:09:02 {}
    # > Page 2806s, 505u, 3198e 107.17RU 3.0s 4.0MiB age 0010.18:52:45 {}
    # > Page 2903s, 601u, 3188e 107.53RU 3.1s 4.0MiB age 0004.05:24:51 {}
    # > Page 2638s, 316u, 3019e 93.09RU 2.5s 3.4MiB age 0000.05:08:38 {}
    # > TOTALS 11c, 206,356s, 7,886.75RU R/W 290.4/290.4MiB 225.3s {}
    
  6. Use propulsion sync tool to run a CosmosDB ChangeFeedProcessor

    dotnet tool uninstall Propulsion.Tool -g
    dotnet tool install Propulsion.Tool -g --prerelease
    
    propulsion init -ru 400 cosmos # generates a -aux container for the ChangeFeedProcessor to maintain consumer group progress within
    # -V for verbose ChangeFeedProcessor logging
    # `-g projector1` represents the consumer group - >=1 are allowed, allowing multiple independent projections to run concurrently
    # stats specifies one only wants stats regarding items (other options include `kafka` to project to Kafka)
    # cosmos specifies source overrides (using defaults in step 1 in this instance)
    propulsion -V sync -g projector1 stats from cosmos
    
  7. Generate a CosmosDB ChangeFeedProcessor sample .fsproj (without Kafka producer/consumer), using Propulsion.CosmosStore

    dotnet new -i Equinox.Templates
    
    # note the absence of -k means the projector code will be a skeleton that does no processing besides counting the events
    dotnet new proProjector
    
    # start one or more Projectors
    # `-g projector2` represents the consumer group; >=1 are allowed, allowing multiple independent projections to run concurrently
    # cosmos specifies source overrides (using defaults in step 1 in this instance)
    dotnet run -- -g projector2 cosmos
    
  8. Use propulsion tool to Run a CosmosDB ChangeFeedProcessor, emitting to a Kafka topic

    $env:PROPULSION_KAFKA_BROKER="instance.kafka.mysite.com:9092" # or use -b	
    # `-V` for verbose logging	
    # `projector3` represents the consumer group; >=1 are allowed, allowing multiple independent projections to run concurrently	
    # `-l 5` to report ChangeFeed lags every 5 minutes	
    # `kafka` specifies one wants to emit to Kafka	
    # `temp-topic` is the topic to emit to	
    # `cosmos` specifies source overrides (using defaults in step 1 in this instance)	
    propulsion -V sync -g projector3 -l 5 kafka temp-topic from cosmos	
    
  9. Generate CosmosDB Kafka Projector and Consumer .fsprojects (using Propulsion.Kafka)

    cat readme.md # more complete instructions regarding the code
    
    # -k requests inclusion of Apache Kafka support
    md projector | cd
    dotnet new proProjector -k
    
    # start one or more Projectors (see above for more examples/info re the Projector.fsproj)
    
    $env:PROPULSION_KAFKA_BROKER="instance.kafka.mysite.com:9092" # or use -b
    $env:PROPULSION_KAFKA_TOPIC="topic0" # or use -t
    dotnet run -- -g projector4 -t topic0 cosmos
    
    # generate a consumer app
    md consumer | cd
    dotnet new proConsumer
    
    # start one or more Consumers
    $env:PROPULSION_KAFKA_GROUP="consumer1" # or use -g
    dotnet run -- -t topic0 -g consumer1
    
  10. Generate an Archive container; Generate a ChangeFeedProcessor App to mirror desired streams from the Primary to it

    # once
    eqx init -ru 400 cosmos -c equinox-test-archive
    
    md archiver | cd
    
    # Generate a template app that'll sync from the Primary (i.e. equinox-test)
    # to the Archive (i.e. equinox-test-archive)
    dotnet new proArchiver
    
    # TODO edit Handler.fs to add criteria for what to Archive
    # - Normally you won't want to Archive stuff like e.g. `Sync-` checkpoint streams
    # - Any other ephemeral application streams can be excluded too
    
    # -w 4 # constrain parallel writers in order to leave headroom for readers; Archive container should be cheaper to run
    # -S -t 40 # emit log messages for Sync calls costing > 40 RU
    # -md 20 (or lower) is recommended to be nice to the writers - the archiver can afford to lag
    dotnet run -c Release -- -w 4 -S -t 40 -g ArchiverConsumer `
      cosmos -md 20 -c equinox-test -a equinox-test-aux `
      cosmos -c equinox-test-archive 
    
  11. Use a ChangeFeedProcessor driven from the Archive Container to Prune the Primary

    md pruner | cd
    
    # Generate a template app that'll read from the Archive (i.e. equinox-test-archive)
    # and prune expired events from the Primary (i.e. equinox-test)
    dotnet new proPruner
    
    # TODO edit Handler.fs to add criteria for what to Prune
    # - While it's possible to prune the minute it's archived, normally you'll want to allow a time lag before doing so
    
    # -w 2 # constrain parallel pruners in order to not consume RUs excessively on Primary
    # -md 10 (or lower) is recommended to constrain consumption on the Archive - Pruners lagging is rarely critical
    dotnet run -c Release -- -w 2 -g PrunerConsumer `
      cosmos -md 10 -c equinox-test-archive -a equinox-test-aux `
      cosmos -c equinox-test
    

<a name="sqlstreamstore"></a>

Use SqlStreamStore

SqlStreamStore is provided in the samples and the eqx tool:

cd ~/code/equinox

# set up the DB/schema
dotnet run --project tools/Equinox.Tool -- initsql pg -c "connectionstring" -p "u=un;p=password" -s "schema"

# run a benchmark
dotnet run -c Release --project tools/Equinox.Tool -- loadtest -t saveforlater -f 50 -d 5 -C -U pg -c "connectionstring" -p "u=un;p=password" -s "schema"

# run the webserver, -A to autocreate schema on connection
dotnet run --project samples/Web/ -- my -c "mysqlconnectionstring" -A

# set up the DB/schema
eqx initsql pg -c "connectionstring" -p "u=un;p=password" -s "schema"

# run a benchmark
eqx loadtest -t saveforlater -f 50 -d 5 -C -U pg -c "connectionstring" -p "u=un;p=password" -s "schema" 
eqx dump "SavedForLater-ab25cc9f24464d39939000aeb37ea11a" pg -c "connectionstring" -p "u=un;p=password" -s "schema" # show stored JSON (Guid shown in eqx loadtest output) 

<a name="message-db"></a>

Use MessageDB

MessageDb support is provided in the samples and the eqx tool:

Equinox does not provide utilities for configuring or installing MessageDB. See MessageDB's installation documentation.

In addition to the default access strategy of reading the whole stream forwards in batches, the following access strategies are supported in MessageDb:

AccessStrategy.LatestKnownEvent

AccessStrategy.AdjacentSnapshots

<a name="dynamodb"></a>

Use Amazon DynamoDB

DynamoDB is supported in the samples and the eqx tool, equivalent to the CosmosDB support described above:

  1. The tooling and samples in this repo default to using the following environment variables (see AWS CLI UserGuide for more detailed guidance as to specific configuration)

    $env:EQUINOX_DYNAMO_SERVICE_URL="https://dynamodb.us-west-2.amazonaws.com" # Simulator: "http://localhost:8000"
    $env:EQUINOX_DYNAMO_ACCESS_KEY_ID="AKIAIOSFODNN7EXAMPLE"
    $env:EQUINOX_DYNAMO_SECRET_ACCESS_KEY="AwJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
    $env:EQUINOX_DYNAMO_TABLE="equinox-test"
    $env:EQUINOX_DYNAMO_TABLE_ARCHIVE="equinox-test-archive"
    
  2. Tour of the tools/samples:

    cd ~/code/equinox
    
    # start the simulator at http://localhost:8000 and an admin console at http://localhost:8001/
    docker compose up dynamodb-local dynamodb-admin -d
    
    # Establish the table in us-east-1 - keys come from $EQUINOX_DYNAMO_ACCESS_KEY_ID and $EQUINOX_DYNAMO_SECRET_ACCESS_KEY
    dotnet run --project tools/Equinox.Tool -- initaws -r 10 -w 10 -s new dynamo -t TableName -su https://dynamodb.us-east-1.amazonaws.com
    
    # Check the status and get the streams ARN - keys come from AWS SDK config for us-east-1 region
    dotnet run --project tools/Equinox.Tool -- stats dynamo -t TableName -sr us-east-1
    
    # run a benchmark
    dotnet run -c Release --project tools/Equinox.Tool -- loadtest -t saveforlater -f 50 -d 5 -CU dynamo
    
    # run the webserver
    dotnet run --project samples/Web/ -- dynamo -t TableName
    
    # run a benchmark connecting to the webserver
    eqx loadtest -t saveforlater -f 50 -d 5 -CU web
    eqx dump "SavedForLater-ab25cc9f24464d39939000aeb37ea11a" dynamo # show stored JSON (Guid shown in eqx loadtest output) 
    
  3. Useful articles

BENCHMARKS

A key facility of this repo is being able to run load tests, either in process against a nominated store, or via HTTP to a nominated instance of samples/Web ASP.NET Core host app. The following test suites are implemented at present:

BUILDING

Please note the QuickStart is probably the best way to gain an overview - these instructions are intended to illustrate various facilities of the build script for people making changes.

build and run

Run, including running the tests that assume you've got a local EventStore and pointers to a CosmosDB database and container prepared (see PROVISIONING):

./build.ps1

build, skipping tests that require a Store instance

./build -s

build, skipping all tests

dotnet pack build.proj

build, skip EventStore tests

./build -se

build, skip EventStore tests, skip auto-provisioning / de-provisioning CosmosDB

./build -se -scp

Run EventStore benchmark on .NET Core (when provisioned)

At present, .NET Core seems to show comparable performance under normal conditions, but becomes very unpredictable under heavy load. The following benchmark should produce pretty consistent levels of reads and writes, and can be used as a baseline for investigation:

& dotnet run -c Release --project tools/Equinox.Tool -- loadtest -t saveforlater -f 1000 -d 5 -C -U es

run Web benchmark

The CLI can drive the Store and TodoBackend samples in the samples/Web ASP.NET Core app. Doing so requires starting a web process with an appropriate store (EventStore in this example, but can be memory / omitted etc. as in the other examples)

in Window 1

& dotnet run -c Release --project samples/Web -- -C -U es

in Window 2

dotnet tool install -g Equinox.Tool --prerelease # only once
eqx loadtest -t saveforlater -f 200 web

run CosmosDB benchmark (when provisioned)

dotnet run --project tools/Equinox.Tool -- loadtest `
  cosmos -s $env:EQUINOX_COSMOS_CONNECTION -d $env:EQUINOX_COSMOS_DATABASE -c $env:EQUINOX_COSMOS_CONTAINER

PROVISIONING

Provisioning EventStore (when not using -s or -se)

There's a docker-compose.yml file in the root, so installing docker-compose and then running docker-compose up rigs a local 3-node cluster, which is assumed to be configured for Equinox.EventStore.Integration and Equinox.EventStoreDb.Integration

For more complete instructions, follow https://developers.eventstore.com/server/v21.10/installation.html#use-docker-compose

<a name="provisioning-cosmosdb"></a>

Provisioning CosmosDB (when not using build.ps1 -sc to skip verification)

Using Azure Cosmos DB Service

dotnet run --project tools/Equinox.Tool -- init -ru 400 `
    cosmos -s $env:EQUINOX_COSMOS_CONNECTION -d $env:EQUINOX_COSMOS_DATABASE -c $env:EQUINOX_COSMOS_CONTAINER
# Same for an Archive Container for integration testing of the archive store fallback mechanism
$env:EQUINOX_COSMOS_CONTAINER_ARCHIVE="equinox-test-archive"
dotnet run --project tools/Equinox.Tool -- init -ru 400 `
    cosmos -s $env:EQUINOX_COSMOS_CONNECTION -d $env:EQUINOX_COSMOS_DATABASE -c $env:EQUINOX_COSMOS_CONTAINER_ARCHIVE

Using Cosmos Emulator on an Intel Mac

NOTE There's no Apple Silicon emulator available as yet.

NOTE Have not tested with the Windows Emulator, but it should work with analogous steps.

docker compose up equinox-cosmos -d
bash docker-compose-cosmos.sh

Provisioning SqlStreamStore

There's a docker-compose.yml file in the root, so installing docker-compose and then running docker-compose up rigs local equinox-mssql, equinox-mysql and equinox-postgres servers and databases at known ports. NOTE The Equinox.SqlStreamStore.*.Integration suites currently assume this is in place and will otherwise fail.

DEPROVISIONING

Deprovisioning (aka nuking) EventStore data resulting from tests to reset baseline

While EventStore rarely shows any negative effects from repeated load test runs, it can be useful for various reasons to drop all the data generated by the load tests by casting it to the winds:

# requires admin privilege
rm $env:ProgramData\chocolatey\lib\eventstore-oss\tools\data

Deprovisioning CosmosDB

The provisioning step spins up RUs in CosmosDB for the Container, which will keep draining your account until you reach a spending limit (if you're lucky!). When finished running any test, it's critical to drop the RU allocations back down again via some mechanism (either delete the container or reset the RU provision down to the lowest possible value).

RELEASING

*The perfect is the enemy of the good; all this should of course be automated, but the elephant will be consumed in small bites rather than waiting till someone does it perfectly. This documents the actual release checklist as it stands right now. Any small helping bites much appreciated :pray:*

Tagging releases

This repo uses MinVer; see here for more information on how it works.

All non-alpha releases derive from tagged commits on master or a vX branch. The tag defines the NuGet package id etc. that the release will bear (dotnet pack uses the MinVer package to grab the value from the commit).

Checklist

FAQ

What is Equinox?

OK, I've read the README and the tagline. I still don't know what it does! Really, what's the TL;DR?

Should I use Equinox to learn event sourcing?

You could. However the Equinox codebase itself is not designed to be a tutorial; it's extracted from production systems and optimized; there is no pedagogical mission. FsUno.Prod on the other hand has this specific intention; walking through it is highly recommended. Also EventStore, being a widely implemented and well-respected open source system, has some excellent learning materials and documentation, with a broad user community (search for DDD-CQRS-ES Discord).

Having said that, we'd love to see a set of tutorials written by people looking from different angles, and over time will likely do one too ... there's no reason why the answer to this question can't become "of course!"

Can I use it for really big projects?

You can. Folks in Jet do; we also have systems where we have no plans to use it, or anything like it. That's OK; there are systems where having precise control over one's data access is critical. And (shush, don't tell anyone!) some find writing this sort of infrastructure to be a very fun design challenge that beats doing domain modelling any day...

Can I use it for really small projects and tiny microservices?

You can. Folks in Jet do; but we also have systems where we have no plans to use it, or anything like it, as it would be overkill even for people familiar with Equinox.

OK, but should I use Equinox for a small project?

You'll learn a lot from building your own equivalent wrapping layer. Given the array of concerns Equinox is trying to address, there's no doubt that a simpler solution is always possible if you constrain the requirements to the specifics of your context with regard to a) scale, b) complexity of domain, and c) the degree to which you use or are likely to use more than one data store. You can and should feel free to grab slabs of Equinox's implementation and whack them into an Infrastructure.fs in your project too (note you should adhere to the rules of the Apache 2 license). If you find there's a particular piece you'd really like isolated or callable as a component, and it's causing you pain as you use it over and over in ~>= 3 projects, please raise an Issue though!

Having said that, getting good logging, some integration tests and getting lots of off-by-one errors off your plate is nice; the point of DDD-CQRS-ES is to get beyond toy examples to the good stuff - Domain Modelling on your actual domain.

What client languages are supported?

The main language in mind for consumption is of course F# - many would say that F# and event sourcing are a dream pairing; little direct effort has been expended polishing it to be comfortable to consume from other .NET languages; the dotnet new eqxwebcs template represents the current state. In Equinox V4, DeciderCore offers an API based on C#-friendly Task and Func types (compared to Decider, which uses async and curried function signatures to provide an idiomatic F# experience - possible, but cumbersome, to use from C#).

You say I can use volatile memory for integration tests; could this also be used for learning how to get started building event sourcing programs with Equinox?

The MemoryStore is intended to implement the complete semantics of a durable store (aside from caching). The main benefit of using it is that any tests using it have zero environment dependencies. In some cases this can be very useful for demo apps or generators (rather than assuming a specific store at a specific endpoint and/or credentials, there is something to point at which does not require configuration or assumptions). The single problem is that it's all in-process; the minute you stop the host, the items on your list will of course disappear. In general, EventStore is also an attractive option for prototyping; the open source edition is trivial to install and has a Web UI that lets you navigate events being produced etc.
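
For a flavor of the wiring involved, here's a rough sketch (the constructor argument lists are assumptions that vary across Equinox versions - check the current Equinox.MemoryStore API; name, codec, Fold.fold and Fold.initial are presumed defined per the usual aggregate module conventions):

    open Equinox.MemoryStore

    // in-process, volatile store: zero external dependencies for tests and demos
    let store = VolatileStore()
    // constructor arguments are an assumption - consult the current API docs
    let category = MemoryStoreCategory(store, name, codec, Fold.fold, Fold.initial)
    // resolve Deciders against this category exactly as you would with a durable store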

OK, so it supports CosmosDB, DynamoDB, EventStoreDB, MessageDB and SqlStreamStore and might even support more in the future. I really don't intend to shift datastores. Period. Why would I take on this complexity only to get the lowest common denominator?

Yes, you have decisions to make; Equinox is not a panacea - there is no one size fits all. The philosophy of Equinox is to a) provide an opinionated store-neutral Programming Model with a good pull toward a big pit of success, while b) not closing the door on using store-specific features where relevant. That said, a dedicated store-specific integration is always going to afford you more power and control.

Is there a guide to building the simplest possible hello world "counter" sample, that simply counts with an add and a subtract event?

Yes; Counter.fsx in the Tutorial project in this repo. It may also be worth starting with the API Guide in DOCUMENTATION.md. An alternate way is to look at the Todo.fs files emitted by dotnet new eqxweb in the QuickStart.
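
For a taste of what that covers, the essence of such a counter is just a handful of pure functions (an illustrative sketch, not a verbatim excerpt from Counter.fsx):

    type Event =
        | Added
        | Subtracted

    type State = int
    let initial: State = 0

    let evolve (state: State) = function
        | Added -> state + 1
        | Subtracted -> state - 1
    let fold: State -> Event seq -> State = Seq.fold evolve

    // decisions: emit events describing the change (empty list = no-op)
    let increment (_state: State) = [ Added ]
    let decrement (state: State) = if state > 0 then [ Subtracted ] else []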

<a name="why-snapshots-in-stream"></a>

Why do the snapshots go in the same stream in Equinox.EventStore and Equinox.SqlStreamStore? :pray: @chrisjhoare

I've been looking through the snapshotting code recently. Can see the snapshot events go in the same stream as regular events. Presume this is to save on operations to read/write the streams? And a bit less overhead maintaining two serializers? Are there any other advantages? I quite like it this way, but think I saw the GetEventStore advice was separate streams, so just interested in any other reasoning behind it.

The reason GES recommends against is that the entire db is built on writing stuff once in an append only manner (which is a great design from most aspects). This means your choices are:

The answer as to why that strategy is available in Equinox.EventStore is based on use cases (the second strategy was actually implemented in a bespoke manner initially by @eiriktsarpalis):

The big win is latency in querying contexts - given that access strategy, you're guaranteed to be able to produce the full state of the aggregate with a single roundtrip (if max batch size is 200, the snapshots are written every 200 events, so reading backwards 200 guarantees a snapshot will be included)

The secondary benefit is of course that you have an absolute guarantee there will always be a snapshot, and if a given write succeeds, there will definitely be a snapshot in the maxBatchSize window (but it still copes if there isn't - i.e. you can add snapshotting after the fact)

Equinox.SqlStreamStore implements this scheme too - it's easier to do things like e.g. replace the bodies of snapshot events with nulls as a maintenance task in that instance

Initially, Equinox.CosmosStore implemented the same strategy as Equinox.EventStore (it started as a cut and paste of it). However the present implementation takes advantage of the fact that in a Document Store, you can ... update documents - thus, snapshots (termed unfolds) are saved in a custom field (it's an array) in the Tip document - every update includes an updated snapshot (which is zipped to save read and write costs) that overwrites the unfolds entirely. You're currently always guaranteed that the snapshots are in sync with the latest event by virtue of how the stored proc writes. The DynamoDB impl follows the same strategy.

I expand (too much!) on some more of the considerations in https://github.com/jet/equinox/blob/master/DOCUMENTATION.md

The other thing that should be pointed out is that caching can typically cover a lot of perf stuff as long as stream lengths stay sane - snapshotting (especially polluting the stream with snapshot events) should definitely be toward the bottom of your list of tactics for managing a stream efficiently, given long streams are typically a design smell.

NOTE The newer Equinox.MessageDb store binding implements snapshotting as separated events in a separate category.

<a name="changing-access-strategy"></a>

Changing Access / Representation strategies in Equinox.CosmosStore - what happens?

Does Equinox adapt the stream if we start writing with Equinox.CosmosStore.AccessStrategy.RollingState and change to Snapshotted for instance? It could take the last RollingState writing and make the first snapshot ?

what about the opposite? It deletes all events and start writing RollingState ?

TL;DR yes and no respectively

Some context

Firstly, it's recommended to read the documentation section on Access Strategies

General rules:

The high level skeleton of the loading in a given access strategy is: a) load and decode unfolds from tip (followed by events, if and only if necessary) b) offer the events to an isOrigin function to allow us to stop when we've got a start point (a Reset Event, a relevant snapshot, or, failing that, the start of the stream)

It may be helpful to look at how an AccessStrategy is mapped to isOrigin, toSnapshot and transmute lambdas internally
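
To give a rough feel for those lambdas (the event type names here are hypothetical; the real mappings live in the store libraries):

    // isOrigin: given an event encountered while reading backwards, can we stop here?
    let isOrigin = function
        | Events.Snapshotted _ -> true   // a relevant snapshot is a valid start point
        | Events.Cleared _ -> true       // so is a Reset Event
        | _ -> false                     // otherwise keep reading back (or hit stream start)

    // toSnapshot: derive the unfold to accompany the events being written
    let toSnapshot (state: Fold.State) = Events.Snapshotted (Fold.snapshot state)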

Aaand answering the question

Whenever a State is being built, it always loads Tip first and considers any unfolds in there...

If isOrigin says no to those and/or the EventTypes of those unfolds are not in the union / event type to which the codec is mapping, the next thing is a query backwards of the Batches of events, in order.

All those get pushed onto a stack until we either hit the start, or isOrigin says - yes, we can start from here (at which point all the decoded events are then passed (in forward order) to the fold to make the 'state).

So, if you are doing RollingState or any other mode, there are still events and unfolds; and they all have EventTypes - there are just some standard combos of steps that happen.

If the EventType of the Event or Unfold matches, the fold/evolve will see them and build 'state from that.

Then, whenever you emit events from a decide or interpret, the AccessStrategy will define what happens next; a mix of:

Ouch, not looking forward to reading all that logic :frown: ? Have a read, it's really not that :scream:.

<a name="how-is-expectedVersion-managed"/></a>

Help me understand how the expectedVersion is used with EventStoreDB - it seems very confusing :pray: @dharmaturtle

I'm having some trouble understanding how Equinox+ESDB handles "expected version". Most of the examples use Equinox.Decider.Transact which is storage agnostic and doesn't offer any obvious concurrency checking. In Equinox.EventStore.Context, there's a Sync that takes a Token which holds a streamVersion. Should I be using that instead of Transact?

The bulk of the implementation is in Equinox/Stream.fs, see the let run function.

There are sequence diagrams in Documentation MD but I'll summarize here:

But why, you might ask? The API is designed such that the token can store any kind of state relevant to the Sync operation.

a. for SqlStreamStore and EventStore, when writing rolling snapshots, we need to retain the index of the last Rolling Snapshot that was written, if we encountered it during loading (e.g. if we read V198-100 and there was a snapshot at V101, then we need to write a new one iff the events we are writing would make event 101 be > batchSize events away, i.e. we need to always include a RollingSnapshot to maintain the "if you read the last page, it will include a rolling snapshot" guarantee)

b. for CosmosDB, the expectedVersion can actually be an expectedEtag - this is how AccessStrategy.RollingState works - this allows one to update Unfolds without having to add an event every time just to trigger a change in the version

(The second usage did not necessitate an interface change - i.e. the Token mechanism was introduced to handle the first case, and just happened to fit the second case)

Alternatively, I'm seeing in proReactor that there's a decide that does version checking. Is this recommended? code

If you need to know the version in your actual handler, QueryEx and other such APIs alongside Transact expose it (e.g. if you want to include a version to accompany a directly rendered piece of data). (Note that doing this - including a version in a rendering of something - should not be a go-to strategy; i.e. having APIs that pass around expectedVersion is not a good idea in general)

The typical case for using the version in the output is to be able to publish a versioned summary on a feed, so someone else can build a version-checking idempotent Ingester... Which brings us to:

For that particular reactor, a different thing is going on though: the input value is versioned, and we don't write if the value is in date e.g. if you reset the checkpoint on the projector, it can re-run all the work idempotently:

a. version 3 of something is never temporarily overwritten with V2 and then V3

b. no redundant writes take place (and no expensive RU costs are incurred in Cosmos)

What kind of values does ISyncContext.Version return; i.e. what numeric value is yielded for an empty stream? :pray: @ragiano215

Independent of the backing store being used, Equinox uses 0-based versioning, i.e. the version value is equal to the number of events in the stream. Each event's Index is 0-based, akin to how a .NET array is numbered:
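
A sketch of the numbering (illustrative, not actual store output):

    // stream contents        Index of each event      ISyncContext.Version
    // (empty stream)         -                        0
    // [ e0 ]                 0                        1
    // [ e0; e1; e2 ]         0, 1, 2                  3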

Side note: for contrast, EventStoreDB employs a different (-1-based) scheme in order to have -1/-2 etc represent various expectedVersion conditions

Note that for Equinox.CosmosStore with a pruner-archiver pair configured, the primary store may have been stripped of events due to the operation of the pruner. In this case, it will however retain the version of the stream in the tip document, and if that's non-0, will attempt to load the archived events from the Archive store.

What is Equinox's behavior if one does a Query on a 'non-existent' stream? :pray: @ragiano215

Example: I have an app serving a GET endpoint for a customer order, but the id supplied within the URL is for an order that hasn't yet been created.

Note firstly that Equinox treats a non-existent stream as an empty stream. For the use case stated, it's first recommended that the state is defined to represent this non-existent / uninitialized phase, e.g.: defining a DU with a variant Initial, or in some way following the Null Object Pattern. This value would thus be used as the Fold.initial for the Category. The app will use a .Query/.QueryEx on the relevant Decider, and Equinox will supply the initial value for the project function to render from (as a pattern match).
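
A sketch of that recommendation (the Order domain and all names here are hypothetical):

    type Order = { Items: string list }
    type Event =
        | Created of Order
        | ItemAdded of string

    // the State explicitly models the 'not yet created' phase
    type State =
        | Initial            // a non-existent stream folds zero events over this
        | Active of Order
    let initial = Initial

    let evolve (state: State) (event: Event) =
        match state, event with
        | Initial, Created order -> Active order
        | Active o, ItemAdded item -> Active { o with Items = item :: o.Items }
        | s, _ -> s

    // the project function supplied to Query pattern matches on the State
    let render = function
        | Initial -> None            // the GET handler can surface a 404
        | Active order -> Some order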

Side note: the original question is for a read operation, but there's an interesting consideration if we are doing a Transact. Say, for instance, that there's a PUT API endpoint where the code would register a fresh customer order for the customer in its order list via the Decider's Transact operation. As an optimization, one can utilize the AssumeEmpty hint as the Equinox.LoadOption to hint that it's worth operating on the assumption that the stream is empty. When the internal sync operation attempts to perform the write, that assumption will be tested; every write is always version checked. In the scenario where we are dealing with a rerun of an attempt to create an order (let's say the call timed out, but the processing actually concluded successfully on another node of the API server cluster just prior to the caller giving up), the version check will determine that the expected version is not 0 (as expected when a stream is Empty), but instead 1 (as the preceding invocation wrote one event). In this case, the loop will then use the fold function from the initial state, folding in the single event (via the evolve function), passing that state to the decider function, which, assuming it's implemented in an idempotent manner, will indicate that there are no events to be written.

<a name="what-is-a-decider"/></a>

What is a Decider? How does the Equinox type Decider relate to Jérémie's concept of one? :pray: @rmaziarka

The best treatments of the concept of a Decider are:

  1. Jérémie's intro post - it's not short, but it's required reading for anyone considering event sourcing, regardless of whether you're even going to use a functional programming language to do so.
  2. There's a very thorough treatment with code walk-through and discussion in this 2h45m video on Event Driven Information Systems with Jérémie Chassaing, @thinkb4coding

As teased in both, there will hopefully eventually be a book too :fingers_crossed:

In Equinox

The Equinox type Decider exposes an API that covers the needs of making Consistent Decisions against a State derived from Events on a Stream. At a high level, we have:

NOTE the Decider itself in Equinox does not directly touch all three of the ingredients - while you pass it a decide function, the initial and fold functions are supplied to the specific store library (e.g. Equinox.CosmosStore.CosmosStoreCategory), as that manages the loading, snapshotting, syncing and caching of the state and events.
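
A wiring sketch of that split (the constructor argument lists are assumptions - consult the store library for the exact signatures):

    // the store Category owns loading, folding, caching: it takes initial + fold
    let category =
        CosmosStoreCategory(context, name, codec, Fold.fold, Fold.initial, accessStrategy, caching)
    // the Decider is only ever handed decide functions
    let decider = Equinox.Decider.forStream log category streamId
    let handle command = decider.Transact(decide command)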

In general

While the concept of a Decider plays well with Event Sourcing and many different types of Stores, it's important to note that neither storage nor event sourcing is a prerequisite. A lot of the value of the concept is that you can and should be able to talk about and implement one without reference to any specific store implementation (or even thinking about it ever being stored - it can also be used to manage in-memory structures such as UI trees etc). By the same token, you can decorate/proxy a Decider with loading or saving behavior (not limited to just 'copying the commands'), e.g. you might be syncing saves of changes to a backend in near-real time while the front end is reflecting changes instantaneously.

Consistency

In any system, any decision (or even query) processed by a Decider should be concurrency controlled.

NOTE: the situation might be different if working in an environment where a particular concurrency model is emphasized. E.g., if you're running in an Actor-based system, one may map a decider to an actor. With this, any impetus to change state would be forwarded to that one actor and processed in a serial fashion. Potential conflicts would be managed by a supervisor.

Another example where the situation could be different is if you're building an in-memory decision system to support a game etc, as Jérémie does in the talk. There's only one instance, so that concern is side-stepped.

When applying the concept of a Decider to event sourcing, the consistency requirement means there's more to the exercise than emitting events into a store whose marketing centers on Events. There needs to be a way in the overall processing of a decision to manage a concurrency conflict by taking the state that superseded the one you based the original decision on (the origin state), and re-running the decision based on the reality of that conflicting actual state. The resync operation that needs to take place in that instance can be managed by reloading from events, reloading from a snapshot, or by taking events since your local state and folding those Events on top of it.
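
In pseudocode, that cycle looks roughly like this (a sketch of the semantics just described, not the actual implementation in Stream.fs):

    // decide: 'State -> Event list; trySync: 'State -> Event list -> Result<unit, 'State>
    let rec transact attemptsLeft decide trySync originState =
        let events = decide originState
        match trySync originState events with
        | Ok () -> Ok ()                                // write was version-checked against originState
        | Error conflictingState when attemptsLeft > 0 ->
            // a concurrent write superseded originState: resync, then re-run the decision
            transact (attemptsLeft - 1) decide trySync conflictingState
        | Error _ -> Error "too many conflicting writes"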

The ingredients

With Deciders in general, and Equinox in particular, the following elements are involved:

With the Equinox type Decider, the typical decide signature used with the Transact API has the following signature:

  context -> inputsAndOrCommand -> 'State -> Event list

NOTE: There are more advanced forms that allow the decide function to be Async, inspect the State's Version and/or to also return a 'result, which will be yielded to the caller driving the Decision as the return value of the Transact function.
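
For instance, a decide in that shape might look as follows (names hypothetical; note it's written to be idempotent, so a retry against a conflicting State naturally becomes a no-op):

    type Event = ItemAdded of string
    type State = { Items: string list }

    // context = maxItems, inputsAndOrCommand = itemId
    let decide (maxItems: int) (itemId: string) (state: State) : Event list =
        if state.Items |> List.contains itemId then []     // already applied: no-op
        elif state.Items.Length >= maxItems then []        // or emit a rejection event
        else [ ItemAdded itemId ]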

So what kind of a thing is a Decider then?

Is it a marketing term? Jérémie's way of explaining an app?

I'd present the fact that Equinox:

... as evidence for Decider being a pattern language (analogous to how event sourcing and many modern event sourced UI paradigms share a lot of common patterns).

... about the process of making decisions

Finally, I'd say that a key thing the Decider concept brings is a richer way of looking at event sourcing than the typical event sourcing 101 examples you might see:

The missing part beyond that basic anemic stuff is where the value lies:

Quite frequently, a Decider may internally operate as a Process Manager, encapsulating a State Machine. That is to say, there will be a subset of the Deciders in a system that are providing APIs that support some overall protocol that enforces some lifecycle rules.

With Equinox.CosmosStore, it seems it should be possible to handle saving multiple events from multiple streams as an atomic transaction, as long as they share the same partition key in Cosmos DB. However there doesn't seem to be any way to do that with APIs such as Equinox.Decider.Transact? :pray: @rmaziarka

I'm asking because I had this idea which I was workshopping with a friend, that it could solve typical sync problems in typical availability domains.

Let's assume that our domain is Bike Sharing in different cities. Users can reserve a bike, and then access it and ride.

In our system we would have two subdomains:

There could be other subdomains, such as Orders, that would be using Inventory under the hood - like Repairing.

In the system we would like to a) block a particular bike for the user and b) at the same time create a reservation for them to store all important business information.

So we use and save data to the 2 different streams of data - typical ES problem.

We could use event handlers / sagas, but it brings another level of complexity.

In the case above we could assume that data inside a single city will be so small that, even with prolonged usage, it won't fill the whole CosmosDB partition. So we could use it to handle saving 2 events at the same time. (My actual system is not Bike Sharing, and I'm confident of a lack of explosive growth!)

Saving a BikeBlocked event would be handled in a transaction along with the ReservationMade. So we wouldn't end up with the situation where the bike is blocked and the reservation is not made, or conversely.

What do you think of this idea? Does it sound reasonable?

Why not keep it simple and have it all in one logical partition: a high level perspective

I'd actually attack this problem from an event modeling perspective first (Event Storming and other such things are reasonable too, but I personally found the ramp-up on Event Modeling to be reasonable, and it definitely forces you to traverse the right concerns). Good intro article re Event Modeling.

Once you cover enough scenarios of any non-CRUD system, I'd be surprised if you truly end up with a model with just 2 logical streams that you want to combine into 1 for simplicity of event storage because you are covering all the cases and can reason about the model cleanly.

When you have a reasonable handle from a few passes over that (watch out for analysis paralysis, but also don't assume you can keep it simple via :see_no_evil::hear_no_evil: and not talking to enough people who understand the whole needs of the system, aka :speak_no_evil:)...

For any set of decisions you control in a single Decider you need to:

That's a lot of things.

Before we go on, consider this: you want to minimise how much stuff a single Decider does. Adding stuff into a Decider does not add complexity linearly. There is no technical low level silver bullet solution to this problem.

Right, strap yourself in; there's no TL;DR for this one ;)

Preamble

First, I'd refer to some good resources in this space, which describe key functions of an Event Store

Next, I'd like to call out some things that Equinox is focused on delivering, regardless of the backing store:

The provision of the changefeed needs to be considered as a primary factor in the overall design if you're trying to build a general store - the nature of what you are seeking to provide (max latency and ordering guarantees etc) will be a major factor in designing the schema for how you manage the encoding and updating of the items in the store

Sagas?

In the system we would like to block a particular bike for the user. But at the same time create a reservation for them to store all important business information. So we use and save data to the 2 different streams of data - typical ES problem. We could use event handlers / sagas but it brings another level of complexity.

There will always be complexity in any interesting system that should not just be a CRUD layer over a relational DB. Taking particular solution patterns off the table from the off is definitely something you need to be careful to avoid. As an analogy: Having lots of classes in a system can make a mess. But collapsing it all into as few as possible can result in ISP and SRP violations and actually make for a hard to navigate and grok system, despite there being fewer files and fewer lines of code (aka complexity). Coupling things can sometimes keep things Simple, but can also sometimes simply couple things.

In my personal experience

  1. Sagas, PMs and related patterns and techniques can be scary, and there are not many good examples out there in an event sourcing context
  2. You can build a significant number of systems without ever intentionally applying any of those patterns

But, also IME: 3) They're pretty fundamental 4) They are not as hard as you think when you've done two of them 5) Proper stores enable good ways to manage them 6) They enable you to keep the overall complexity of a system down by decoupling things one might artificially couple if you're working with a toolbox where you've denied yourself a space for PMs and Sagas

In other words, my YAGNI detector was on high alert for it, as it seems yours is :wink:

Transactional writes?

In the case above we could assume that data inside a single city will be so small that, even with prolonged usage, it won't fill the whole CosmosDB partition. So we could use it to handle saving 2 events at the same time.

For avoidance of doubt:

You're correct to identify the maximum amount of data being managed in a scope as a critical consideration when coupling stuff within a logical partition in order to be able to achieve consistency when managing updates across two sets of related pieces of information.

Specifically wrt CosmosDB, the following spring to mind without looking anything up:

In other words, it's looking like you're painted into a corner: you can't shard, can't scale and are asking for hotspotted partition issues. Correct, that doesn't always matter. But even if you have 10 cities, you can't afford for the two busiest ones to be hosted on the same node as that's going to be the one suffering rate limiting. Trust me, it's not even worth thinking about tricks to manage this fundamental problem (trying to influence the sharding etc is going to be real complexity you do not want to be explaining to other engineers on a whiteboard or a 5am conf call)

So why do all these things spring to mind?

TL;DR on Cosmos DB for storing events (but really for all usages)

Why think about it and explain it at such a low level?

Why think about this problem from such a low level perspective? Why not just stick to the high level, given that's equally important to get right, and if done correctly will more often yield a cleanly implementable solution?

Many people have a strong desire to write the least amount of code possible, and that's not unreasonable. The most critical question is going to be, does it work at all? Due to the combination of factors above, the answer is looking pretty clear. You can write the code and run it to be sure. I already have, in spike branches, and will save you the spoiler.

However, the fundamental things that arise when viewing it at a low/storage design level also have high level implications in terms of modelling the software, and different people will understand them better from different angles.

I've witnessed people attempt to 'solve' the fundamental low level issues by working around reality, moving it all into a Cosmos DB Stored Procedure (Yes, you can guess the outcome). Please don't even think about that, no matter how many tech tricks you'll learn!

Conclusion

You know what's coming next: You don't want to merge two Deciders and 'Just' bring it all under a nice tidy transaction to avoid thinking about Process Managers and/or other techniques.

If you're still serious about making the time investment to write a PoC (or a real) implementation of a Store on a Document DB such as CosmosDB (and/or even writing a SQL-backed one without studying the prior art in that space intently), you can't afford not to invest that time in watching a 2h45m video about Deciders!

OK, but you didn't answer my question, you just talked about stuff you wanted to talk about!

😲 Please raise a question-Issue, and we'll be delighted to either answer directly, or incorporate the question and answer here

Acknowledgements

The diagrams in this README.md and the DOCUMENTATION.md would not and could not have happened without the hard work and assistance of at least:

FURTHER READING

See DOCUMENTATION.md and Propulsion's DOCUMENTATION.md