
> [!WARNING]
> This benchmark is under active development: the API, integrations, and set of benchmarks are all subject to change!

## Latest run

Results are updated by GitHub Actions and can be found here.

## What is it all about?

The general idea is to hide the implementation of each ECS behind a context abstraction and work with that abstraction from the benchmark implementations.
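As a rough illustration, such a context abstraction might look like the sketch below. This is a hypothetical interface, not the repository's actual API; all member names except `DeletesEntityOnLastComponentDeletion` (which is mentioned in the Problems section) are assumptions.

```csharp
// Hypothetical sketch of the per-ECS context abstraction.
// The real interface in the repository may differ in names and shape.
public interface IBenchmarkContext
{
    // Some frameworks destroy an entity when its last component is removed
    // (see Problems), so each context must report that behaviour.
    bool DeletesEntityOnLastComponentDeletion { get; }

    // World lifecycle: create everything before a run, tear it down after.
    void Setup(int entityCount);
    void Cleanup();

    // Operations the benchmarks are measured against.
    void CreateEntities(int count);
    void AddComponent<T>() where T : struct;
    void RemoveComponent<T>() where T : struct;
}
```

Each ECS (Arch, Morpeh, fennecs, ...) would then provide its own implementation of this interface, and every benchmark is written once against the abstraction.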

The benchmark design follows two rules that I try to keep in balance.

The general flow of any benchmark execution is divided into three steps: Setup, the measured benchmark body, and Cleanup.
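With BenchmarkDotNet (which this project uses, as noted in the Problems section), that three-step flow maps onto setup/cleanup attributes roughly as sketched below. The class and method bodies here are illustrative placeholders, not the repository's actual benchmarks.

```csharp
using BenchmarkDotNet.Attributes;

// Illustrative sketch of the Setup -> benchmark body -> Cleanup flow.
public class CreateEntitiesBenchmark
{
    [Params(100_000)]
    public int EntityCount;

    [IterationSetup]
    public void Setup()
    {
        // Create a fresh world before each measured iteration.
    }

    [Benchmark]
    public void CreateEntities()
    {
        // The measured body: create EntityCount entities.
    }

    [IterationCleanup]
    public void Cleanup()
    {
        // Destroy all entities and the world after each iteration.
    }
}
```

Note that, as described under Problems, BenchmarkDotNet may invoke the measured body several times per iteration, which is why entity counts can accumulate between Setup and Cleanup.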

> [!IMPORTANT]
> Don't search for truth here. There won't be any.

## Implemented contexts

| ECS | Version | Implemented | Verified | Notes |
|-----|---------|-------------|----------|-------|
| Arch | 1.3.3-alpha | | | N/A |
| fennecs | 0.5.14-beta | | | N/A |
| Morpeh | stage-2024.1.0 | | | N/A |
| DragonECS | 0.8.61 | | | N/A |
| LeoECS | 2023.6.22 | | | N/A |
| LeoECSLite | 2024.5.22 | | | N/A |
| DefaultECS | 0.18.0-beta01 | | | Analyzer 0.17.8 |
| FlecsNET | 4.0.3 | | | N/A |
| TinyEcs | 1.4.0 | | | N/A |
| Xeno | 0.1.6 | | | N/A |
| FriFlo | 3.0.0-preview.18 | | | N/A |
| StaticEcs | 0.9.0 | | | N/A |

## Implemented benchmarks

| Benchmark | Description |
|-----------|-------------|
| Create Empty Entity | Creates [EntityCount] empty entities |
| Create Entity With N Components | Creates [EntityCount] entities with N components |
| Add N Components | Adds N components to [EntityCount] entities |
| Remove N Components | Removes N components from [EntityCount] entities |
| System with N Components | Performs simple operations on entities (summing numbers) |
| System with N Components Multiple Composition | Same as System with N Components, but with a mixture of other components |

## Running

Just call `Benchmark.sh` from the terminal.

Command line args:

| arg | description | sample |
|-----|-------------|--------|
| `benchmark` | specifies a single benchmark to run | `benchmark=CreateEmptyEntities` |
| `benchmarks` | specifies multiple benchmarks to run | `benchmarks=CreateEmptyEntities,Add1Component` |
| `contexts` | specifies the contexts to run | `contexts=Morpeh,Fennecs,...` |
| `--list` | prints all benchmark names | `--list` |

Since all comparisons are made by string containment, you can simply write something like `contexts=Morpeh` instead of `contexts=MorpehContext`, or `benchmarks=With1,With2`, to launch a subset of benchmarks. The selected benchmarks and contexts are logged to the console. BUT the `benchmark` arg requires an exact name match with the names printed by `--list`.
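For example, assuming `Benchmark.sh` is invoked from the repository root, the calls below combine the args from the table above:

```sh
# Run every benchmark against a single context (substring match on "Morpeh")
./Benchmark.sh contexts=Morpeh

# Run a subset of benchmarks against two contexts
./Benchmark.sh benchmarks=With1,With2 contexts=Morpeh,Fennecs

# Print the exact benchmark names accepted by the `benchmark` arg
./Benchmark.sh --list
```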

## Contribution

## Problems

  1. Because of the nature of BenchmarkDotNet, entity creation is iterated sequentially. For example, when a benchmark creates 100k entities, the world is properly cleared in Setup and Cleanup, but the benchmark body itself is called multiple times, creating 100k entities, then another 100k, and in some cases millions of entities in the world, which can affect creation and deletion performance in certain ECS implementations.
  2. System benchmarks that use the Padding property produce up to 1,100,000 entities each because of how padding is generated. This affects run duration, but for now I'm not sure about the correct way to fix it (perhaps cap the total entity count at EntityCount so it doesn't affect speed, but that would reduce the actual entity count to about 9.9k, giving archetype ECS implementations a significant boost).
  3. Because some frameworks delete an entity when its last component is removed, behaviour differs between tests and benchmarks. For example, the RemoveComponent benchmark runs faster with Arch and fennecs because they do not delete the entity. Because of that, a special property called DeletesEntityOnLastComponentDeletion is required to be implemented in each context.
  4. TinyECS: deleting entities during a lock causes a stack overflow on merge, so the lock is removed during deletions.