.NET vs JVM JSON serialization
This is a reference benchmark for the fastest JSON serialization libraries in .NET and the JVM. It also includes various other libraries to show the differences between them across various scenarios.
A variety of models is tested, from small simple objects to very complex large objects, with different numbers of loops. Unlike most benchmarks, which use static test data, the test data for this benchmark is created within the code.
Many libraries are unable to fully complete the benchmark, for various reasons.
To give more interesting results, we'll also run tests on Mono to see how it compares to JVM/.NET.
Models
Testing assumptions
- .NET: from stream and to stream - we want to avoid LOH issues, so no `byte[]` examples (even if it could be used on small objects); while we could use a `byte[]` pool just for serialization, this bench doesn't currently test for that
- JVM: from `byte[]` to `OutputStream` - while .NET reuses the same stream instance, JVM libraries consume a `byte[]` input and are expected to write to the resulting stream (both objects are reused to avoid creating garbage); a minimal loop sketch illustrating this setup follows the list below
- single thread testing - tests are run on a single thread in a multi-CPU environment, which tests the actual serialization algorithms and minimizes the influence of excessive GC which can be processed on other threads
- simple model - tests actual serialization overhead since there is little serialization to do
- standard model - non-trivial document model, should represent real world scenarios
- large model - big documents - tests should be bound by non-infrastructure parts and stress underlying platform/serialization algorithms
- almost default configuration - large model contains "advanced" features, such as interface serialization.
- one test at a time - perform one test and exit - while this will nullify JVM runtime optimizations, they should show up in tests with larger numbers of loops.
- track the duration of creating new object instances
- some other libraries are available for testing, but not included in the results (fastJSON, Genson, Microsoft Bond, FST, ...)
- JMH is not used - but dead code elimination is not really an issue (one can always use the Check argument to be 100% sure there is no dead code elimination).
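To make the testing assumptions concrete, here is a minimal sketch of what a JVM-side measurement loop could look like under them: test data created in code, a reused output buffer, deserialization from `byte[]`, everything on a single thread. It uses Jackson as a stand-in library and a hypothetical `Message` class; the benchmark's real harness and models are more elaborate.

```java
import com.fasterxml.jackson.databind.ObjectMapper;
import java.io.ByteArrayOutputStream;

public class LoopSketch {
    // Hypothetical payload; the benchmark's real Message/Post/Book models are more elaborate.
    static class Message {
        public String message;
        public int version;
    }

    public static void main(String[] args) throws Exception {
        ObjectMapper mapper = new ObjectMapper();
        // Output target is allocated once and reused, mirroring the "avoid creating garbage" assumption.
        ByteArrayOutputStream out = new ByteArrayOutputStream(64 * 1024);
        Message msg = new Message();
        long start = System.nanoTime();
        for (int i = 0; i < 1_000_000; i++) {
            // Test data is created in code, not loaded from a static file.
            msg.message = "msg " + i;
            msg.version = i;
            out.reset();
            mapper.writeValue(out, msg);                                        // serialize to the reused stream
            Message back = mapper.readValue(out.toByteArray(), Message.class);  // deserialize from byte[]
            if (back.version != msg.version) throw new IllegalStateException("round-trip mismatch");
        }
        System.out.println("took " + (System.nanoTime() - start) / 1_000_000 + " ms");
    }
}
```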
Libraries
- Newtonsoft.Json 9.0.1 - "Popular high-performance JSON framework for .NET"
- Revenj.Json 1.3.1 - Part of the Revenj framework. POCO + serialization/deserialization methods
- Service Stack 4.0.60 - ".NET's fastest JSON Serializer"
- Jil 2.14.3 - "aims to be the fastest general purpose JSON (de)serializer for .NET"
- NetJSON 1.1.0 - "Faster than Any Binary?"
- Jackson 2.8.1 - "aims to be the best possible combination of fast, correct, lightweight, and ergonomic for developers"
- DSL-JSON 1.1.2 - DSL Platform compatible Java library. POJO + serialization/deserialization methods
- Boon 0.6.6 - "Boon is the probably the fastest way to serialize and parse JSON in Java so far for your project"
- Alibaba/fastjson 1.2.12 - "measured to be faster than any other Java parser and databinder, includes jackson"
- Gson 2.7 - "Gson is a Java library that can be used to convert Java Objects into their JSON representation"
Startup times
It's a known issue that serialization libraries suffer from startup overhead, since they need to build and cache parsers for types. Let's see how much of an issue that is:
Small 1 (Message)
As expected, baked-in serialization code has minimal startup time, since it was amortized at compile time. While this can be nullified on servers with longer uptime, it can cause noticeable delays in mobile apps. Java seems to be doing exceptionally well on startup times.
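A rough way to see this cost for a reflection-based library is to time the first serialization call (which builds and caches the serializer) against a subsequent one. The sketch below uses Jackson and a hypothetical `Message` class purely as an illustration; it is not the benchmark's startup measurement.

```java
import com.fasterxml.jackson.databind.ObjectMapper;

public class StartupSketch {
    static class Message { public String message = "hi"; public int version = 1; }

    public static void main(String[] args) throws Exception {
        ObjectMapper mapper = new ObjectMapper();
        Message msg = new Message();

        // First call pays for reflection, serializer construction and caching.
        long t0 = System.nanoTime();
        mapper.writeValueAsBytes(msg);
        long first = System.nanoTime() - t0;

        // Subsequent calls reuse the cached serializer.
        t0 = System.nanoTime();
        mapper.writeValueAsBytes(msg);
        long second = System.nanoTime() - t0;

        System.out.printf("first call: %d us, second call: %d us%n", first / 1000, second / 1000);
    }
}
```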
Small model
A model with only a few simple properties. This test mostly shows the overhead of the library, since there is little serialization to do.
Small 1.000.000 (Message)
Since there is a large number of loops, JVM optimization kicks in, so it's interesting to compare this to both a smaller number of loops (100k) and a larger number of loops (10M).
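The effect of loop count on JIT optimization can be observed by timing the same work in successive batches; later batches typically run faster once the hot paths are compiled. This is a simplified illustration with Jackson and a hypothetical `Message` class, not the benchmark harness itself.

```java
import com.fasterxml.jackson.databind.ObjectMapper;

public class WarmupSketch {
    static class Message { public String message; public int version; }

    public static void main(String[] args) throws Exception {
        ObjectMapper mapper = new ObjectMapper();
        Message msg = new Message();
        // Time identical batches back to back; later batches benefit from JIT compilation.
        for (int batch = 0; batch < 5; batch++) {
            long t0 = System.nanoTime();
            for (int i = 0; i < 200_000; i++) {
                msg.message = "msg " + i;
                msg.version = i;
                mapper.writeValueAsBytes(msg);
            }
            System.out.println("batch " + batch + ": " + (System.nanoTime() - t0) / 1_000_000 + " ms");
        }
    }
}
```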
Non-trivial model
The non-trivial model should reflect most CRUD scenarios with documents. Serialization algorithms should show the differences between libraries.
Standard 100.000 (Post)
The LOH issue in .NET prohibits advanced optimizations available on the JVM (in the sense that developers are forced to deal with it instead of focusing on algorithms).
Large model
The large model contains several advanced features, such as interface serialization, occasional byte[] serialization and deeply nested objects. Large strings and other objects are used, which cause 10x slower instance creation in .NET.
Large 500 (Book)
Most libraries are unable to complete this bench (due to the requirement for advanced features; Jackson gets some help through annotations).
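For readers unfamiliar with interface serialization, the following sketch shows the general kind of annotation help Jackson can use for polymorphic, interface-typed properties. The `Note`/`Footnote` types and the `$type` property name are hypothetical examples, not the benchmark's actual model or annotations.

```java
import com.fasterxml.jackson.annotation.JsonSubTypes;
import com.fasterxml.jackson.annotation.JsonTypeInfo;
import com.fasterxml.jackson.databind.ObjectMapper;

public class PolymorphicSketch {
    // Hypothetical interface standing in for interface-typed properties of a large model.
    @JsonTypeInfo(use = JsonTypeInfo.Id.NAME, property = "$type")
    @JsonSubTypes({ @JsonSubTypes.Type(value = Footnote.class, name = "footnote") })
    interface Note { String text(); }

    static class Footnote implements Note {
        public String note = "see appendix";
        public String text() { return note; }
    }

    public static void main(String[] args) throws Exception {
        ObjectMapper mapper = new ObjectMapper();
        String json = mapper.writeValueAsString(new Footnote());
        // The type name is embedded so the interface-typed value can be deserialized back.
        Note back = mapper.readValue(json, Note.class);
        System.out.println(json + " -> " + back.text());
    }
}
```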
Mono comparison (results for 2016/4)
Mono has improved significantly with v4.
Small 10.000.000 (Post)
It's only twice as slow as the .NET version.
Full results
2015/6
AMD Phenom(tm) II X4 955 Processor 3.20 GHz / 24GB RAM
.NET 4.5.1, Mono 4.0.1, JVM 1.7.76/1.8.31
Results for Windows. Results for Linux. .NET vs Mono comparison.
2016/4
AMD Phenom(tm) II X4 955 Processor 3.20 GHz / 24GB RAM
.NET 4.6.2, Mono 4.2.3, JVM 1.8.77
Results for Windows. Results for Linux. .NET vs Mono comparison.
2016/7
Intel(R) Core(TM) i5-2520M CPU @ 2.50GHz / 16GB RAM
.NET 4.6.2, JVM 1.8.102
Results for Windows.
Reproducing results
Run `GatherResults.exe` or `GatherResults.exe . 5`
Individual tests can be run as:
- .NET: `JsonBenchmark.exe` (example: `JsonBenchmark.exe RevenjJsonMinimal Small Both 1000000`)
- JVM: `json-benchmark.jar` (example: `java -jar json-benchmark.jar Jackson Standard Serialization 100000`)
If you are interested in changing the models, then you can:
- install Visual studio plugin: DDD for DSL
- or use dsl-clc.jar with compile.bat
If you want to test other libraries, run the benchmark without arguments to find out which libraries are available. For example, to test Microsoft Bond run: `JsonBenchmark.exe BondBinary Small Both 1000000`.
To check if a library is working correctly, use the Check argument. Some libraries are reported to work incorrectly but are still included in the results (Service Stack serializes DateTime with only 3 sub-second digits, which causes an incorrect comparison after deserialization, ...)
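The idea behind a check run is a simple round-trip comparison: serialize an object, deserialize it, and compare it to the original. The sketch below illustrates that idea with Jackson and a hypothetical `Message` class; the benchmark's Check mode is what actually catches issues such as lost sub-second DateTime precision.

```java
import com.fasterxml.jackson.databind.ObjectMapper;
import java.util.Objects;

public class CheckSketch {
    static class Message {
        public String text;
        public int version;
        @Override public boolean equals(Object o) {
            return o instanceof Message && version == ((Message) o).version
                    && Objects.equals(text, ((Message) o).text);
        }
    }

    public static void main(String[] args) throws Exception {
        ObjectMapper mapper = new ObjectMapper();
        Message original = new Message();
        original.text = "hello";
        original.version = 3;
        // Round-trip and compare: a check-style run reports a failure on any mismatch.
        Message back = mapper.readValue(mapper.writeValueAsBytes(original), Message.class);
        System.out.println(original.equals(back) ? "OK" : "MISMATCH");
    }
}
```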
Conclusions
- JVM seems to always be faster after optimization kicks in
- LOH design issue forces .NET libraries to use suboptimal algorithms
- Newtonsoft is comparable with Jackson on features/quality but not in speed
- .NET libraries have matured over time (although most of them still have various issues)
- Almost everyone claims to be THE FASTEST
- JSON can compete with binary codecs in speed