Streaming Benchmarks

This package provides micro-benchmarks to measure and compare the performance of various streaming implementations in Haskell.

We have taken due care to ensure that we benchmark correctly and fairly. See the notes on correct benchmarking.

DISCLAIMER: This package is the result of a benchmarking effort carried out by the authors of streamly during its development.

Benchmarks

Most benchmark names are self-explanatory; some are described below. Single operation benchmarks:

| Name | Description |
| --- | --- |
| drain | Just discards all the elements in the stream |
| drop-all | Drops all elements using the drop operation |
| last | Extracts the last element of the stream |
| fold | Sums all the numbers in the stream |
| map | Increments each number in the stream by 1 |
| take-all | Uses take to retain all the elements in the stream |
| filter-even | Keeps even numbers, discards odd |
| scan | Scans the stream using the + operation |
| mapM | Transforms the stream using a monadic action |
| zip | Combines corresponding elements of the two streams together |
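
Most of these operations correspond directly to familiar list functions. The following standalone sketch (using plain Haskell lists rather than any streaming library; the ten-element input is a stand-in for the million-element stream used in the real benchmarks) illustrates what each operation computes:

```haskell
import Data.List (foldl')

-- A tiny stand-in for the one-million element stream used in the
-- real benchmarks.
input :: [Int]
input = [1 .. 10]

main :: IO ()
main = do
  print (drop 10 input)             -- drop-all: [] (all elements dropped)
  print (last input)                -- last: 10
  print (foldl' (+) 0 input)        -- fold: 55
  print (map (+ 1) input)           -- map: [2,3,4,5,6,7,8,9,10,11]
  print (filter even input)         -- filter-even: [2,4,6,8,10]
  print (scanl1 (+) input)          -- scan: running sums, ending in 55
  print (zip input (reverse input)) -- zip: [(1,10),(2,9),...,(10,1)]
```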

Composite operation benchmarks:

| Name | Description |
| --- | --- |
| map x 4 | Performs the map operation 4 times |
| take-map | A take followed by a map |
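
The composite benchmarks chain single operations together. On plain lists, for instance, the two entries above correspond to compositions like the following sketch (not the actual benchmark code):

```haskell
-- map x 4: apply map four times over the stream.
mapX4 :: [Int] -> [Int]
mapX4 = map (+ 1) . map (+ 1) . map (+ 1) . map (+ 1)

-- take-map: a take followed by a map.
takeMap :: Int -> [Int] -> [Int]
takeMap n = map (+ 1) . take n

main :: IO ()
main = do
  print (mapX4 [1, 2, 3])     -- [5,6,7]
  print (takeMap 2 [1, 2, 3]) -- [2,3]
```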

For more details on how each benchmark is implemented, see this benchmark file.

Each benchmark is run in a separate process to avoid any effects of GC interference and sharing across benchmarks.

Benchmark Results

Below we present some results comparing streamly with other streaming implementations. Due care has been taken to keep the comparisons fair. We have optimized each library's code to the best of our knowledge; please point out any measurement issues you find.

Running Benchmarks

You can run individual benchmarks using cabal bench <target>. You may need to specify a build flag to include a particular streaming library e.g. --flag pipes to benchmark the pipes library. Please consult the cabal file to find the exact flag names.
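
As a hypothetical example (the flag name is taken from this README; the target placeholder is illustrative, consult the cabal file for the real target and flag names), an invocation might look like:

```shell
# Build and run one benchmark target, enabling the pipes library.
$ cabal bench --flag pipes <target>
```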

Generating Comparison Reports

You can generate the comparison reports presented on this page yourself. To do so, run the benchmarks using the reporting tool. First build the reporting tool with the following commands:

$ cd bench-runner
$ cabal install --project-file cabal.project.user --installdir ../bin

If you want to create a report for benchmarks showing a 10% or greater improvement with Streamly over Lists, use:

$ bin/bench-runner --package-name streaming-benchmarks --package-version 0.4.0 --compare --diff-cutoff-percent 10 --diff-style absolute --targets "StreamlyPure List"

After running once, you can add the --no-measure option to reuse the same benchmark measurements for different reports.

Streamly vs Haskell Lists

Streamly, when used with the Identity monad, behaves almost the same as Haskell lists (in the base package). See this for more details.

The following table compares the timings of several operations in streamly and lists, using a one-million element stream. For brevity, only operations where the performance of the two packages differs by more than 10% are shown. The last column shows how many times slower lists are compared to streamly.

| Benchmark | streamly (μs) | list (μs) | list/streamly |
| --- | ---: | ---: | ---: |
| drop-map x 4 | 375.09 | 76925.32 | 205.08 |
| filter-drop x 4 | 382.03 | 54848.54 | 143.57 |
| drop-scan x 4 | 795.81 | 76716.79 | 96.40 |
| filter-scan x 4 | 795.60 | 44559.15 | 56.01 |
| scan-map x 4 | 1192.19 | 48838.22 | 40.97 |
| take-map x 4 | 1500.99 | 60126.58 | 40.06 |
| filter-take x 4 | 1502.01 | 48766.87 | 32.47 |
| take-drop x 4 | 1499.62 | 41720.03 | 27.82 |
| take-scan x 4 | 1874.94 | 51283.30 | 27.35 |
| drop-one x 4 | 375.33 | 8993.87 | 23.96 |
| dropWhile-false x 4 | 374.61 | 8957.79 | 23.91 |
| dropWhile-false | 374.83 | 8670.05 | 23.13 |
| drop-one | 390.77 | 8681.85 | 22.22 |
| dropWhile-true | 571.60 | 12237.48 | 21.41 |
| drop-all | 562.94 | 8262.38 | 14.68 |
| take-all | 624.83 | 564.34 | 1/1.11 |
| scan x 4 | 795.83 | 385.85 | 1/2.06 |
| appendR[10000] | 360.75 | 126.95 | 1/2.84 |
| concatMap | 34957.71 | 1124.85 | 1/31.08 |

This report can be generated by running bench-runner with the command shown above.

Streamly vs Streaming

The following table compares the timing of several operations for streamly with streaming using a million element stream.

| Benchmark | streamly (μs) | streaming (μs) | streaming/streamly |
| --- | ---: | ---: | ---: |
| appendR[10000] | 326.56 | 1301176.69 | 3984.54 |
| mapM x 4 | 374.42 | 223591.08 | 597.17 |
| filter-map x 4 | 381.07 | 194903.88 | 511.47 |
| filter-scan x 4 | 795.66 | 233527.90 | 293.50 |
| filter-all-in x 4 | 375.40 | 102629.64 | 273.38 |
| filter-drop x 4 | 387.15 | 99096.98 | 255.96 |
| map x 4 | 386.49 | 94944.87 | 245.66 |
| drop-map x 4 | 375.62 | 89669.37 | 238.73 |
| scan x 4 | 797.00 | 166332.40 | 208.70 |
| scan-map x 4 | 1194.30 | 238804.48 | 199.95 |
| filter-even x 4 | 396.37 | 77865.47 | 196.45 |
| drop-scan x 4 | 796.98 | 156063.52 | 195.82 |
| takeWhile-true x 4 | 562.49 | 90183.53 | 160.33 |
| scan | 375.24 | 47520.57 | 126.64 |
| filter-take x 4 | 1498.55 | 189635.34 | 126.55 |
| mapM | 388.10 | 46689.61 | 120.30 |
| take-map x 4 | 1500.71 | 178954.50 | 119.25 |
| zip | 656.65 | 66689.73 | 101.56 |
| take-scan x 4 | 2380.35 | 241675.75 | 101.53 |
| filter-all-in | 375.97 | 33590.14 | 89.34 |
| map | 375.02 | 33081.13 | 88.21 |
| filter-even | 393.26 | 30458.46 | 77.45 |
| filter-all-out | 382.87 | 26826.21 | 70.07 |
| take-all x 4 | 1499.71 | 101332.53 | 67.57 |
| take-drop x 4 | 1498.53 | 98281.99 | 65.59 |
| takeWhile-true | 562.62 | 31863.25 | 56.63 |
| foldl' | 388.22 | 18503.15 | 47.66 |
| drop-all | 562.08 | 25200.32 | 44.83 |
| take-all | 768.65 | 33247.97 | 43.26 |
| dropWhile-true | 564.87 | 24431.50 | 43.25 |
| last | 385.53 | 15240.85 | 39.53 |
| dropWhile-false | 374.83 | 14566.70 | 38.86 |
| drop-one | 374.80 | 14565.01 | 38.86 |
| drop-one x 4 | 375.88 | 14448.67 | 38.44 |
| dropWhile-false x 4 | 390.12 | 14619.42 | 37.47 |
| drain | 375.06 | 13702.29 | 36.53 |
| toList | 117708.83 | 201444.81 | 1.71 |

This report can be generated by running bench-runner with the command shown earlier, changing the --targets argument to select the corresponding pair of libraries.

Streamly vs Pipes

The following table compares the timing of several operations for streamly with pipes using a million element stream.

| Benchmark | streamly (μs) | pipes (μs) | pipes/streamly |
| --- | ---: | ---: | ---: |
| appendR[10000] | 327.90 | 901135.92 | 2748.21 |
| mapM x 4 | 375.20 | 407184.39 | 1085.23 |
| filter-map x 4 | 381.52 | 366759.70 | 961.31 |
| drop-map x 4 | 375.48 | 281296.82 | 749.16 |
| filter-all-in x 4 | 375.60 | 222331.68 | 591.93 |
| filter-drop x 4 | 387.44 | 222830.71 | 575.14 |
| drop-scan x 4 | 797.23 | 336737.89 | 422.39 |
| filter-even x 4 | 389.87 | 152688.91 | 391.64 |
| filter-scan x 4 | 797.38 | 309733.91 | 388.44 |
| drop-one x 4 | 375.48 | 139851.13 | 372.46 |
| map x 4 | 386.56 | 136289.32 | 352.57 |
| dropWhile-false x 4 | 390.72 | 137395.44 | 351.65 |
| scan-map x 4 | 1194.38 | 381286.88 | 319.23 |
| takeWhile-true x 4 | 562.86 | 165143.23 | 293.40 |
| scan x 4 | 796.68 | 222986.17 | 279.90 |
| mapM | 388.19 | 95576.97 | 246.21 |
| filter-all-in | 375.21 | 71297.42 | 190.02 |
| take-map x 4 | 1502.76 | 275887.24 | 183.59 |
| scan | 374.81 | 65549.13 | 174.89 |
| take-drop x 4 | 1503.43 | 256448.45 | 170.58 |
| filter-even | 390.29 | 66183.72 | 169.57 |
| filter-all-out | 376.99 | 59074.54 | 156.70 |
| drop-one | 375.19 | 58395.24 | 155.64 |
| dropWhile-false | 375.35 | 58223.03 | 155.12 |
| map | 375.05 | 57736.43 | 153.94 |
| filter-take x 4 | 1503.00 | 227925.71 | 151.65 |
| take-scan x 4 | 2455.91 | 354284.33 | 144.26 |
| zip | 657.07 | 86011.93 | 130.90 |
| takeWhile-true | 564.14 | 61390.21 | 108.82 |
| take-all x 4 | 1502.32 | 139730.70 | 93.01 |
| dropWhile-true | 564.03 | 49227.19 | 87.28 |
| drop-all | 562.05 | 46505.37 | 82.74 |
| take-all | 824.09 | 60511.34 | 73.43 |
| drain | 375.29 | 26390.59 | 70.32 |
| foldl' | 397.34 | 19064.05 | 47.98 |
| last | 387.11 | 17364.44 | 44.86 |
| toList | 117257.09 | 207405.94 | 1.77 |

This report can be generated by running bench-runner with the command shown earlier, changing the --targets argument to select the corresponding pair of libraries.

Streamly vs Conduit

The following table compares the timing of several operations for streamly with conduit using a million element stream.

| Benchmark | streamly (μs) | conduit (μs) | conduit/streamly |
| --- | ---: | ---: | ---: |
| mapM x 4 | 375.46 | 297002.31 | 791.04 |
| filter-map x 4 | 380.79 | 267543.81 | 702.60 |
| drop-map x 4 | 375.66 | 232307.84 | 618.39 |
| filter-drop x 4 | 386.05 | 235029.15 | 608.81 |
| filter-scan x 4 | 796.56 | 306556.67 | 384.85 |
| drop-scan x 4 | 797.19 | 300789.06 | 377.31 |
| zip | 657.29 | 210069.05 | 319.60 |
| filter-all-in x 4 | 375.24 | 118506.68 | 315.82 |
| scan-map x 4 | 1194.67 | 360671.18 | 301.90 |
| map x 4 | 387.00 | 113497.14 | 293.27 |
| drop-one x 4 | 375.49 | 101842.95 | 271.23 |
| dropWhile-false x 4 | 389.44 | 102051.22 | 262.04 |
| scan x 4 | 796.72 | 190479.35 | 239.08 |
| takeWhile-true x 4 | 564.58 | 114459.57 | 202.73 |
| filter-even x 4 | 391.76 | 72369.30 | 184.73 |
| filter-take x 4 | 1502.04 | 267921.27 | 178.37 |
| take-map x 4 | 1502.88 | 238875.95 | 158.95 |
| take-drop x 4 | 1500.34 | 232606.19 | 155.04 |
| take-scan x 4 | 2443.83 | 309738.86 | 126.74 |
| mapM | 389.15 | 41897.48 | 107.66 |
| scan | 375.40 | 38137.85 | 101.59 |
| take-all x 4 | 1502.32 | 110682.74 | 73.67 |
| filter-all-in | 375.31 | 26024.21 | 69.34 |
| dropWhile-false | 375.10 | 25307.13 | 67.47 |
| map | 375.18 | 23088.09 | 61.54 |
| drop-one | 375.43 | 22020.65 | 58.65 |
| filter-even | 392.28 | 21504.28 | 54.82 |
| takeWhile-true | 562.79 | 29012.68 | 51.55 |
| filter-all-out | 378.76 | 15736.05 | 41.55 |
| drop-all | 562.89 | 19916.48 | 35.38 |
| foldl' | 388.88 | 12499.03 | 32.14 |
| dropWhile-true | 564.43 | 17983.35 | 31.86 |
| take-all | 784.67 | 24425.36 | 31.13 |
| last | 385.75 | 10974.84 | 28.45 |
| drain | 375.18 | 4272.15 | 11.39 |
| appendR[10000] | 326.93 | 1207.88 | 3.69 |
| toList | 116441.26 | 199138.09 | 1.71 |

This report can be generated by running bench-runner with the command shown earlier, changing the --targets argument to select the corresponding pair of libraries.

Stack and heap utilization

bench-runner also generates a maxrss comparison report, displaying the maximum resident memory (maxrss) used by each benchmark.

Comparing other libraries

This package supports many streaming libraries: bytestring, text, vector, etc. You can run the benchmarks directly or use bench-runner to create comparison reports. Check out the targets in the cabal file.

Adding New Libraries

It is trivial to add a new package. This is what a benchmark file for a streaming package looks like. Pull requests are welcome; we will be happy to help, just join the gitter chat and ask!