## Dependency and version information
<details> <summary>Click to show</summary>

Leiningen:

```clojure
[jmh-clojure "0.4.1"]
```

tools.deps:

```clojure
{jmh-clojure/jmh-clojure {:mvn/version "0.4.1"}}
```

Maven:

```xml
<dependency>
  <groupId>jmh-clojure</groupId>
  <artifactId>jmh-clojure</artifactId>
  <version>0.4.1</version>
</dependency>
```
The library is currently tested against JDK versions 8 to 18 and Clojure versions 1.7 to 1.11.
</details>

## What is it?
This library provides a data-oriented API to JMH, the Java Microbenchmark Harness.
JMH is developed by OpenJDK JVM experts and goes to great lengths to ensure accurate benchmarks. Benchmarking on the JVM is a complex beast and, by extension, JMH takes a bit of effort to learn and use properly. That being said, JMH is very robust and configurable. If you are new to JMH, I would recommend browsing the sample code and javadocs before using this library.
If you need a simpler, less strenuous tool, I would suggest looking at the popular criterium library.
## Quick start
As a simple example, let's say we want to benchmark a fn that gets the value at an index of an arbitrary indexed type. Of course, the built-in `nth` already does this, but we can't extend `nth` to existing types like `java.nio.ByteBuffer`, etc.
```clojure
(ns demo.core)

(defprotocol ValueAt
  (value-at [x idx]))

(extend-protocol ValueAt
  clojure.lang.Indexed
  (value-at [i idx]
    (.nth i idx))

  CharSequence
  (value-at [s idx]
    (.charAt s idx))

  #_...)
```
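To see the protocol in action, here is a self-contained REPL sketch. It restates the protocol (so the snippet runs on its own) and adds a hypothetical `java.nio.ByteBuffer` extension; the `ByteBuffer` case is illustrative only and not part of `demo.core`:

```clojure
(import 'java.nio.ByteBuffer)

;; Restated protocol so this snippet is self-contained; in the project it
;; lives in demo.core as shown above.
(defprotocol ValueAt
  (value-at [x idx]))

(extend-protocol ValueAt
  clojure.lang.Indexed
  (value-at [i idx]
    (.nth i idx))

  CharSequence
  (value-at [s idx]
    (.charAt s idx))

  ;; hypothetical extension: positional access via ByteBuffer.get(int)
  ByteBuffer
  (value-at [b idx]
    (.get b (int idx))))

(value-at [:a :b :c] 1)                              ;; => :b
(value-at "hello" 1)                                 ;; => \e
(value-at (ByteBuffer/wrap (byte-array [7 8 9])) 1)  ;; => 8
```

This is exactly the extensibility that `nth` cannot offer: new cases for existing Java types are added without touching them.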
Benchmarks are usually described as data and are fully separated from their definitions. The reason for this is twofold. First, decoupling is generally good design practice. Second, it allows us to easily take advantage of JMH process isolation (forking) for reliability and accuracy. More on this later.
For repeatability, we'll place the following data in a `benchmarks.edn` resource file in our project. (Note that using a file is not a requirement: we could also specify the same data in Clojure. In that case, however, the `:fn` key values would need to be quoted.)
```clojure
{:benchmarks
 [{:name :str, :fn demo.core/value-at, :args [:state/string, :state/index]}
  {:name :vec, :fn demo.core/value-at, :args [:state/vector, :state/index]}]

 :states
 {:index  {:fn (partial * 0.5), :args [:param/count]} ;; mid-point
  :string {:fn demo.utils/make-str, :args [:param/count]}
  :vector {:fn demo.utils/make-vec, :args [:param/count]}}

 :params {:count 10}}
```
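For comparison, here is how the first benchmark entry might look written as in-process Clojure data rather than edn; the symbol-valued `:fn` key must be quoted so the reader does not try to resolve it (no quoting is needed in the edn file):

```clojure
;; equivalent in-Clojure form of the first :benchmarks entry;
;; note the quote on the :fn symbol
{:name :str, :fn 'demo.core/value-at, :args [:state/string :state/index]}
```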
The `demo.utils` namespace is omitted here for brevity; it is defined here if interested.
The above data should be fairly easy to understand. It is also a limited view of what can be specified. The sample file provides a complete reference and explanation.
Now to run the benchmarks. We'll start a REPL in our project and evaluate the following. Note that we could instead use lein-jmh or one of the other supported tools to automate this entire process.
```clojure
(require '[jmh.core :as jmh]
         '[clojure.java.io :as io]
         '[clojure.edn :as edn])

(def bench-env
  (-> "benchmarks.edn" io/resource slurp edn/read-string))

(def bench-opts
  {:type :quick
   :params {:count [31 100000]}
   :profilers ["gc"]})

(jmh/run bench-env bench-opts)
;; => ({:name :str, :params {:count 31}, :score [1.44959801438209E8 "ops/s"], #_...}
;;     {:name :str, :params {:count 100000}, :score [1.45485370497829E8 "ops/s"]}
;;     {:name :vec, :params {:count 31}, :score [1.45550038851249E8 "ops/s"]}
;;     {:name :vec, :params {:count 100000}, :score [8.5783753539823E7 "ops/s"]})
```
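Since the results are plain maps, they can be post-processed with ordinary Clojure. As a small sketch using the abbreviated result data above, sorting the runs by throughput:

```clojure
;; abbreviated result maps, as returned by the run above
(def results
  [{:name :str, :params {:count 31},     :score [1.44959801438209E8 "ops/s"]}
   {:name :str, :params {:count 100000}, :score [1.45485370497829E8 "ops/s"]}
   {:name :vec, :params {:count 31},     :score [1.45550038851249E8 "ops/s"]}
   {:name :vec, :params {:count 100000}, :score [8.5783753539823E7 "ops/s"]}])

;; highest throughput first, as [benchmark-name count-param] pairs
(->> results
     (sort-by (comp first :score) >)
     (map (juxt :name (comp :count :params))))
;; => ([:vec 31] [:str 100000] [:str 31] [:vec 100000])
```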
Note: due to the way jmh-clojure works, the `*compile-path*` directory should exist and be on your classpath before benchmarking. This is automated by tools like Leiningen. For tools.deps, see here.
The `run` fn takes a benchmark environment and an optional map of options. We select the `:quick` type: an alias for some common options. We override our default `:count` parameter sequence to measure our fn against both small and large inputs. We also enable the `gc` profiler.
Notice how we have four results: one for each combination of parameter and benchmark fn. For this example, we have omitted lots of additional result map data, including the profiler information.
Note that the above results were taken from multiple runs, which is always a good practice when benchmarking.
## Alternate ways to run
Benchmarking expressions or fns manually, without the data specification, is also supported. For example, the `run-expr` macro provides an interface similar to criterium and allows benchmarking code that resides only in memory (e.g., code you are updating in a REPL) rather than on disk (loadable via `require`). However, this forgoes JMH process isolation. For more on why benchmarking this way on the JVM can be sub-optimal, see here.
## Tooling support
This library can be used directly in a bare REPL, as shown above, or standalone via tools.deps. For a more robust experience, see the `jmh-clojure-task` project. This companion library provides additional conveniences like sorting, table output, easy uberjar creation, and more. It integrates easily with tools like Leiningen.
## More information
As previously mentioned, please see the sample file for the complete benchmark environment reference. For `run` options, see the docs. Also, see the wiki for additional examples and topics.
The materials for a talk I gave at a London Clojurians online meetup are also available here. A video capture of the event can also be viewed on YouTube.
## Running the tests
```
lein test
```

Or, `lein test-all` for all supported Clojure versions.
## License
Copyright © 2017-2024 Justin Conklin
Distributed under the Eclipse Public License, the same as Clojure.