
An awesome Least Frequently Used (LFU) cache.

/strat contains an approximate implementation of the Clock algorithm, which increments an entry's frequency on cache hits and decrements the frequency of entries in the 'clock' that a newer entry seeks admission into; this helps avoid starvation. Another algorithm, O1, implements a constant-time LFU-LRU hybrid cache; the flip side is that it consumes extra memory compared to Clock, though it is much simpler to reason about.
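
For intuition, here is a minimal sketch of a clock-style LFU in plain JavaScript. It is not the code in /strat, and the name ClockSketch is made up for illustration: each slot carries a small frequency counter that is bumped on hits, and on admission the clock hand sweeps the ring, aging (decrementing) the entries it passes until it finds a free or zero-frequency slot to evict.

  // Illustrative clock-style LFU; not the implementation in /strat.
  class ClockSketch {
    constructor(capacity) {
      this.cap = capacity
      this.slots = new Array(capacity).fill(null) // each: { key, value, freq }
      this.pos = new Map() // key -> slot index
      this.hand = 0
    }

    get(key) {
      const i = this.pos.get(key)
      if (i === undefined) return undefined
      this.slots[i].freq += 1 // reward the hit
      return this.slots[i].value
    }

    put(key, value) {
      const i = this.pos.get(key)
      if (i !== undefined) {
        this.slots[i].value = value
        this.slots[i].freq += 1
        return
      }
      // sweep: a free or zero-frequency slot becomes the victim; every
      // occupied slot the hand passes over is aged, which is what keeps
      // long-resident entries from starving newer ones
      for (;;) {
        const s = this.slots[this.hand]
        if (s === null || s.freq === 0) break
        s.freq -= 1
        this.hand = (this.hand + 1) % this.cap
      }
      const victim = this.slots[this.hand]
      if (victim !== null) this.pos.delete(victim.key)
      this.slots[this.hand] = { key, value, freq: 1 }
      this.pos.set(key, this.hand)
      this.hand = (this.hand + 1) % this.cap
    }
  }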

/ds contains implementations of the underlying stores supporting the cache: a HashMap backed by the native Map implementation, and a restrictive RangeList backed by a Skip List.
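
To show the shape of a range lookup, here is a simplified sketch that maps non-overlapping (lo, hi) ranges to values and answers point queries. It uses a sorted array with binary search in place of a Skip List purely to keep the example short, and the names RangeStoreSketch, set, and get are illustrative, not the RangeList API.

  // Illustrative range store: non-overlapping (lo, hi) -> value entries,
  // kept sorted by lo; the repository's RangeList uses a Skip List instead.
  class RangeStoreSketch {
    constructor() {
      this.entries = []
    }

    set(lo, hi, value) {
      this.entries.push({ lo, hi, value })
      this.entries.sort((a, b) => a.lo - b.lo)
    }

    // point query: find the last entry with lo <= point, then confirm
    // the point actually falls inside that entry's range
    get(point) {
      let left = 0, right = this.entries.length - 1, found = -1
      while (left <= right) {
        const mid = (left + right) >> 1
        if (this.entries[mid].lo <= point) {
          found = mid
          left = mid + 1
        } else {
          right = mid - 1
        }
      }
      if (found < 0) return undefined
      const e = this.entries[found]
      return point <= e.hi ? e.value : undefined
    }
  }

  const rs = new RangeStoreSketch()
  rs.set(1, 10, "a")
  rs.set(20, 30, "b")
  rs.get(5) // "a"
  rs.get(15) // undefined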

That is, Clock.js, MultiClock.js, and O1.js instances can be backed either by HashMap for point queries (~500ms for 1M point queries) or by RangeList for range queries (~5000ms for 1M range queries; see the perf workflow).

lfu.js serves as the entrypoint to construct and interact with these LFUs.

  import { LfuCache, RangeLfu } from "@serverless-dns/lfu.js";

  const lfu = new LfuCache("L1", 10)
  lfu.put(1, "a") // 1 -> "a"
  const v = lfu.get(1) // v = "a"

  const rgcache = new RangeLfu("R1", 10)
  rgcache.put(1, 10, "a") // (1, 10) -> "a"
  const w = rgcache.get(5) // w = "a"
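
As a rough, hypothetical way to get a local feel for the point-query timing quoted above (the capacity and key distribution below are made up; the perf workflow is the authoritative benchmark):

  import { LfuCache } from "@serverless-dns/lfu.js";

  const n = 1_000_000
  const bench = new LfuCache("bench", 4096) // hypothetical capacity

  const start = Date.now()
  for (let i = 0; i < n; i++) {
    const k = i % 8192 // hypothetical key distribution
    bench.put(k, i)
    bench.get(k)
  }
  console.log(`${n} put+get pairs in ~${Date.now() - start}ms`)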

All caches are magic. Knowing their mechanism is not enough to predict their outcome.

- Avery Pennarun.