WorkQueue

A simple implementation of the hungry consumer work scheduling model.

Given a queue of work to be processed, we create a pool of workers. Each worker requests the next item from the queue, processes it, reports the result back, and then requests the next item.

The intent is that no central control point preassigns work; instead, each worker dynamically pulls its next item when it becomes free.
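To make the pull model concrete, here is a minimal sketch of the hungry-consumer pattern using raw Elixir processes. This is illustrative only and is not WorkQueue's actual implementation; the module and message names (`HungryConsumers`, `:ready`, `:item`, `:result`, `:done`) are invented for the example.

```elixir
defmodule HungryConsumers do
  # Spawn `worker_count` workers, then act as the queue holder,
  # handing out items only when a worker asks for one.
  def run(items, worker_fun, worker_count) do
    parent = self()

    for _ <- 1..worker_count do
      spawn_link(fn -> worker_loop(parent, worker_fun) end)
    end

    dispatch(items, [], worker_count)
  end

  # Hand out items on request; once the queue is empty, tell each
  # worker that asks to stop, and return when all have stopped.
  defp dispatch(items, results, live_workers) do
    receive do
      {:ready, pid} when items != [] ->
        [item | rest] = items
        send(pid, {:item, item})
        dispatch(rest, results, live_workers)

      {:ready, pid} ->
        send(pid, :done)

        if live_workers == 1,
          do: results,
          else: dispatch([], results, live_workers - 1)

      {:result, item, value} ->
        dispatch(items, [{item, value} | results], live_workers)
    end
  end

  # Each worker asks for work, processes it, reports the result,
  # and immediately asks again — the "hungry" part of the pattern.
  defp worker_loop(parent, worker_fun) do
    send(parent, {:ready, self()})

    receive do
      {:item, item} ->
        send(parent, {:result, item, worker_fun.(item)})
        worker_loop(parent, worker_fun)

      :done ->
        :ok
    end
  end
end
```

Calling `HungryConsumers.run([1, 2, 3], fn x -> x * 2 end, 2)` returns `{input, output}` pairs in whatever order the workers finish them, mirroring the behaviour described below.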

This has a couple of advantages. First, it keeps CPU utilization high: workers are always busy as long as there is work in the queue.

Second, it offers a degree of resilience, both against CPU-hogging work items (a slow item delays only its own worker, while the rest keep pulling from the queue) and against worker failure.

Simple Example

results = WorkQueue.process(
  fn val, _ -> {:ok, val * 2} end,   # worker function
  [ 1, 2, 3 ]                        # work items to process
)

assert length(results) == 3
for {input, output} <- results, do: assert(output == input * 2)

This code will allocate a number of workers (the default is two-thirds of the available processing units on the node). Each worker then runs the given function for as long as there are items to process. The results are collected (in the order the workers return them) and returned as a list of `{input, output}` tuples.

The API

results = WorkQueue.process(work_processor, item_source, options \\ [])
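A fuller call might look like the sketch below. Note that the option name `worker_count` is an assumption used for illustration; check the library's documentation for the options it actually accepts.

```elixir
results = WorkQueue.process(
  fn item, _state -> {:ok, item * item} end,   # worker function
  [1, 2, 3, 4, 5],                             # work items to process
  worker_count: 4                              # hypothetical option: override the default pool size
)
```

The worker function receives the item and a state argument and returns `{:ok, result}`, as in the simple example above.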