Awesome interprocess communication for lua_nginx_module and OpenResty: send named alerts with string data between Nginx worker processes. Asynchronous, nonblocking, non-locking, and fast!
## History

I wrote this as a quick hack to separate the interprocess code out of Nchan, mostly on a flight back from Nginx Conf 2016. The completion of this module was generously sponsored by ring.com. Thanks, guys!
## API

```lua
local ipc = require "ngx.ipc"
```
### ipc.send

Send an alert to a specific worker process.

```lua
ipc.send(destination_worker_pid, alert_name, alert_data)
```
Returns:
- `true` on success
- `nil, error_msg` if `alert_name` length is > 254, `alert_data` length is > 4GB, or `destination_worker_pid` is not a valid worker process
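For instance, a minimal sketch of sending an alert with error handling; here `target_pid` is assumed to be a worker pid you already know (for example, a value of `ipc.sender` saved from an earlier alert), and `"ping"` is just an illustrative alert name:

```lua
local ipc = require "ngx.ipc"

-- target_pid is an assumed, externally obtained worker pid (e.g. a saved
-- ipc.sender value); "ping" is just an illustrative alert name
local ok, err = ipc.send(target_pid, "ping", "hello from " .. ngx.worker.pid())
if not ok then
  ngx.log(ngx.ERR, "ipc.send failed: ", err)
end
```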
### ipc.broadcast

Broadcast an alert to all workers (including the sender).

```lua
ipc.broadcast(alert_name, alert_data)
```
Returns:
- `true` on success
- `nil, error_msg` if `alert_name` length is > 254 or `alert_data` length is > 4GB
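For example, a minimal sketch of broadcasting a hypothetical `flush-cache` alert and logging any failure:

```lua
local ipc = require "ngx.ipc"

-- "flush-cache" is just an illustrative alert name; the payload here is the
-- current time as a string
local ok, err = ipc.broadcast("flush-cache", tostring(ngx.now()))
if not ok then
  ngx.log(ngx.ERR, "ipc.broadcast failed: ", err)
end
```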
### ipc.receive

Register one or several alert handlers.

Note that `receive` cannot be used in the `init_by_lua*` context. During startup, use `init_worker_by_lua*`.
Register an alert handler:

```lua
ipc.receive(alert_name, function(data)
  --ipc receiver function for all alerts with string name alert_name
end)
```
Returns `true`.
Several alert names can be registered at once by passing a table:

```lua
ipc.receive({
  hello = function(data)
    --got a hello
  end,
  goodbye = function(data)
    --got a goodbye
  end
})
```
Deleting an alert handler:

```lua
ipc.receive(ipc_alert_name, nil)
```

Alerts received without a handler are discarded.
### ipc.reply

Reply to the worker that sent an alert. Works only inside an alert receiver handler function.

```lua
ipc.receive("hello", function(data)
  ipc.reply("hello-response", "hi, you said " .. data)
end)
```

Returns `true`. Raises an error if used outside of an `ipc.receive` handler.
### ipc.sender

While an alert is being received, `ipc.sender` contains the sending worker's process id. At all other times, it is `nil`.

```lua
ipc.receive("hello", function(data)
  if ipc.sender == ngx.worker.pid() then
    --just said hello to myself
  end
end)
```
## Example

nginx.conf:

```nginx
http {
  init_worker_by_lua_block {
    local ipc = require "ngx.ipc"
    ipc.receive("hello", function(data)
      ngx.log(ngx.ALERT, "sender " .. ipc.sender .. " says " .. data)
      ipc.reply("reply", "hello to you too. you said " .. data)
    end)
    ipc.receive("reply", function(data)
      ngx.log(ngx.ALERT, tostring(ipc.sender) .. " replied " .. data)
    end)
  }

  server {
    listen 80;

    location ~ /send/(\d+)/(.*)$ {
      set $dst_pid $1;
      set $data $2;
      content_by_lua_block {
        local ipc = require "ngx.ipc"
        local ok, err = ipc.send(ngx.var.dst_pid, "hello", ngx.var.data)
        if ok then
          ngx.say("Sent alert to pid " .. ngx.var.dst_pid)
        else
          ngx.status = 500
          ngx.say(err)
        end
      }
    }

    location ~ /broadcast/(.*)$ {
      set $data $1;
      content_by_lua_block {
        local ipc = require "ngx.ipc"
        ipc.broadcast("hello", ngx.var.data)
      }
    }
  }
}
```
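One way to exercise this config without knowing a worker pid up front (my addition, not part of the original example) is to have each worker broadcast a greeting shortly after startup, from the same `init_worker_by_lua_block`:

```lua
-- hypothetical addition to the init_worker_by_lua_block above: each worker
-- announces itself about a second after startup, which exercises both the
-- "hello" handler and the "reply" handler registered in the example
local ipc = require "ngx.ipc"
local ok, err = ngx.timer.at(1, function(premature)
  if premature then return end
  local bok, berr = ipc.broadcast("hello", "worker " .. ngx.worker.pid() .. " is up")
  if not bok then
    ngx.log(ngx.ERR, "startup broadcast failed: ", berr)
  end
end)
if not ok then
  ngx.log(ngx.ERR, "failed to schedule startup broadcast: ", err)
end
```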
## How it works

IPC alerts are split into 4KB packets and delivered to workers via Unix pipes. On the receiving end, a persistent timer started with `ngx.timer.at` hangs around waiting to be manually triggered by the reading IPC event handler, and is then re-added to wait for the next alert. A simple hack in concept, but a bit convoluted in implementation.
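Purely for illustration (the actual work happens in the module's C code, not in Lua), the self-re-adding timer pattern described above looks roughly like this; in plain Lua such a timer could only wait out its delay, whereas the module wakes it as soon as alert data arrives on the pipe:

```lua
-- illustration only, not the module's real implementation: a timer handler
-- that re-adds itself after each run, so one "persistent" handler stays
-- alive for the worker's whole lifetime
local function ipc_wait(premature)
  if premature then return end          -- worker is shutting down
  -- (in the module: read pending packets here and run registered handlers)
  local ok, err = ngx.timer.at(3600, ipc_wait)  -- re-add to wait for the next alert
  if not ok then
    ngx.log(ngx.ERR, "failed to re-add ipc timer: ", err)
  end
end
ngx.timer.at(3600, ipc_wait)
```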
## Speed

It's pretty fast. On an i5-2500K (2 cores, 4 threads) running Nginx with the Lua module built with LuaJIT, here are the results of my benchmarks:

- 5 workers, 10-byte alerts: 220K alerts/sec
- 5 workers, 10KB alerts: 110K alerts/sec
- 20 workers, 10-byte alerts: 220K alerts/sec
- 20 workers, 10KB alerts: 33K alerts/sec