cronlock


Install

On most Linux & BSD machines, cronlock installs simply by downloading it & making it executable. Here's the one-liner:

sudo curl -q -L https://raw.github.com/kvz/cronlock/master/cronlock -o /usr/bin/cronlock && sudo chmod +x $_

With Redis present on localhost, cronlock should now work in its basic form. Let's test it by having it execute a simple pwd:

CRONLOCK_HOST=localhost cronlock pwd

If this returns the current directory, we're good to go. More examples below.
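If nothing comes back, first verify that Redis is actually reachable. A minimal connectivity check, assuming the redis-cli client is installed:

redis-cli -h localhost ping # expected reply: PONG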

Introduction

cronlock uses a central Redis server to globally lock cronjobs across a distributed system. This can be useful if you have 30 webservers that you deploy crontabs to (such as for mailing your customers), but you don't want 30 instances of the same cronjob spawned.

Of course you could also deploy your cronjobs to just 1 box, but in volatile environments such as EC2 it can be helpful not to rely on 1 'throw away machine' for your scheduled tasks, and to have 1 deploy-script for all your workers.

Another common problem that cronlock solves is overlap on a single server/cronjob. Developers often underestimate how long a job will run. This can happen because the job waits on something, behaves differently under high load/volume, or enters an endless loop.

In these cases you don't want the job to be fired again at the next cron interval, making your problem twice as bad. A few intervals later, ps auxf is littered with overlapping cronjobs, server load is high, and eventually the machine crashes.

By setting locks, cronlock can also prevent this overlap in longer-than-expected-running cronjobs.

Design goals

Requirements

Options

Using the CRONLOCK_CONFIG file or by exporting them in your environment, you can set these variables to change the behavior of cronlock:
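For example, a quick way to try a couple of them (all of the variables below appear elsewhere in this document; the values are illustrative) is to export them before invoking cronlock:

export CRONLOCK_HOST="redis.mydomain.com" # Redis server that holds the locks
export CRONLOCK_PREFIX="mycompany.cronlocks." # namespace for the lock keys
cronlock /var/www/mail_customers.sh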

Redis Cluster Support

Cronlock has support for Redis Cluster (http://redis.io/topics/cluster-spec), introduced in Redis 3.0.

Cronlock acts as a relatively "dumb" cluster client: it reacts to MOVED and ASK redirections and retries the request against the node given in the response, but it does not attempt to record the slot-to-node mapping for future use.

Cronlock supports the configuration of only one CRONLOCK_HOST and CRONLOCK_PORT. Cronlock will always connect to the configured host and port and issue the initial Redis command with the calculated MD5 key. If the Redis node returns ASK or MOVED, cronlock disconnects and connects to the host and port given in the ASK or MOVED response. The new Redis host is used for the remaining duration of the cronlock run (or until another ASK or MOVED redirection is returned); cronlock will connect to the originally configured CRONLOCK_HOST and CRONLOCK_PORT again on the next execution.
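For illustration only (cronlock speaks the Redis protocol itself; this sketch merely uses redis-cli to show what following a redirection looks like), a MOVED/ASK response carries the target node as its last word:

# sketch: follow one MOVED/ASK redirection by hand
reply="$(redis-cli -h "$CRONLOCK_HOST" -p "$CRONLOCK_PORT" GET "$key")"
if [[ "$reply" == *MOVED* || "$reply" == *ASK* ]]; then
  target="${reply##* }"         # e.g. "MOVED 3999 127.0.0.1:6381" -> "127.0.0.1:6381"
  CRONLOCK_HOST="${target%:*}"  # "127.0.0.1"
  CRONLOCK_PORT="${target##*:}" # "6381"
  redis-cli -h "$CRONLOCK_HOST" -p "$CRONLOCK_PORT" GET "$key"
fi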

Given the support of only one CRONLOCK_HOST and CRONLOCK_PORT, it is recommended that each server running cronlock is configured to initially connect to a different master in the Redis Cluster. That way, if one Redis server goes down, the cronlock instances configured to connect to the other master nodes will continue to operate.

This is easy to achieve if the Redis Cluster runs on the same servers as cronlock: set CRONLOCK_HOST to 127.0.0.1 and each copy of cronlock will initially connect to its local master node, before reconnecting to other live Redis nodes if needed.

Aside from configuring appropriate values for CRONLOCK_HOST and CRONLOCK_PORT on the systems running cronlock, no additional configuration is required for Redis Cluster support.
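For instance, when each cronlock server also hosts a Redis Cluster master, a config file along these lines (6379 being the default Redis port) makes every instance start at its local master:

cat << EOF > /etc/cronlock.conf
CRONLOCK_HOST="127.0.0.1"
CRONLOCK_PORT=6379
EOF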

Examples

Single box

crontab -e
* * * * * cronlock ls -al

In this configuration, ls -al will be launched every minute. If the previous ls -al has not finished yet, another one is not started. This works on 1 server, as the default CRONLOCK_HOST of localhost is used.

In this setup, cronlock works much like Tim Kay's solo, except cronlock requires Redis, so for a single-server setup I recommend Tim Kay's solution instead.

Distributed

echo '0 8 * * * CRONLOCK_HOST=redis.mydomain.com cronlock /var/www/mail_customers.sh' | crontab

In this configuration, a central Redis server is used to track the lock for /var/www/mail_customers.sh. So across a cluster of 100 servers, just one instance of /var/www/mail_customers.sh is run every morning. No more, no less.

As long as your Redis server and at least 1 volatile worker are alive, this will happen.

Distributed using a config file

To avoid messy crontabs, you can use a config file for shared config instead. Unless CRONLOCK_CONFIG is set, cronlock will look in ./cronlock.conf, then in /etc/cronlock.conf.

Example:

cat << EOF > /etc/cronlock.conf
CRONLOCK_HOST="redis.mydomain.com"
CRONLOCK_GRACE=50
CRONLOCK_PREFIX="mycompany.cronlocks."
CRONLOCK_NTPDATE="yes"
EOF

crontab -e
* * * * * cronlock /var/www/mail_customers.sh # will use config from /etc/cronlock.conf

Lock commands even though they have different arguments

By default cronlock uses your command and its arguments to make a unique identifier by which the global lock is acquired. However, if you want to run ls -al or ls -a, but just 1 instance of either, you'll want to provide your own key:

crontab -e
# One of two will be executed because they share the same KEY
* * * * * CRONLOCK_KEY="ls" cronlock ls -al
* * * * * CRONLOCK_KEY="ls" cronlock ls -a
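Conceptually, the identifier is an MD5 hash of the command line, prefixed with CRONLOCK_PREFIX (the Redis Cluster section above also mentions the calculated MD5 key), so supplying the same CRONLOCK_KEY gives both entries the same lock. A rough sketch, not cronlock's exact scheme:

# hypothetical illustration of how a lock key could be derived
cmd="ls -al"
key="${CRONLOCK_PREFIX}$(echo -n "$cmd" | md5sum | awk '{print $1}')"
echo "$key" # prints the prefixed hash used as the Redis key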

Per application

If you use the same script and Redis server for multiple applications, an unwanted lock could deny app2 its script. You could make up your own unique CRONLOCK_KEY to circumvent this, but it's probably better to use CRONLOCK_PREFIX for that:

crontab -e
* * * * * CRONLOCK_PREFIX="mylocks.app1." cronlock /var/www/mail_customers.sh
crontab -e
* * * * * CRONLOCK_PREFIX="mylocks.app2." cronlock /var/www/mail_customers.sh

Now both instances of /var/www/mail_customers.sh will run, because they have different applications in their prefixes.

Exit codes

Versioning

This project implements the Semantic Versioning guidelines.

Releases will be numbered with the following format:

<major>.<minor>.<patch>

And constructed with the following guidelines:

- Breaking backward compatibility bumps the major (and resets the minor and patch)
- New additions without breaking backward compatibility bump the minor (and reset the patch)
- Bug fixes and misc changes bump the patch

For more information on SemVer, please visit http://semver.org.

License

Copyright (c) 2013 Kevin van Zonneveld, http://kvz.io
Licensed under MIT: http://kvz.io/licenses/LICENSE-MIT