# Leek

<h3 align="center">
  <br>
  <a href="#"><img src="https://raw.githubusercontent.com/kodless/leek/master/doc/static/img/logo.png" alt="Leek Celery Monitoring Tool" height="200" width="200"></a>
  <br>
  <span>Celery Tasks Monitoring Tool</span>
  <br>
  <span>Documentation: https://tryleek.com</span>
  <br>
</h3>

## What is Leek?
Leek is a Celery tasks monitoring tool. The main difference between Leek and other monitoring tools is that Leek can connect to and monitor many brokers from a single container, whereas other tools can monitor only one broker at a time.

Leek also supports environment branching, multiple applications, Google SSO, charts, issue monitoring, advanced filtering and search, indexing and persistence, and Slack notifications, and it provides a polished UI for a better user experience.

Leek was created to remediate the issues found in other Celery monitoring tools and to provide reliable results and useful features that make it easier to monitor your Celery cluster and to find and respond to issues quickly.
## What Leek is not
Leek is not a Celery task/worker control tool: you cannot use Leek to revoke/terminate/start tasks, restart your worker fleet, or manage your brokers. However, control features may be supported in future releases.

Leek is not a package that can be installed/imported; it is a full-stack application published as a Docker image.
## Features
Unlike many alternatives, Leek was built to fix the issues found in other tools and to offer features they do not provide:
- **Google SSO** - you can sign in to Leek with GSuite accounts for organizations and standard Gmail accounts for individuals.
- **Multi-broker support** - other monitoring tools can connect to only one broker at a time, which forces you to deploy many instances to monitor them all. Thanks to its agent, a single Leek instance can monitor tasks from multiple brokers (see the event-settings sketch after this list).
- **Multi-ENV support** - when connecting a Leek agent to a broker, you can specify an environment tag for that broker; every event sent from that broker is tagged with that environment name. This lets you split Celery events into qa, stg, and prod subsets and later filter tasks by environment name.
- **Enhanced storage** - unlike alternatives that store events in volatile RAM, Celery events are indexed into Elasticsearch for persistence and fast retrieval/search.
- **Beautiful UI** - unlike alternatives that are command-line tools or have a dated UI, Leek offers a great user experience thanks to its beautiful, well-designed UI.
- **Notifications** - you can define notification rules that trigger a Slack notification for critical events. Trigger rules can match against task state, task name inclusions/exclusions, environment name, and a runtime upper bound.
- **Issue monitoring** - Leek can also monitor issues by aggregating failed tasks by exception name; for each exception it calculates occurrences plus recovered, pending, failed, and critical counts.
- **Charts** - Leek generates multiple charts that give you an overview of the application state, including: task state distribution, task queue distribution, top 5 executed tasks, top 5 slowest tasks, task executions over time, task queuing over time, task failures over time, and more.
- **Filter by anything** - unlike alternatives with limited filter support, Leek provides a wide range of filters.
- **Task control** - for now Leek can only retry tasks; more task/worker control features may be introduced in the future.
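Leek's agent consumes standard Celery task events from the broker's `celeryev` exchange (as seen in the demo configuration below), so your Celery apps must emit those events for anything to show up. The snippet below is a minimal sketch of the standard Celery settings involved, not Leek-specific code; the app name, broker URL, and task are placeholders.

```python
# tasks.py - a minimal, hypothetical Celery app whose events an external
# monitor (such as the Leek agent) can consume from the broker.
from celery import Celery

app = Celery("demo", broker="pyamqp://guest@localhost//")

# Workers publish task lifecycle events (started/succeeded/failed/...);
# equivalent to starting the worker with the -E / --events flag.
app.conf.worker_send_task_events = True

# Clients publish a "task-sent" event when they enqueue a task, so monitors
# can also track tasks that were queued but never picked up.
app.conf.task_send_sent_event = True


@app.task
def add(x, y):
    return x + y
```

With these settings, any event-based monitor subscribed to the `celeryev` exchange will receive the task lifecycle events.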
## Running a local demo
To experiment with Leek, you can run one of these demo docker-compose files:

For a RabbitMQ broker:

```bash
curl -sSL https://raw.githubusercontent.com/kodless/leek/master/demo/docker-compose-rmq-no-auth.yml > docker-compose.yml
docker-compose up
```

For a Redis broker:

```bash
curl -sSL https://raw.githubusercontent.com/kodless/leek/master/demo/docker-compose-redis-no-auth.yml > docker-compose.yml
docker-compose up
```
Each demo includes five services:

- Leek main application
- A RabbitMQ or Redis broker
- An Elasticsearch node
- A demo Celery client (publisher)
- Demo Celery workers (consumers)

After running the services with `docker-compose up`:

- Wait for the services to start and navigate to http://0.0.0.0:8000.
- Create an application with the same name as in `LEEK_AGENT_SUBSCRIPTIONS`, which is `leek`.
- Enjoy the demo.

The RabbitMQ demo compose file looks like this:
```yaml
version: "2.4"

services:
  # Main app
  app:
    image: kodhive/leek
    environment:
      # General
      - LEEK_API_LOG_LEVEL=WARNING
      - LEEK_AGENT_LOG_LEVEL=INFO
      # Components
      - LEEK_ENABLE_API=true
      - LEEK_ENABLE_AGENT=true
      - LEEK_ENABLE_WEB=true
      # URLs
      - LEEK_API_URL=http://0.0.0.0:5000
      - LEEK_WEB_URL=http://0.0.0.0:8000
      - LEEK_ES_URL=http://es01:9200
      # Authentication
      - LEEK_API_ENABLE_AUTH=false
      # Subscriptions
      - |
        LEEK_AGENT_SUBSCRIPTIONS=
        [
          {
            "broker": "amqp://admin:admin@mq//",
            "broker_management_url": "http://mq:15672",
            "backend": null,
            "exchange": "celeryev",
            "queue": "leek.fanout",
            "routing_key": "#",
            "org_name": "mono",
            "app_name": "leek",
            "app_env": "prod",
            "prefetch_count": 1000,
            "concurrency_pool_size": 2,
            "batch_max_size_in_mb": 1,
            "batch_max_number_of_messages": 1000,
            "batch_max_window_in_seconds": 5
          }
        ]
      - LEEK_AGENT_API_SECRET=not-secret
    ports:
      - 5000:5000
      - 8000:8000
    depends_on:
      mq:
        condition: service_healthy

  # Just for local demo!! (Test worker)
  worker:
    image: kodhive/leek-demo
    environment:
      - BROKER_URL=pyamqp://admin:admin@mq:5672
    depends_on:
      mq:
        condition: service_healthy

  # Just for local demo!! (Test client)
  publisher:
    image: kodhive/leek-demo
    environment:
      - BROKER_URL=pyamqp://admin:admin@mq:5672
    command: >
      bash -c "python3 publisher.py"
    depends_on:
      mq:
        condition: service_healthy

  # Just for local demo!! (Test broker)
  mq:
    image: rabbitmq:3.8.9-management-alpine
    environment:
      - RABBITMQ_DEFAULT_USER=admin
      - RABBITMQ_DEFAULT_PASS=admin
      - "RABBITMQ_SERVER_ADDITIONAL_ERL_ARGS=-rabbit log [{console,[{level,error}]}]"
    ports:
      - 15672:15672
      - 5672:5672
    healthcheck:
      test: [ "CMD", "nc", "-z", "localhost", "5672" ]
      interval: 2s
      timeout: 4s
      retries: 20

  # Just for local development!! (Test index db)
  es01:
    image: elasticsearch:7.10.1
    container_name: es01
    environment:
      - node.name=es01
      - cluster.name=es-docker-cluster
      - cluster.initial_master_nodes=es01
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    command: ["elasticsearch", "-Elogger.level=ERROR"]
    healthcheck:
      test: ["CMD-SHELL", "curl --silent --fail localhost:9200/_cluster/health || exit 1"]
      interval: 30s
      timeout: 30s
      retries: 3
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65535
        hard: 65535
    ports:
      - 9200:9200
```
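If you want to feed your own tasks into the demo, in addition to the bundled publisher and worker, a rough sketch could look like the following. It assumes the compose file above is running (RabbitMQ exposed on localhost:5672 with admin/admin); the module and task names are hypothetical.

```python
# send_demo_task.py - a hypothetical Celery app wired to the demo RabbitMQ
# broker from the compose file above.
from celery import Celery

app = Celery("demo", broker="pyamqp://admin:admin@localhost:5672//")
app.conf.worker_send_task_events = True  # worker lifecycle events
app.conf.task_send_sent_event = True     # client "task-sent" events


@app.task
def ping():
    return "pong"


if __name__ == "__main__":
    # Enqueue a task; with a worker running for this module, the task's events
    # should reach the agent subscribed to this broker.
    ping.delay()
```

Start a worker for it with `celery -A send_demo_task worker --loglevel=info`; its events should then appear in the Leek UI under the `leek` application and `prod` environment defined in `LEEK_AGENT_SUBSCRIPTIONS`.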