## Info

Ansible role for deploying the spark-operator. It is also available on Ansible Galaxy.
## Usage

```
ansible-galaxy install jiri_kremser.spark_operator
```

And then use it in your playbook (`example-playbook.yaml`):
```yaml
- hosts: localhost
  roles:
    - role: jiri_kremser.spark_operator
```
or with your own settings (these are the default settings):
```yaml
- hosts: localhost
  roles:
    - role: jiri_kremser.spark_operator
      namespace: myproject
      image: quay.io/radanalyticsio/spark-operator:latest-released
      crd: false
      prometheus: false
      runExampleCluster: false
```
Run the playbook:

```
ansible-playbook example-playbook.yaml
```
or with this one-liner:

```
ansible localhost -m include_role -a name=jiri_kremser.spark_operator
```
After running the playbook, the operator should be up and running, and you can continue by creating either ConfigMaps or custom resources for your Spark clusters and Spark applications. For more details, consult the spark-operator README (you can skip the operator deployment step) or check the examples.
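As a rough sketch of that next step, a minimal Spark cluster custom resource might look like the following. The `apiVersion`, kind, and field names here are assumptions based on the spark-operator project, not part of this role; check the spark-operator README for the authoritative schema, and note that custom resources require the operator to have been deployed with `crd: true`:

```yaml
# Hypothetical SparkCluster custom resource (field names are assumptions;
# consult the spark-operator documentation for the real schema).
apiVersion: radanalytics.io/v1
kind: SparkCluster
metadata:
  name: my-spark-cluster
spec:
  worker:
    # number of Spark worker pods to run
    instances: "2"
```

You would apply it with `kubectl apply -f` (or `oc apply -f` on OpenShift) in the namespace where the operator is watching.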
## Try locally

```
ansible-playbook -i tests/inventory tests/test-playbook.yml
```