# Zilla Examples
This repo contains a collection of example folders that can be used individually to demonstrate key Zilla features. If this is your first step on your journey with Zilla, we encourage you to try our Quickstart.
## Prerequisites
You will need an environment with Docker installed, or with Helm and Kubernetes. Check out our Postman collections for more ways to interact with an example.
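If you want a quick sanity check before starting, here is a minimal sketch (assuming the standard Docker, Helm, and kubectl CLIs are on your PATH) for confirming the tooling is in place:

```bash
# Docker path: confirm the Docker engine and Compose plugin respond
docker --version
docker compose version

# Kubernetes path: confirm Helm is installed and a cluster is reachable
helm version
kubectl get nodes
```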
## Getting Started
The `startup.sh` script is meant to help set up and tear down the necessary components for each of the examples. Using it is the easiest way to interact with each example.

Install and run any of the examples using the `startup.sh` script:
```bash
./startup.sh -m example.name
```
You can specify your own Kafka host and port, or the working directory where you want the examples to be downloaded. Existing example directories will not be overwritten.
```bash
./startup.sh -m -k kafka:9092 -d /tmp example.name
```
Alternatively, you can run this script the same way without cloning the repo.
```bash
wget -qO- https://raw.githubusercontent.com/aklivity/zilla-examples/main/startup.sh | sh -s -- -m example.name
```
## Usage
```text
./startup.sh --help
Usage: startup.sh [-hm][-k KAFKA_BOOTSTRAP_SERVER][-d WORKDIR][-v ZILLA_VERSION][-e EX_VERSION][--no-kafka-init][--redpanda] example.name

Operand:
    example.name          The name of the example to use                            [default: quickstart][string]

Options:
    -d | --workdir        Sets the directory used to download and run the example   [string]
    -e | --ex-version     Sets the examples version to download                      [default: latest][string]
    -h | --use-helm       Use the helm install, if available, instead of compose     [boolean]
    -k | --kafka-server   Sets the Kafka Bootstrap Server to use                     [string]
    -m | --use-main       Download the head of the main branch                       [boolean]
    -v | --zilla-version  Sets the zilla version to use                              [default: latest][string]
    --auto-teardown       Executes the teardown script immediately after setup       [boolean]
    --no-kafka-init       The script won't try to bootstrap the kafka broker         [boolean]
    --redpanda            Makes the included kafka broker and scripts use Redpanda   [boolean]
    --help                Print help                                                 [boolean]
```
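As an illustration of how the options combine, here is a hedged sketch: the flags come from the usage above, while the chosen example name and working directory are arbitrary.

```bash
# Download the http.kafka.sync example into /tmp, prefer the Helm install
# over compose if one is available, and run the teardown right after setup.
./startup.sh -h -d /tmp --auto-teardown http.kafka.sync
```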
## Examples
Name | Description |
---|---|
asyncapi.mqtt.proxy | Forwards validated MQTT publish messages and proxies subscribes to an MQTT broker |
asyncapi.mqtt.kafka.proxy | Forwards MQTT publish messages to Kafka, broadcasting to all subscribed MQTT clients |
asyncapi.http.kafka.proxy | Correlates HTTP requests and responses over separate Kafka topics |
asyncapi.sse.proxy | Proxies validated messages delivered by the SSE server |
asyncapi.sse.kafka.proxy | Streams messages published to a Kafka topic over SSE |
tcp.echo | Echoes bytes sent to the TCP server |
tcp.reflect | Echoes bytes sent to the TCP server, broadcasting to all TCP clients |
tls.echo | Echoes encrypted bytes sent to the TLS server |
tls.reflect | Echoes encrypted bytes sent to the TLS server, broadcasting to all TLS clients |
http.filesystem | Serves files from a directory on the local filesystem |
http.filesystem.config.server | Serves files from a directory on the local filesystem, getting the config from an HTTP server |
http.echo | Echoes requests sent to the HTTP server from an HTTP client |
http.echo.jwt | Echoes requests sent to the HTTP server from a JWT-authorized HTTP client |
http.proxy | Proxies requests sent to the HTTP server from an HTTP client |
http.proxy.schema.inline | Proxies requests sent to the HTTP server from an HTTP client, with schema enforcement |
http.kafka.sync | Correlates HTTP requests and responses over separate Kafka topics |
http.kafka.async | Correlates HTTP requests and responses over separate Kafka topics, asynchronously |
http.kafka.cache | Serves cached responses from a Kafka topic, detecting when they are updated |
http.kafka.oneway | Sends messages to a Kafka topic, fire-and-forget |
http.kafka.crud | Exposes a REST API with CRUD operations where a log-compacted Kafka topic acts as a table |
http.kafka.sasl.scram | Sends messages to a SASL/SCRAM-enabled Kafka cluster |
http.kafka.karapace | Validates messages while producing to and fetching from a Kafka topic |
http.redpanda.sasl.scram | Sends messages to a SASL/SCRAM-enabled Redpanda cluster |
kubernetes.prometheus.autoscale | Demos the Kubernetes Horizontal Pod Autoscaling feature based on a custom metric with Prometheus |
grpc.echo | Echoes messages sent to the gRPC server from a gRPC client |
grpc.kafka.echo | Echoes messages sent to a Kafka topic via gRPC from a gRPC client |
grpc.kafka.fanout | Streams messages published to a Kafka topic, applying conflation based on log compaction |
grpc.kafka.proxy | Correlates gRPC requests and responses over separate Kafka topics |
grpc.proxy | Proxies gRPC requests and responses sent to the gRPC server from a gRPC client |
amqp.reflect | Echoes messages published to the AMQP server, broadcasting to all receiving AMQP clients |
mqtt.kafka.broker | Forwards MQTT publish messages to Kafka, broadcasting to all subscribed MQTT clients |
mqtt.kafka.broker.jwt | Forwards MQTT publish messages to Kafka, broadcasting to all subscribed JWT-authorized MQTT clients |
quickstart | Starts endpoints for all protocols (HTTP, SSE, gRPC, MQTT) |
sse.kafka.fanout | Streams messages published to a Kafka topic, applying conflation based on log compaction |
sse.proxy.jwt | Proxies messages delivered by the SSE server, enforcing streaming security constraints |
ws.echo | Echoes messages sent to the WebSocket server |
ws.reflect | Echoes messages sent to the WebSocket server, broadcasting to all WebSocket clients |
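Once an example is running, you can exercise it with an ordinary client. As a hedged sketch for tcp.echo (the published port is an assumption; check the example's own instructions for the actual value):

```bash
# Start the tcp.echo example from the head of the main branch.
./startup.sh -m tcp.echo

# Connect and type a line; the server should echo the same bytes back.
# NOTE: port 12345 is assumed here -- use the port reported by the example.
nc localhost 12345
```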
Read the docs. Try the examples. Join the Slack community.