An open-source project by Conduktor.io

This project is sponsored by conduktor.io. Conduktor provides a platform that helps central teams define global security and governance controls while giving developers access to a central Kafka console and a self-serve experience. It also helps you visualize the ACLs (attached to Service Accounts) in your Apache Kafka cluster!

Kafka Security Manager

Kafka Security Manager (KSM) allows you to manage your Kafka ACLs at scale by leveraging an external source as the source of truth. Zookeeper just contains a copy of the ACLs instead of being the source.

Kafka Security Manager Diagram

There are several advantages to this approach.

Note, however, that your role is to ensure that Kafka Security Manager is never down, as it is now the custodian of your ACLs.

Parsers

CSV

The CSV parser is the default parser and also the fallback in case no other parser matches.

This is a sample CSV ACL file:

KafkaPrincipal,ResourceType,PatternType,ResourceName,Operation,PermissionType,Host
User:alice,Topic,LITERAL,foo,Read,Allow,*
User:bob,Group,PREFIXED,bar,Write,Deny,12.34.56.78
User:peter,Cluster,LITERAL,kafka-cluster,Create,Allow,*

Important Note: As of KSM 0.4, a new PatternType column has been added to match the changes introduced in Kafka 2.0. This enables KSM to manage LITERAL and PREFIXED ACLs. See #28.

YAML

The YAML parser loads ACLs from YAML files instead; to activate it, just provide files with a .yml or .yaml extension.

An example YAML permission file might be:

users:
  alice:
    topics:
      foo:
        - Read
      bar*:
        - Produce
  bob:
    groups:
      bar:
        - Write,Deny,12.34.56.78
      bob*:
        - All
    transactional_ids:
      bar-*:
        - All
  peter:
    clusters:
      kafka-cluster:
        - Create

The YAML parser automatically handles prefixed patterns: simply append a star to your resource name, as in the bar* entries above.
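
For example, the foo entry under alice above expresses the same permission as this row from the CSV sample (the Allow / * defaults are inferred from that sample, not stated explicitly):

User:alice,Topic,LITERAL,foo,Read,Allow,*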

It also supports a few helpers to simplify setup, such as the Produce shorthand used in the example above.

Sources

Current sources shipping with KSM include:

Building

sbt clean test
sbt universal:stage

Fat JAR:

sbt clean assembly

This is a Scala app and therefore should run on the JVM like any other application.
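
For example, after building you can launch KSM directly. The staged launcher path below matches the run examples later in this README; the fat-JAR path and version are assumptions based on sbt-assembly's default output layout, so check your target/ directory for the exact file name.

# run the staged launcher produced by sbt universal:stage
target/universal/stage/bin/kafka-security-manager

# or run the fat JAR produced by sbt clean assembly (path and version are assumptions)
java -jar target/scala-2.12/kafka-security-manager-assembly-<version>.jar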

Artifacts

By using the JAR dependency, you can create your own SourceAcl.

Release artifacts are deployed to Maven Central:

build.sbt (see Maven Central for the latest version)

libraryDependencies += "io.conduktor" %% "kafka-security-manager" % "version"

Configuration

Security configuration - Zookeeper client

Make sure the app uses a property file and launch options similar to your broker's so that it can:

  1. Authenticate to Zookeeper using secure credentials (usually done with JAAS)
  2. Apply Zookeeper ACLs if enabled

Kafka Security Manager does not connect to Kafka.

Sample run for a typical SASL Setup:

target/universal/stage/bin/kafka-security-manager -Djava.security.auth.login.config=conf/jaas.conf

Where conf/jaas.conf contains something like:

Client {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    storeKey=true
    keyTab="/etc/kafka/secrets/zkclient1.keytab"
    principal="zkclient/example.com@EXAMPLE.COM";
};

Security configuration - Admin client

When the configured authorizer class is io.conduktor.ksm.compat.AdminClientAuthorizer, Kafka Security Manager uses the Kafka admin client instead of a direct Zookeeper connection. A JAAS configuration example would be:

KafkaClient {
  org.apache.kafka.common.security.plain.PlainLoginModule required
  username="admin"
  password="admin-secret";
};
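
As with the Zookeeper setup, point the JVM at a JAAS file containing this KafkaClient section when launching. The command below simply reuses the launch pattern shown above; the file path is only an example.

target/universal/stage/bin/kafka-security-manager -Djava.security.auth.login.config=conf/jaas.conf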

Configuration file

For the list of configuration options, see application.conf. You can customise them using environment variables, or create your own configuration file and pass it at runtime:

target/universal/stage/bin/kafka-security-manager -Dconfig.file=path/to/config-file.conf

Overall, this project is configured with the Lightbend Config library.
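
A minimal sketch of such a file is shown below. The authorizer.zookeeper.connect key name is an assumption inferred from the AUTHORIZER_ZOOKEEPER_CONNECT environment variable used later in this README; treat application.conf as the authoritative reference for key names.

# path/to/config-file.conf -- key names are assumptions, check application.conf
authorizer.zookeeper.connect = "zookeeper-url:2181"
ksm.extract.enable = false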

Environment variables

The default configuration can be overridden using environment variables; see application.conf for the corresponding variable names.
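
For example, using variables that appear elsewhere in this README:

export AUTHORIZER_ZOOKEEPER_CONNECT="zookeeper-url:2181"
export KSM_EXTRACT_ENABLE=true
target/universal/stage/bin/kafka-security-manager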

Running on Docker

Building the image

./build-docker.sh

Docker Hub

Alternatively, you can get the automatically built Docker images from Docker Hub.

Running

(read above for configuration details)

Then pass the configuration to docker run, for example (in EXTRACT mode):

docker run -it -e AUTHORIZER_ZOOKEEPER_CONNECT="zookeeper-url:2181" -e KSM_EXTRACT_ENABLE=true \
            conduktor/kafka-security-manager:latest

Any of the environment variables described above can be passed to the docker run command with the -e option.

Example

docker-compose up -d
docker-compose logs kafka-security-manager
# view the logs, have fun changing example/acls.csv
docker-compose down

For full usage of the docker-compose file see kafka-security-manager

Extracting ACLs

You can initially extract all your existing ACLs in Kafka by running the program with the config ksm.extract.enable=true or export KSM_EXTRACT_ENABLE=true.
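
For example, a sketch reusing the staged launcher from the Building section (system properties passed this way override the defaults from application.conf):

target/universal/stage/bin/kafka-security-manager -Dksm.extract.enable=true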

Output should look like:

[2018-03-06 21:49:44,704] INFO Running ACL Extraction mode (ExtractAcl)
[2018-03-06 21:49:44,704] INFO Getting ACLs from Kafka (ExtractAcl)
[2018-03-06 21:49:44,704] INFO Closing Authorizer (ExtractAcl)

KafkaPrincipal,ResourceType,PatternType,ResourceName,Operation,PermissionType,Host
User:bob,Group,PREFIXED,bar,Write,Deny,12.34.56.78
User:alice,Topic,LITERAL,foo,Read,Allow,*
User:peter,Cluster,LITERAL,kafka-cluster,Create,Allow,*

You can then place this CSV anywhere and use it as your source of truth.

Compatibility

| KSM Version | Kafka Version | Notes |
|---|---|---|
| 1.1.0-SNAPSHOT | 2.8.x | updated log4j dependency |
| 1.0.1 | 2.8.x | updated log4j dependency |
| 0.11.0 | 2.5.x | renamed packages to io.conduktor. Breaking change on extract config name |
| 0.10.0 | 2.5.x | YAML support. Add configurable num failed refreshes before notification |
| 0.9 | 2.5.x | Upgrade to Kafka 2.5.x |
| 0.8 | 2.3.1 | Add a "run once" mode |
| 0.7 | 2.1.1 | Kafka Based ACL refresher available (no zookeeper dependency) |
| 0.6 | 2.0.0 | important stability fixes - please update |
| 0.5 | 2.0.0 | |
| 0.4 | 2.0.0 | important change: added column 'PatternType' in CSV |
| 0.3 | 1.1.x | |
| 0.2 | 1.1.x | upgrade to 0.3 recommended |
| 0.1 | 1.0.x | might work for earlier versions |

Contributing

The API / configs may break as long as we haven't reached 1.0; each API break introduces a new version number.

PRs are welcome.

Please open an issue before opening a PR.

Release process

That's it!