Description of client-examples

This repository contains example Kafka clients. Both the producer and the consumer clients are assembled into Docker images, which allows them to be deployed on Kubernetes or OpenShift. This may serve as a basic usage example for the Strimzi project.

This repository contains a deployment.yaml file with Deployments for the producer and consumer, as well as KafkaTopic and KafkaUser resources for use by the Strimzi operator. The logging configuration for the producer and the consumer can be found in their respective log4j2.properties files.

Build

To build this example you need some basic requirements: make sure you have make, docker, JDK 1.8 and mvn installed. After cloning this repository, the Hello World example is ready to be built with Maven. A single command compiles the Java sources into JAR files, builds the Docker images and pushes them to a repository. By default the Docker organization to which the images are pushed is taken from the USER environment variable and assigned to the DOCKER_ORG variable. The organization can be changed by exporting a different value for DOCKER_ORG; it can also point to the internal registry of a running OpenShift cluster.

The command for making the examples is:

make all

Note: Be sure that docker and the oc cluster to which the images should be pushed are running.
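
For example, to push the images to your own Docker organization (the organization name below is only a placeholder), the following could be used:

export DOCKER_ORG=<my-docker-org>
make all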

Usage

The basic requirement for running this example is a Kubernetes or OpenShift cluster with Kafka and Zookeeper containers deployed. Examples of how to deploy such a basic configuration can be found in the Strimzi documentation.

After successfully building the images (which will cause the images to be pushed to the specified Docker repository) you are ready to deploy the producer and consumer containers along with Kafka and Zookeeper.

This can be done in two ways:

  • By applying the hello-world-producer.yaml and hello-world-consumer.yaml files. This deploys a producer ready to publish to the topic specified in hello-world-producer.yaml and a consumer ready to subscribe to the topic specified in hello-world-consumer.yaml.
  • By applying the deployment.yaml file. This deploys the producer and the consumer, and also creates the topic, leaving the example in a ready-to-observe state (see the example command after this list).
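
On Kubernetes this could be done with something like the following command (substitute oc for kubectl on OpenShift, and adjust the file name for the first option):

kubectl apply -f deployment.yaml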

Before deploying the producer and the consumer, remember to update the image field with the path to which the image was pushed during the build and where it is available (e.g. <my-docker-org>/hello-world-consumer:latest).
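
For illustration, the relevant part of the consumer Deployment might look like this (the container name and image are only examples):

spec:
  containers:
    - name: hello-world-consumer
      image: <my-docker-org>/hello-world-consumer:latest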

When using the deployment.yaml file for deployment, you can observe the sending of messages in the producer container's log and the receiving of messages in the consumer container's log. The producer sends a message every DELAY_MS milliseconds. Each message is received and printed by the consumer. The consumer runs until MESSAGE_COUNT messages have been received.
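
To follow the logs, something like the following could be used (the Deployment names are assumed to match the YAML file names):

kubectl logs deployment/hello-world-producer -f
kubectl logs deployment/hello-world-consumer -f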

A deployment-ssl.yaml file is also available. It deploys the same producer and consumer applications, but using a TLS connection to the cluster.

Configuration

Although this Hello World is a simple example, it is fully configurable. The environment variables are listed and described below, and a sketch of how they might be set in a deployment follows the producer list.

Producer

  • BOOTSTRAP_SERVERS - a comma-separated list of Kafka broker addresses, each in the form host:port, e.g. my-cluster-kafka-bootstrap:9092
  • TOPIC - the topic the producer will send to
  • DELAY_MS - the delay, in ms, between messages
  • MESSAGE_COUNT - the number of messages the producer should send
  • CA_CRT - the certificate of the CA which signed the brokers' TLS certificates, for adding to the client's trust store
  • USER_CRT - the user's certificate
  • USER_KEY - the user's private key
  • LOG_LEVEL - logging level
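
These variables are set on the producer container in the deployment YAML; a sketch of how they might look is shown below (the values are only illustrative):

env:
  - name: BOOTSTRAP_SERVERS
    value: my-cluster-kafka-bootstrap:9092
  - name: TOPIC
    value: my-topic
  - name: DELAY_MS
    value: "1000"
  - name: MESSAGE_COUNT
    value: "1000000"
  - name: LOG_LEVEL
    value: "INFO"

The consumer is configured in the same way, using the variables listed in the next section.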

Consumer

  • BOOTSTRAP_SERVERS - a comma-separated list of Kafka broker addresses, each in the form host:port, e.g. my-cluster-kafka-bootstrap:9092
  • TOPIC - the topic the consumer will subscribe to
  • GROUP_ID - the consumer group id used by the consumer
  • MESSAGE_COUNT - the number of messages the consumer should receive
  • CA_CRT - the certificate of the CA which signed the brokers' TLS certificates, for adding to the client's trust store
  • USER_CRT - the user's certificate
  • USER_KEY - the user's private key
  • LOG_LEVEL - logging level

Logging configuration is done by setting the EXAMPLE_LOG_LEVEL environment variable. The value of this variable is substituted into the log4j2.properties file under client/src/main/resources. In this file you can configure optional appenders and loggers.
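
A minimal sketch of what such a log4j2.properties might look like, assuming the level is read from an environment variable at runtime (the actual file in the repository may instead substitute the value at build time and use different property names):

# Console appender printing each log line to stdout
appender.console.type = Console
appender.console.name = STDOUT
appender.console.layout.type = PatternLayout
appender.console.layout.pattern = %d{HH:mm:ss} %-5p %m%n

# Root logger level taken from the environment
rootLogger.level = ${env:LOG_LEVEL}
rootLogger.appenderRef.console.ref = STDOUT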
