Commit

refactor project
fhussonnois committed Nov 5, 2020
1 parent 0644003 commit 8f49091
Showing 24 changed files with 2,047 additions and 4,199 deletions.
102 changes: 102 additions & 0 deletions README.adoc
@@ -0,0 +1,102 @@
= Kafka Monitoring Stack for Docker Compose (Prometheus / Grafana)
:toc:
:toc-placement!:

This repository demonstrates how to use Prometheus and Grafana for monitoring an Apache Kafka cluster.

toc::[]

== Getting Started

=== Start Confluent Platform using Docker

**1. Clone the Kafka Monitoring Suite repository.**

[source,bash]
----
$ git clone https://github.com/streamthoughts/kafka-monitoring-suite-demo-prometheus.git
$ cd kafka-monitoring-suite-demo-prometheus
----

**2. Start Confluent/Kafka cluster.**

Deploy Kafka, Prometheus and Grafana services using Docker and Docker Compose.

Note: Depending on your network speed, this may take a few minutes to download all the images.

[source,bash]
----
# Single-node Kafka cluster
$ docker-compose -f zk-kafka-single-node-stack.yml up
# or a 3-node Kafka cluster
$ docker-compose -f zk-kafka-multiple-nodes-stack.yml up
# Single-node Kafka cluster with SASL/PLAINTEXT enabled
$ docker-compose -f zk-kafka-single-node-secured-stack.yml up
# 3-node Kafka cluster with SASL/PLAINTEXT enabled
$ docker-compose -f zk-kafka-multiple-nodes-secured-stack.yml up
----
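Once the containers are started, a quick sanity check confirms the stack is healthy (a sketch using the single-node stack file; substitute the compose file you started):

[source,bash]
----
# Show the state of every service defined in the stack.
$ docker-compose -f zk-kafka-single-node-stack.yml ps

# Follow the logs of all services until the brokers report they are ready.
$ docker-compose -f zk-kafka-single-node-stack.yml logs -f
----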

**3. Create Topic.**

Create `demo-topic` with 6 partitions and 3 replicas.

[source,bash]
----
$ ./bin/kafka-topics --create --partitions 6 --replication-factor 3 --topic demo-topic
----
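To confirm the topic was created as expected, describe it (standard `kafka-topics` usage, assuming the same `bin/` wrapper as above):

[source,bash]
----
# Prints the partition count, replication factor, and per-partition leader/ISR.
$ ./bin/kafka-topics --describe --topic demo-topic
----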

**4. Produce messages.**

Open a new terminal window and generate some messages to simulate producer load.

[source,bash]
----
$ ./bin/kafka-producer-perf-test --throughput 500 --num-records 100000000 --topic demo-topic --record-size 100
----

**5. Consume messages.**

Open a new terminal window and consume messages to simulate consumer load.

[source,bash]
----
$ ./bin/kafka-consumer-perf-test --messages 100000000 --threads 1 --topic demo-topic
----
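If you prefer to inspect the traffic rather than benchmark it, a console consumer works too (this assumes the repository's `bin/` directory ships a matching `kafka-console-consumer` wrapper):

[source,bash]
----
# Read the first 10 records from the beginning of the topic, then exit.
$ ./bin/kafka-console-consumer --topic demo-topic --from-beginning --max-messages 10
----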

**6. Open Grafana.**

Open your favorite web browser and open the provided Grafana dashboard: Kafka Cluster / Global Healthcheck.

(see <<Accessing Grafana Web UI>>)

image:./assets/kafka-cluster-healthcheck.png[kafka-cluster-healthcheck]

=== Accessing Grafana Web UI

Grafana is accessible at http://localhost:3000.

The default credentials are:

* user: `admin`
* password: `kafka`
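These credentials can also be used against Grafana's HTTP API, which is handy for scripting a health check (`/api/health` is a standard Grafana endpoint):

[source,bash]
----
# Returns a small JSON document with the database status and Grafana version.
$ curl -s -u admin:kafka http://localhost:3000/api/health
----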

=== Accessing Prometheus Web UI

Prometheus is accessible at http://localhost:9090.
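Prometheus also exposes its data over a standard HTTP API, which is useful for verifying that the Kafka exporters are being scraped:

[source,bash]
----
# The 'up' metric is 1 for every target Prometheus scraped successfully, 0 otherwise.
$ curl -s 'http://localhost:9090/api/v1/query?query=up'
----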

== Contributions

Any feedback, bug reports and PRs are greatly appreciated!

== License

Copyright 2020 StreamThoughts.

Licensed to the Apache Software Foundation (ASF) under one or more contributor license agreements. See the NOTICE file distributed with this work for additional information regarding copyright ownership. The ASF licenses this file to you under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
85 changes: 0 additions & 85 deletions README.md

This file was deleted.

4 changes: 2 additions & 2 deletions bin/create-admin-scram-users
@@ -16,12 +16,12 @@

set -e;

-configDir=$(readlink -f $0 | xargs dirname)/../etc
+CONFIG_DIR=$(readlink -f $0 | xargs dirname)/etc

echo "Creating new user for SASL/SCRAM-SHA-[256|512]: (username:kafka, password:kafka) - This may take a few seconds..."
docker run -it \
--env KAFKA_OPTS="-Djava.security.auth.login.config=/etc/kafka/secrets/kafka_server_jaas.conf -Dzookeeper.sasl.clientconfig=ZkClient" \
--network="host" \
-  --mount type=bind,source=${configDir}/secrets/,target=/etc/kafka/secrets/ \
+  --mount type=bind,source=${CONFIG_DIR}/secrets/,target=/etc/kafka/secrets/ \
confluentinc/cp-kafka:latest \
/usr/bin/kafka-configs --zookeeper localhost:2181 --alter --add-config 'SCRAM-SHA-256=[iterations=8192,password=kafka],SCRAM-SHA-512=[password=kafka]' --entity-type users --entity-name kafka
