Kafka GUI for topics, topic data, consumer groups, schema registry, connect and more...
- General
- Works with modern Kafka clusters (1.0+)
- Connection to standard, SSL or SASL clusters
- Multi cluster
- Topics
- List
- Configurations view
- Partitions view
- Consumer groups assignments view
- Node leader & assignments view
- Create a topic
- Configure a topic
- Delete a topic
- Browse topic data
- View data, offset, key, timestamp & headers
- Automatic deserialization of Avro messages encoded with the schema registry
- Configurations view
- Logs view
- Delete a record
- Sort view
- Filter by partition
- Filter with a starting time
- Filter data with a search string
- Consumer Groups (only with Kafka internal storage, not with old Zookeeper)
- List with lag and topic assignments
- Partitions view & lag
- Node leader & assignments view
- Display active and pending consumer groups
- Delete a consumer group
- Update consumer group offsets to start / end / timestamp
- Schema Registry
- List schemas
- Create / Update / Delete a schema
- View and delete individual schema version
- Connect
- List connect definitions
- Create / Update / Delete a definition
- Pause / Resume / Restart a definition or a task
- Nodes
- List
- Configurations view
- Logs view
- Configure a node
- Authentication and Roles
- Read only mode
- BasicHttp with roles per user
- Download the docker-compose.yml file
- Run `docker-compose pull` to be sure to have the latest version of KafkaHQ
- Run `docker-compose up`
- Go to http://localhost:8080
It will start a Kafka node, a Zookeeper node, a Schema Registry, a Connect instance, fill them with some sample data, start a consumer group and a Kafka Streams application, and start KafkaHQ.
First you need a configuration file in order to configure KafkaHQ connections to Kafka brokers.
```bash
docker run -d \
    -p 8080:8080 \
    -v /tmp/application.yml:/app/application.yml \
    tchiotludo/kafkahq
```
- With `-v /tmp/application.yml`: the host path must be an absolute path to the configuration file
- Go to http://localhost:8080
- Install Java 11
- Download the latest jar from the release page
- Create a configuration file
- Launch the application with:
```bash
java -Dmicronaut.config.files=/path/to/application.yml -jar kafkahq.jar
```
- Go to http://localhost:8080
The configuration file can by default be provided as Java properties, YAML, JSON or Groovy files. A YAML configuration example can be found here: application.example.yml
`kafkahq.connections` is a key-value configuration with:
- `key`: must be a URL-friendly string (letters, numbers, `_`, `-`; dots are not allowed) that identifies your cluster (`my-cluster-1` and `my-cluster-2` in the sketch below)
- `properties`: all the configurations found in the Kafka consumer documentation. The most important is `bootstrap.servers`, a list of host:port pairs for your Kafka brokers
- `schema-registry`: (optional)
  - `url`: the schema registry URL
  - `basic-auth.username`: schema registry basic auth username
  - `basic-auth.password`: schema registry basic auth password
- `connect`: (optional)
  - `url`: connect URL
  - `basic-auth.username`: connect basic auth username
  - `basic-auth.password`: connect basic auth password
  - `ssl.trust-store`: /app/truststore.jks
  - `ssl.trust-store-password`: trust-store-password
  - `ssl.key-store`: /app/truststore.jks
  - `ssl.key-store-password`: key-store-password
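A minimal sketch of the `kafkahq.connections` structure with two clusters (all host names, ports and credentials below are placeholder assumptions, not values from this project):

```yaml
kafkahq:
  connections:
    my-cluster-1:
      properties:
        bootstrap.servers: "kafka-1:9092"   # placeholder broker
    my-cluster-2:
      properties:
        bootstrap.servers: "kafka-2:9092"   # placeholder broker
      schema-registry:
        url: "http://schema-registry:8085"  # placeholder URL
        basic-auth:
          username: registry-user
          password: registry-password
      connect:
        url: "http://connect:8083"          # placeholder URL
        basic-auth:
          username: connect-user
          password: connect-password
```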
Configuration example for a Kafka cluster secured by SSL from a SaaS provider like Aiven (full HTTPS & basic auth):
You need to generate a JKS & P12 file from the PEM and cert files given by the SaaS provider.
```bash
openssl pkcs12 -export -inkey service.key -in service.cert -out client.keystore.p12 -name service_key
keytool -import -file ca.pem -alias CA -keystore client.truststore.jks
```
The configuration will look like this example:
```yaml
kafkahq:
  connections:
    ssl-dev:
      properties:
        bootstrap.servers: "{{host}}.aivencloud.com:12835"
        security.protocol: SSL
        ssl.truststore.location: {{path}}/avnadmin.truststore.jks
        ssl.truststore.password: {{password}}
        ssl.keystore.type: "PKCS12"
        ssl.keystore.location: {{path}}/avnadmin.keystore.p12
        ssl.keystore.password: {{password}}
        ssl.key.password: {{password}}
      schema-registry:
        url: "https://{{host}}.aivencloud.com:12838"
        basic-auth:
          username: avnadmin
          password: {{password}}
      connect:
        url: "https://{{host}}.aivencloud.com:{{port}}"
        basic-auth:
          username: avnadmin
          password: {{password}}
```
- `kafkahq.topic.page-size`: number of topics per page (default: 25)
- `kafkahq.topic.default-view`: default list view (ALL, HIDE_INTERNAL, HIDE_INTERNAL_STREAM, HIDE_STREAM)
- `kafkahq.topic.internal-regexps`: list of regexps for topics to be considered internal (internal topics can't be deleted or updated)
- `kafkahq.topic.stream-regexps`: list of regexps for topics to be considered internal stream topics
These parameters are the default values used in the topic creation page.
- `kafkahq.topic.retention`: default retention in ms
- `kafkahq.topic.replication`: default number of replicas to use
- `kafkahq.topic.partition`: default number of partitions
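Put together, a sketch of these topic options in YAML (the regexps and values shown are illustrative assumptions, not project defaults except where documented above):

```yaml
kafkahq:
  topic:
    page-size: 25                  # topics per page
    default-view: HIDE_INTERNAL    # hide internal topics in the list
    internal-regexps:
      - "^_.*$"                    # assumed pattern for internal topics
    stream-regexps:
      - "^.*-changelog$"           # assumed pattern for stream topics
    retention: 86400000            # 1 day in ms
    replication: 3
    partition: 3
```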
- `kafkahq.topic-data.sort`: default sort order (OLDEST, NEWEST) (default: OLDEST)
- `kafkahq.topic-data.size`: max records per page (default: 50)
- `kafkahq.topic-data.poll-timeout`: the time, in milliseconds, spent waiting in poll if data is not available in the buffer (default: 1000)
- `kafkahq.consumer-groups.page-size`: number of consumer groups per page (default: 25)
- `kafkahq.schema.page-size`: number of schemas per page (default: 25)
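For reference, a sketch of these browsing and pagination options in YAML, set to their documented defaults:

```yaml
kafkahq:
  topic-data:
    sort: OLDEST        # or NEWEST
    size: 50            # max records per page
    poll-timeout: 1000  # ms spent waiting in poll when no data is buffered
  consumer-groups:
    page-size: 25
  schema:
    page-size: 25
```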
- `kafkahq.security.default-roles`: roles available to every user, even unlogged ones. Available roles are:
  - `topic/read`
  - `topic/insert`
  - `topic/delete`
  - `topic/config/update`
  - `node/read`
  - `node/config/update`
  - `topic/data/read`
  - `topic/data/insert`
  - `topic/data/delete`
  - `group/read`
  - `group/delete`
  - `group/offsets/update`
  - `registry/read`
  - `registry/insert`
  - `registry/update`
  - `registry/delete`
  - `registry/version/delete`
By default, security & roles are enabled, but anonymous users have full access. You can completely disable security with `micronaut.security.enabled: false`.
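Expressed in YAML, that setting is:

```yaml
micronaut:
  security:
    enabled: false
```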
If you need a read-only application, simply add this to your configuration file:
```yaml
kafkahq:
  security:
    default-roles:
      - topic/read
      - node/read
      - topic/data/read
      - group/read
      - registry/read
      - connect/read
```
- `kafkahq.security.basic-auth`: list of users & passwords with their associated roles
  - `actual-username`: login of the current user as a YAML key (may be anything: email, login, ...)
  - `password`: password in SHA-256, can be generated with the command `echo -n "password" | sha256sum`
  - `roles`: roles for the current user
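A sketch of a basic-auth block (the login key and roles are placeholders; the hash shown is the SHA-256 of the string "password"):

```yaml
kafkahq:
  security:
    basic-auth:
      admin@example.com:  # placeholder login, used as the YAML key
        password: "5e884898da28047151d0e56f8dc6292773603d0d6aabbdd62a11ef721d1542d8"
        roles:
          - topic/read
          - topic/data/read
          - group/read
```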
Take care that basic auth uses a session store in server memory. If your instance is behind a reverse proxy or a load balancer, you will need to forward the session cookie named `SESSION` and/or use session stickiness.
- `kafkahq.server.base-path`: if behind a reverse proxy, the path to KafkaHQ with a trailing slash (optional). Example: if KafkaHQ is behind a reverse proxy at http://my-server/kafkahq, set `base-path: "/kafkahq/"`. Not needed if you're behind a reverse proxy with a subdomain such as http://kafkahq.my-server/
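For that reverse-proxy example, the setting in YAML would be:

```yaml
kafkahq:
  server:
    base-path: "/kafkahq/"
```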
- `kafkahq.clients-defaults.{{admin|producer|consumer}}.properties`: default configuration for the admin, producer or consumer clients. All properties from the Kafka documentation are available.
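A sketch, assuming you want every KafkaHQ consumer to read only committed records (`isolation.level` is a standard Kafka consumer property, picked here purely for illustration):

```yaml
kafkahq:
  clients-defaults:
    consumer:
      properties:
        isolation.level: read_committed  # skip aborted transactional records
```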
Since KafkaHQ is based on Micronaut, you can customize configurations (server port, SSL, ...) with Micronaut configuration. More information can be found in the Micronaut documentation.
The KafkaHQ docker image supports 3 environment variables to handle configuration:
- `KAFKAHQ_CONFIGURATION`: a string containing the full configuration in YAML that will be written to /app/configuration.yml in the container
- `MICRONAUT_APPLICATION_JSON`: a string containing the full configuration in JSON format
- `MICRONAUT_CONFIG_FILES`: a path to a configuration file in the container. The default path is `/app/application.yml`
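A sketch of passing the full configuration through `KAFKAHQ_CONFIGURATION` in a docker-compose service (the service name and broker address are placeholders):

```yaml
services:
  kafkahq:
    image: tchiotludo/kafkahq
    ports:
      - "8080:8080"
    environment:
      # full KafkaHQ configuration, written to /app/configuration.yml at startup
      KAFKAHQ_CONFIGURATION: |
        kafkahq:
          connections:
            my-cluster:
              properties:
                bootstrap.servers: "kafka:9092"
```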
Take care when you mount configuration files not to remove the KafkaHQ files located in /app.
You need to explicitly mount `/app/application.yml` and not the whole `/app` directory.
Mounting the directory will remove the KafkaHQ binaries and give you this error: `/usr/local/bin/docker-entrypoint.sh: 9: exec: ./kafkahq: not found`
```yaml
volumeMounts:
  - mountPath: /app/application.yml
    subPath: application.yml
    name: config
    readOnly: true
```
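The mount above needs a matching volume source; a sketch assuming the configuration is stored in a Kubernetes ConfigMap named `kafkahq-config` (that name is an assumption):

```yaml
volumes:
  - name: config
    configMap:
      name: kafkahq-config  # placeholder ConfigMap holding application.yml
```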
Several monitoring endpoints are enabled by default. You can disable them or restrict access to authenticated users only with the Micronaut configuration below.
- `/info`: info endpoint with git status information
- `/health`: health endpoint
- `/loggers`: loggers endpoint
- `/metrics`: metrics endpoint
- `/prometheus`: Prometheus endpoint
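A sketch of restricting these endpoints with Micronaut's standard endpoint settings (`sensitive: true` requires authentication; treat the exact keys as an assumption to verify against the Micronaut documentation for your version):

```yaml
endpoints:
  all:
    enabled: true
    sensitive: true  # only authenticated users can reach the endpoints
```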
A docker-compose file is provided to start a development environment.
Just install docker & docker-compose, clone the repository and issue a simple `docker-compose -f docker-compose-dev.yml up` to start a dev server.
The dev server is a Java server & webpack-dev-server with live reload.
Many thanks to:
JetBrains for their free OpenSource license.
Apache 2.0 © tchiotludo