This repository has been archived by the owner on Jun 23, 2023. It is now read-only.

Ansible playbook to provision and manage a Red Hat AMQ Streams cluster on top of RHEL or Fedora environments.

rmarting/amq-streams-rhel-ansible-playbook


Red Hat AMQ Streams on RHEL Standalone Playbook


🗄️ ARCHIVED 🗄️

This repo is no longer maintained, as there is a great initiative to automate middleware deployments in the Ansible Middleware community. I will try to push new features and improvements there instead of this repo, and share them with a broader community.

The AMQ Streams Collection provides an automated way to install and configure an Apache Kafka cluster and other components. Please review it, and contribute there too.

Hope that helps!!!

About

This Ansible Playbook includes a set of different roles:

  • kafka-install: Deploys a cluster of Red Hat AMQ Streams on several hosts.
  • kafka-uninstall: Uninstalls a cluster of Red Hat AMQ Streams from several hosts.

With the right configuration, these roles allow you to deploy a full Red Hat AMQ Streams cluster on RHEL and automate the most common tasks.

Change Log

Ansible Requirements

This playbook has been tested with the following Ansible version (ansible --version):

ansible 2.10.17

The playbook has to be executed with root permissions, either as the root user or via sudo, because it installs packages and configures the hosts.

RHEL Subscription Required

This role is prepared to be executed on a RHEL 7 or RHEL 8 server. A subscription is required to execute the Grant RHEL repos enabled task, which enables the repositories required to install the Red Hat AMQ Streams dependencies.

Global and Host Variables

Global Variables

Each playbook uses a set of Global Variables whose values are the same for all of them.

These variables are defined in the groups_vars/all.yaml file.

These variables are:

  • binary: defines the AMQ Streams binary location on your Ansible control machine.
# AMQ Streams Installation Source
binary:
  folder: '/tmp/'
  • kafka: defines the Red Hat AMQ Streams version to install. This value forms the path to the binaries: /tmp/amq-streams-{{ kafka['version'] }}-bin.zip
# AMQ Streams Version
kafka:
  version: '2.4.0'
  • user: OS user to execute the AMQ Streams processes.
# OS User to install/execute AMQ Streams
user:
  name: 'kafka'
  shell: '/bin/bash'
  homedir: 'True'
  • java_8_home, java_11_home, and java_17_home: Path to Java Virtual Machine 8 (for RHEL 7), Java Virtual Machine 11 (for RHEL 8), and Java Virtual Machine 17 (for RHEL 8 or Fedora 38) to execute the processes.
# Java Home
java_8_home: '/usr/lib/jvm/jre-1.8.0-openjdk'
java_11_home: '/usr/lib/jvm/jre-11-openjdk'
java_17_home: '/usr/lib/jvm/jre-17-openjdk'
  • kafka_base: AMQ Streams base path where everything will be installed.
kafka_base: '/opt/kafka'
  • kafka_home: AMQ Streams home path where the cluster will be installed. This variable is defined for each host with a name, which allows installing more instances on the same host.
kafka_home: "{{ kafka_base }}/{{ kafka_name }}"
  • zookeeper_data_folder: Path to be used by Apache Zookeeper to store its status.
zookeeper_data_folder: "/var/lib/{{ kafka_name }}/zookeeper"
  • kafka_storage_folder: Path to be used by Apache Kafka to store the partitions.
kafka_storage_folder: "/var/lib/{{ kafka_name }}/kafka"
  • zookeeper_4wl_commands_whitelist: Command whitelist to operate with Zookeeper. More info here
zookeeper_4wl_commands_whitelist: "*"
  • zookeeper_admin_server_enabled: Enable the admin server. More info here
zookeeper_admin_server_enabled: true
  • zookeeper_admin_server_port: Port of the admin server
zookeeper_admin_server_port: 8080
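
Putting it together, a groups_vars/all.yaml consolidating the defaults listed above might look like this (values are the ones shown in the list; adjust for your environment):

# AMQ Streams Installation Source
binary:
  folder: '/tmp/'

# AMQ Streams Version
kafka:
  version: '2.4.0'

# OS User to install/execute AMQ Streams
user:
  name: 'kafka'
  shell: '/bin/bash'
  homedir: 'True'

# Java Home
java_8_home: '/usr/lib/jvm/jre-1.8.0-openjdk'
java_11_home: '/usr/lib/jvm/jre-11-openjdk'
java_17_home: '/usr/lib/jvm/jre-17-openjdk'

# Installation paths
kafka_base: '/opt/kafka'
kafka_home: "{{ kafka_base }}/{{ kafka_name }}"
zookeeper_data_folder: "/var/lib/{{ kafka_name }}/zookeeper"
kafka_storage_folder: "/var/lib/{{ kafka_name }}/kafka"

# Zookeeper settings
zookeeper_4wl_commands_whitelist: "*"
zookeeper_admin_server_enabled: true
zookeeper_admin_server_port: 8080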

Host Variables

Each role uses a set of Host Variables defined in the playbook for each host in the inventory.

These variables are:

  • kafka_name: Logical name to identify each AMQ Cluster Instance. Mandatory.
  • port_offset: Port offset to be added to default AMQ Cluster ports. Mandatory to install different instances in the same host.

These variables should be defined in the playbook as:

  roles:
    - {
        role: kafka-install,
        kafka_name: 'cluster-01',
        port_offset: '0'
      }
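
To illustrate what port_offset means, assuming the role simply adds the offset to the well-known defaults (2181 for Zookeeper clients, 9092 for Kafka listeners), a second instance on the same host with an offset of 100 would listen on:

```shell
# Hypothetical illustration: effective ports for a second instance shifted by 100
PORT_OFFSET=100
ZK_CLIENT_PORT=$((2181 + PORT_OFFSET))   # 2281
KAFKA_PORT=$((9092 + PORT_OFFSET))       # 9192
echo "zookeeper=${ZK_CLIENT_PORT} kafka=${KAFKA_PORT}"
```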

Ansible Roles

kafka-install role

This role deploys an AMQ cluster on a set of hosts.

The main tasks done are:

  • Install AMQ Streams prerequisites and set up host
  • Install AMQ Streams binaries
  • Create OS users and OS services
  • Set up different AMQ Streams features: Clustering, storage folders, ...

Red Hat AMQ Streams binaries should be downloaded from the Red Hat Customer Portal (subscription needed). The binaries should be copied into the /tmp folder of the host where the playbook will be executed.
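
The archive name the role looks for can be derived from the binary and kafka variables above; a quick sanity check on the control machine (a sketch, assuming the default values shown earlier):

```shell
# Build the expected path: {{ binary.folder }}amq-streams-{{ kafka.version }}-bin.zip
BINARY_FOLDER='/tmp/'
KAFKA_VERSION='2.4.0'
ARCHIVE="${BINARY_FOLDER}amq-streams-${KAFKA_VERSION}-bin.zip"
echo "$ARCHIVE"   # /tmp/amq-streams-2.4.0-bin.zip
ls -l "$ARCHIVE" || echo "Download the archive from the Red Hat Customer Portal first"
```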

Configuration parameters

The role's execution can be configured with the variables described above.

Example playbook

To execute this playbook:

ansible-playbook -i hosts kafka-install.yaml

Inventory (host file):

[amq_lab_environment]
rh8amq01
rh8amq02
rh8amq03

[zookeepers]
rh8amq01
rh8amq02
rh8amq03

[brokers]
rh8amq01
rh8amq02
rh8amq03

Playbook (kafka-install.yaml file):

---
- name: Install Playbook of an AMQ Streams Environment
  hosts: amq_lab_environment
  serial: 1
  remote_user: root
  gather_facts: true
  become: yes
  become_user: root
  become_method: sudo
  roles:
    - {
        role: kafka-install,
        kafka_name: 'cluster-01',
        port_offset: '0'
      }

Verify Installation

Once all Zookeeper nodes of the cluster are up and running, verify that all nodes are members of the Zookeeper cluster by sending a stat command to each of the nodes using the ncat utility.

echo stat | ncat localhost 2181
Zookeeper version: 3.4.14-redhat-00001-a70f9d20b6373085f9a94183a1324a57cdaead75, built on 05/30/2019 13:26 GMT
Clients:
 /127.0.0.1:52252[1](queued=0,recved=222,sent=228)
 /0:0:0:0:0:0:0:1:57906[0](queued=0,recved=1,sent=0)
 /192.168.122.205:59026[1](queued=0,recved=78,sent=78)

Latency min/avg/max: 0/2/20
Received: 303
Sent: 308
Connections: 3
Outstanding: 0
Zxid: 0x100000059
Mode: follower
Node count: 51
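
To check the role of every node at a glance, the Mode line can be filtered out of each stat response (a sketch; the host names and the default client port 2181 are assumptions from the inventory above):

```shell
# Extract the "Mode:" value (leader/follower) from a Zookeeper stat response
stat_mode() {
  grep '^Mode:' | awk '{print $2}'
}

# Example usage against each Zookeeper node:
# for host in rh8amq01 rh8amq02 rh8amq03; do
#   printf '%s: %s\n' "$host" "$(echo stat | ncat "$host" 2181 | stat_mode)"
# done
```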

Once all nodes of the Kafka brokers are up and running, verify that all nodes are members of the Kafka cluster by sending a dump command to one of the Zookeeper nodes using the ncat utility. The command prints all Kafka brokers registered in Zookeeper.

echo dump | ncat localhost 2181
SessionTracker dump:
org.apache.zookeeper.server.quorum.LearnerSessionTracker@1d09eac4
ephemeral nodes dump:
Sessions with Ephemerals (3):
0x2000050aba60000:
	/brokers/ids/2
0x1000050954d0001:
	/brokers/ids/1
0x1000050954d0000:
	/controller
	/brokers/ids/0

Another alternative is to use the Zookeeper Admin Server, which provides an HTTP endpoint for the same commands.

  • stat command:
❯ curl http://localhost:8080/commands/stat
{
  "version" : "3.6.3--15575ad0cadaccd19f84de5d488d1cfb00572a84-dirty, built on 04/21/2023 09:02 GMT",
  "read_only" : false,
  "server_stats" : {
    "packets_sent" : 521,
    "packets_received" : 521,
    "fsync_threshold_exceed_count" : 0,
    "client_response_stats" : {
      "last_buffer_size" : 16,
      "min_buffer_size" : 16,
      "max_buffer_size" : 16
    },
    "provider_null" : false,
    "server_state" : "leader",
    "outstanding_requests" : 0,
    "min_latency" : 0,
    "avg_latency" : 0.5058,
    "max_latency" : 4,
    "data_dir_size" : 134232120,
    "log_dir_size" : 134232120,
    "last_processed_zxid" : 17179869184,
    "num_alive_client_connections" : 3,
    "auth_failed_count" : 0,
    "non_mtlsremote_conn_count" : 0,
    "non_mtlslocal_conn_count" : 0,
    "uptime" : 1035502
  },
  "client_response" : {
    "last_buffer_size" : 16,
    "min_buffer_size" : 16,
    "max_buffer_size" : 16
  },
  "proposal_stats" : {
    "last_buffer_size" : -1,
    "min_buffer_size" : -1,
    "max_buffer_size" : -1
  },
  "node_count" : 53,
  "connections" : [ {
    "remote_socket_address" : "192.168.122.186:50150",
    "interest_ops" : 1,
    "outstanding_requests" : 0,
    "packets_received" : 174,
    "packets_sent" : 174
  }, {
    "remote_socket_address" : "192.168.122.182:39268",
    "interest_ops" : 1,
    "outstanding_requests" : 0,
    "packets_received" : 174,
    "packets_sent" : 174
  }, {
    "remote_socket_address" : "192.168.122.205:52686",
    "interest_ops" : 1,
    "outstanding_requests" : 0,
    "packets_received" : 173,
    "packets_sent" : 173
  } ],
  "secure_connections" : [ ],
  "command" : "stats",
  "error" : null
}
  • dump command:
❯ curl http://localhost:8080/commands/dump
{
  "expiry_time_to_session_ids" : {
    "3120000" : [ ],
    "3122000" : [ ],
    "3126000" : [ ],
    "3128000" : [ ],
    "3132000" : [ 144115223352049664 ],
    "3134000" : [ 72057629377888256, 144115223352049665 ]
  },
  "session_id_to_ephemeral_paths" : {
    "72057629377888256" : [ "/brokers/ids/2" ],
    "144115223352049664" : [ "/controller", "/brokers/ids/0" ],
    "144115223352049665" : [ "/brokers/ids/1" ]
  },
  "command" : "dump",
  "error" : null
}

kafka-uninstall role

This role uninstalls an AMQ Streams cluster from the inventory hosts.

The main tasks done are:

  • Stop AMQ Streams services
  • Remove OS services
  • Remove AMQ Streams binaries

Configuration parameters

The role's execution can be configured with the variables described above.

Example playbook

To execute this playbook:

ansible-playbook -i hosts kafka-uninstall.yaml

Inventory (host file):

[amq_lab_environment]
rh8amq01
rh8amq02
rh8amq03

[zookeepers]
rh8amq01
rh8amq02
rh8amq03

[brokers]
rh8amq01
rh8amq02
rh8amq03

Playbook (kafka-uninstall.yaml file):

---
- name: Uninstall Playbook of an AMQ Streams Environment
  hosts: amq_lab_environment
  remote_user: root
  gather_facts: true
  become: yes
  become_user: root
  become_method: sudo
  roles:
    - {
        role: kafka-uninstall,
        kafka_name: 'cluster-01'
      }

Sample Kafka commands

If you want to test and verify your new Kafka cluster, the following commands can help you:

  • Create a new topic:
./bin/kafka-topics.sh --bootstrap-server localhost:9092 --create --topic sample --partitions 10 --replication-factor 3
  • Describe a topic
./bin/kafka-topics.sh --bootstrap-server localhost:9092 --describe --topic sample

The output should be similar to:

Topic: sample	PartitionCount: 10	ReplicationFactor: 3	Configs: min.insync.replicas=1,cleanup.policy=delete,segment.bytes=1073741824,message.format.version=2.3-IV1,max.message.bytes=52428800,delete.retention.ms=3600000
	Topic: sample	Partition: 0	Leader: 2	Replicas: 2,0,1	Isr: 2,0,1
	Topic: sample	Partition: 1	Leader: 1	Replicas: 1,2,0	Isr: 1,2,0
	Topic: sample	Partition: 2	Leader: 0	Replicas: 0,1,2	Isr: 0,1,2
	Topic: sample	Partition: 3	Leader: 2	Replicas: 2,1,0	Isr: 2,1,0
	Topic: sample	Partition: 4	Leader: 1	Replicas: 1,0,2	Isr: 1,0,2
	Topic: sample	Partition: 5	Leader: 0	Replicas: 0,2,1	Isr: 0,2,1
	Topic: sample	Partition: 6	Leader: 2	Replicas: 2,0,1	Isr: 2,0,1
	Topic: sample	Partition: 7	Leader: 1	Replicas: 1,2,0	Isr: 1,2,0
	Topic: sample	Partition: 8	Leader: 0	Replicas: 0,1,2	Isr: 0,1,2
	Topic: sample	Partition: 9	Leader: 2	Replicas: 2,1,0	Isr: 2,1,0
  • List topics
./bin/kafka-topics.sh --bootstrap-server localhost:9092 --list

Main References
