Deploying a Solace PubSub+ Software Message Broker HA group onto a Google Kubernetes Engine (GKE) cluster

Purpose of this repository

This repository expands on the Solace Kubernetes Quickstart to show you how to deploy Solace PubSub+ software message brokers in an HA configuration on a 3-node Google Kubernetes Engine (GKE) cluster spread across 3 zones.

(Diagram: HA deployment architecture, showing the traffic types listed below)

  • Purple - Data – Client data including active node management.
  • Blue - DNS – HA node discovery.
  • Black - Disk – Persistent disk mount.
  • Orange/Yellow - Mgmt – Direct CLI/SEMP.

Description of Solace PubSub+ Software Message Broker

The Solace PubSub+ software message broker meets the needs of big data, cloud migration, and Internet-of-Things initiatives, and enables microservices and event-driven architecture. Capabilities include topic-based publish/subscribe, request/reply, message queues/queueing, and data streaming for IoT devices and mobile/web apps. The message broker supports open APIs and standard protocols including AMQP, JMS, MQTT, REST, and WebSocket. It can also be deployed in on-premises datacenters, natively within private and public clouds, and across complex hybrid cloud environments.

How to Deploy a Solace PubSub+ Software Message Broker onto GKE

This is a 5-step process:

Step 1: Create a project in Google Cloud Platform and enable prerequisites

  • In the Cloud Platform Console, go to the Manage Resources page and select or create a new project.

    GO TO THE MANAGE RESOURCES PAGE

  • Enable billing for your project by following this link.

    ENABLE BILLING

  • Enable the Container Registry API by following this link and selecting the project you created above.

    ENABLE THE API
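
If you prefer the command line, the same setup can also be sketched with the Google Cloud SDK. This is a hedged example: <PROJECT_ID> and <BILLING_ACCOUNT_ID> are placeholders to substitute, and linking billing uses a beta gcloud command.

# Create and select the project (placeholders must be substituted)
gcloud projects create <PROJECT_ID>
gcloud config set project <PROJECT_ID>
# Link a billing account (beta command) and enable the required APIs
gcloud beta billing projects link <PROJECT_ID> --billing-account=<BILLING_ACCOUNT_ID>
gcloud services enable containerregistry.googleapis.com container.googleapis.com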



Step 2: Obtain a reference to the docker image of the Solace PubSub+ message broker to be deployed

First, decide which Solace PubSub+ message broker and version is suitable for your use case.

The docker image reference can be:

  • A public or accessible private docker registry repository name with an optional tag. This is the recommended option if using PubSub+ Standard. The default is to use the latest message broker image available from Docker Hub as solace/solace-pubsub-standard:latest, or use a specific version tag.

  • A docker image download URL

    • If using Solace PubSub+ Enterprise Evaluation Edition, go to the Solace Downloads page. For the image reference, copy and use the download URL in the Solace PubSub+ Enterprise Evaluation Edition Docker Images section.

      PubSub+ Enterprise Evaluation Edition Docker Image - 90-day trial version of PubSub+ Enterprise. Use the "Get URL of Evaluation Docker Image" link.
    • If you have purchased a Docker image of Solace PubSub+ Enterprise, Solace will give you information on how to download the compressed tar archive package from a secure Solace server. Contact Solace Support at support@solace.com if you require assistance. You can then host this tar archive, together with its MD5 file, on a file server and use the download URL as the image reference.
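
Whichever image reference you choose, it can help to verify it before deploying. For the Docker Hub option, a minimal local check might look like the following (assumes Docker is installed locally; the "latest" tag is just an example):

# Pull the image and confirm it is present locally
docker pull solace/solace-pubsub-standard:latest
docker images solace/solace-pubsub-standard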

Step 3 (Optional): Place the message broker in Google Container Registry, using a script

Hint: You may skip this step if using the free PubSub+ Standard Edition available from the Solace public Docker Hub registry. The docker registry reference to use will be solace/solace-pubsub-standard:<TagName>.

  • The script can be executed from a locally installed Google Cloud SDK shell, or from a Google Cloud Shell opened in the Cloud Platform Console.

    • If using the Google Cloud SDK shell, also set up the following dependencies:

      • docker, gcloud and kubectl installed
      • run gcloud init to set up your account locally
      • proper Google Cloud permissions set: the container.clusterRoleBindings.create permission is required
    • If using the Cloud Shell from the Cloud Platform Console, it can be started in the browser from the red underlined icon in the upper right:

(Screenshot: the Cloud Shell icon in the Cloud Platform Console)



  • In the shell, paste the download URL of the Solace PubSub+ software message broker Docker image from Step 2. As an alternative to using the download link, you can also load image versions hosted remotely (if so, a .md5 file needs to be created in the same remote directory).
wget https://raw.githubusercontent.com/SolaceProducts/solace-gke-quickstart/master/scripts/copy_solace_image_to_gkr.sh
chmod 755 copy_solace_image_to_gkr.sh
./copy_solace_image_to_gkr.sh -u <DOWNLOAD_URL>

  • The script ends by displaying the SOLACE_IMAGE_URL link required for Step 5. You can view the new entry in the Google Container Registry in the Cloud Platform Console:

(Screenshot: the new message broker image entry in the Google Container Registry)
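
The upload can also be confirmed from the command line. This is a hedged check; substitute your own project ID and the image name reported by the script:

# List the repositories and tags in your project's Container Registry
gcloud container images list --repository=gcr.io/<PROJECT_ID>
gcloud container images list-tags gcr.io/<PROJECT_ID>/<IMAGE_NAME>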



Step 4: Use Google Cloud SDK or Cloud Shell to create the three node GKE cluster

  • Download and execute the cluster creation script. The default values for all of the script's arguments are fine for setting up and running a single message broker; however, some need to be changed to support the 3-node HA cluster. If you want to run the HA cluster in a single GCP zone, specify -n 3 as the number of nodes per zone and a single -z <zone>. If you want the HA cluster spread across 3 zones within a region - the configuration recommended for production - specify the 3 zones as in the example below and leave the number of nodes per zone at the default value of 1.
wget https://raw.githubusercontent.com/SolaceProducts/solace-gke-quickstart/master/scripts/create_cluster.sh
chmod 755 create_cluster.sh
./create_cluster.sh -z us-central1-b,us-central1-c,us-central1-f

This will create a GKE cluster of 3 nodes spread across 3 zones:

(Screenshot: the 3-node GKE cluster in the Cloud Platform Console)

Here are two more create_cluster.sh arguments you may want to change for your deployment:

  • solace-message-broker-cluster: The default cluster name, which can be changed by specifying the -c <cluster name> command line argument.

  • n1-standard-4: The default machine type. To use a different Google machine type, specify -m <machine type>. Note that the minimum CPU and memory requirements must be satisfied for the targeted message broker size; see the next step.
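
Combining these options, a hypothetical invocation for a custom-named cluster on larger machines could look like the following (values are illustrative only; check the script's source for the full argument list):

# Example only: custom cluster name, larger machine type, 3 zones with 1 node each
./create_cluster.sh -c my-solace-cluster -m n1-standard-8 -z us-central1-b,us-central1-c,us-central1-f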


You can check that the Kubernetes deployment on GKE is healthy with the following command (which should return a single line with svc/kubernetes):

kubectl get services

If this fails, you will need to troubleshoot GKE.
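
A couple of additional cluster-level checks can also be useful at this point; these are standard gcloud and kubectl commands, and the node count reported should match your -n and -z choices:

# Confirm the cluster exists and all nodes have registered
gcloud container clusters list
kubectl get nodes -o wide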

Also note that during the installation of the GKE cluster and the Solace HA release, several GCP resources are created, such as GCE nodes, disks, and load balancers. After deleting a Kubernetes release, you should validate that all of its resources are also deleted. The Solace Kubernetes Quickstart describes how to delete a release. If it is necessary to delete the GKE cluster, refer to the Google Cloud Platform documentation.
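
As a hedged starting point for that validation and cleanup, the following commands list resources that are commonly left behind and delete the cluster itself (substitute your cluster name and zone):

# Look for leftover disks and load balancer forwarding rules after deleting a release
gcloud compute disks list
gcloud compute forwarding-rules list
# Delete the GKE cluster when it is no longer needed
gcloud container clusters delete <CLUSTER_NAME> --zone <ZONE>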



Step 5: Use Google Cloud SDK or Cloud Shell to deploy Solace message broker Pods and Service to that cluster

This will finish with a message broker HA configuration deployed to GKE.

  • Retrieve the Solace Kubernetes QuickStart from GitHub:
mkdir ~/workspace; cd ~/workspace
git clone https://github.com/SolaceProducts/solace-kubernetes-quickstart.git
cd solace-kubernetes-quickstart
  • Update the Solace Kubernetes helm chart values.yaml configuration file for your target deployment with the help of the Kubernetes quick start configure.sh script. (Please refer to the Solace Kubernetes QuickStart for further details).

    Notes:

    • Providing -i SOLACE_IMAGE_URL (see Step 3) is optional if using the latest Solace PubSub+ Standard edition message broker image from the Solace public Docker Hub registry.
    • Set the cloud provider option to -c gcp because you are deploying to Google Cloud Platform.

Execute the configuration script, which will install the helm tool if it is not already present and then customize the Solace helm chart. The chart will then be ready for creating a production HA message broker deployment, supporting up to 1000 connections and using provisioned PersistentVolume (PV) storage. For other deployment configuration options, refer to the Solace Kubernetes Quickstart README.

cd ~/workspace/solace-kubernetes-quickstart/solace
# Substitute <ADMIN_PASSWORD> with the desired password for the management "admin" user.
../scripts/configure.sh -p <ADMIN_PASSWORD> -c gcp -v values-examples/prod1k-persist-ha-provisionPvc.yaml -i <SOLACE_IMAGE_URL> 
# Initiate the deployment
helm install . -f values.yaml
# Wait until all pods are running and ready and the active message broker pod is labeled "active=true"
watch kubectl get statefulset,service,pods,pvc,pv --show-labels

Additional notes:

  • If you need to repair or modify the deployment, refer to this section.
  • If using Google Cloud Shell, the helm installation may be lost because of known limitations. If the helm command no longer responds, run ../scripts/configure.sh -r to repair the helm installation.
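
A quick way to confirm whether helm is still functional in your shell session is sketched below (this assumes Helm v2 with Tiller, which matches the chart deployment above):

helm version   # should report both the Client and the Server (Tiller) version
helm list      # should list the Solace release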

Validate the Deployment

Now you can validate your deployment:

prompt:~$ kubectl get statefulsets,services,pods,pvc,pv
NAME                          DESIRED   CURRENT   AGE
statefulsets/XXX-XXX-solace   3         3         4d

NAME                           TYPE           CLUSTER-IP      EXTERNAL-IP      PORT(S)                                       AGE
svc/XXX-XXX-solace             LoadBalancer   10.19.242.217   107.178.210.65   22:30238/TCP,8080:31684/TCP,55555:32120/TCP   4d
svc/XXX-XXX-solace-discovery   ClusterIP      None            <none>           8080/TCP                                      4d
svc/kubernetes                 ClusterIP      10.19.240.1     <none>           443/TCP                                       4d

NAME                  READY     STATUS    RESTARTS   AGE
po/XXX-XXX-solace-0   1/1       Running   0          4d
po/XXX-XXX-solace-1   1/1       Running   0          4d
po/XXX-XXX-solace-2   1/1       Running   0          4d

NAME                        STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS            AGE
pvc/data-XXX-XXX-solace-0   Bound     pvc-47e3bd45-53ce-11e8-bda4-42010a800031   30Gi       RWO            XXX-XXX-standard   4d
pvc/data-XXX-XXX-solace-1   Bound     pvc-47e826a0-53ce-11e8-bda4-42010a800031   30Gi       RWO            XXX-XXX-standard   4d
pvc/data-XXX-XXX-solace-2   Bound     pvc-47ef4d7c-53ce-11e8-bda4-42010a800031   30Gi       RWO            XXX-XXX-standard   4d

NAME                                          CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS    CLAIM                           STORAGECLASS       REASON    AGE
pv/pvc-47e3bd45-53ce-11e8-bda4-42010a800031   30Gi       RWO            Delete           Bound     default/data-XXX-XXX-solace-0   XXX-XXX-standard             4d
pv/pvc-47e826a0-53ce-11e8-bda4-42010a800031   30Gi       RWO            Delete           Bound     default/data-XXX-XXX-solace-1   XXX-XXX-standard             4d
pv/pvc-47ef4d7c-53ce-11e8-bda4-42010a800031   30Gi       RWO            Delete           Bound     default/data-XXX-XXX-solace-2   XXX-XXX-standard             4d



$ kubectl describe service XXX-XXX-solace
Name:                     XXX-XXX-solace
Namespace:                default
Labels:                   app=solace
                          chart=solace-0.3.0
                          heritage=Tiller
                          release=XXX-XXX
Annotations:              <none>
Selector:                 active=true,app=solace,release=XXX-XXX
Type:                     LoadBalancer
IP:                       10.19.242.217
LoadBalancer Ingress:     107.178.210.65
Port:                     ssh  22/TCP
TargetPort:               22/TCP
NodePort:                 ssh  30238/TCP
Endpoints:                10.16.0.10:22
Port:                     semp  8080/TCP
TargetPort:               8080/TCP
NodePort:                 semp  31684/TCP
Endpoints:                10.16.0.10:8080
Port:                     smf  55555/TCP
TargetPort:               55555/TCP
NodePort:                 smf  32120/TCP
Endpoints:                10.16.0.10:55555
Session Affinity:         None
External Traffic Policy:  Cluster
:
:

Note here that there are several IPs and ports. In this example 107.178.210.65 is the external Public IP to use, indicated as "LoadBalancer Ingress". This can also be seen from the Google Cloud Console:

(Screenshot: the load balancer external IP in the Google Cloud Console)
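
The external IP can also be retrieved directly with kubectl. This is a hedged one-liner; replace XXX-XXX-solace with your release's actual service name from the output above:

kubectl get service XXX-XXX-solace -o jsonpath='{.status.loadBalancer.ingress[0].ip}'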

Viewing bringup logs

It is possible to watch the message broker come up via logs in the Google Cloud Platform logging stack. Inside Logging, look for the GKE container called solace-message-broker-cluster. In the example below, the Solace admin password was not set, so the container could not come up and exited.

(Screenshot: message broker bringup logs in Google Cloud Platform Logging)
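
If you prefer the command line, gcloud can read the same logs. This is a hedged example; the exact resource type and labels depend on your cluster's logging configuration:

gcloud logging read 'resource.type="k8s_container" AND resource.labels.cluster_name="solace-message-broker-cluster"' --limit=20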



Gaining admin and ssh access to the message broker

The external management IP will be the Public IP associated with your GCE instance. Access will go through the load balancer service as described in the introduction and will always point to the active message broker. The default port is 22 for CLI and 8080 for SEMP/SolAdmin.
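
For example, assuming 107.178.210.65 is your load balancer's external Public IP as in the earlier output, and <ADMIN_PASSWORD> is the password you configured, access could be sketched as:

# Solace CLI over the load balancer's port 22
ssh admin@107.178.210.65
# Basic check that the SEMP management port responds (prints the HTTP status code)
curl -s -o /dev/null -w '%{http_code}\n' -u admin:<ADMIN_PASSWORD> http://107.178.210.65:8080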

See the Solace Kubernetes Quickstart README for more details including admin and ssh access to the individual message brokers.

Testing Data access to the message broker

To test data traffic through the newly created message broker instance, visit the Solace Developer Portal and select your preferred programming language to send and receive messages. Under each language there is a Publish/Subscribe tutorial that will help you get started.

Note: The Host will be the Public IP. It may be necessary to open up external access to a port used by the particular messaging API if it is not already exposed.

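As a quick, hedged connectivity check before running one of the tutorial clients (assumes netcat is available and uses the SMF port 55555 exposed by the service shown earlier):

nc -zv <PUBLIC_IP> 55555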


Modifying, upgrading or deleting the deployment

Refer to the Solace Kubernetes QuickStart

Contributing

Please read CONTRIBUTING.md for details on our code of conduct, and the process for submitting pull requests to us.

Authors

See the list of contributors who participated in this project.

License

This project is licensed under the Apache License, Version 2.0 - see the LICENSE file for details.

Resources

For more information about Solace technology in general please visit these resources: