
Helm Charts for Digital.ai Release on Kubernetes (BETA)

This repository contains Helm charts for the Digital.ai (formerly XebiaLabs) Release product. The Helm chart automates and simplifies deploying Digital.ai Release clusters on Kubernetes and other Kubernetes-enabled platforms, providing the essential features you need to keep your clusters up and running.

Prerequisites

  • Kubernetes v1.17+
  • A running Kubernetes cluster with:
    • Dynamic storage provisioning enabled
    • A StorageClass for persistent storage. The Installing StorageClass Helm Chart section provides steps to install a storage class on an OnPremise Kubernetes cluster and on an AWS Elastic Kubernetes Service (EKS) cluster.
    • The StorageClass to be used with Digital.ai Release set as the default StorageClass
  • kubectl installed and set up to use the cluster
  • Helm 3 installed
  • License file for Digital.ai Release in base64-encoded format (see the encoding sketch after this list)
  • Repository keystore file in base64-encoded format
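
A minimal encoding sketch, assuming the files are named xl-release.lic and keystore.jks (adjust to your actual file names):

base64 -w 0 xl-release.lic   # license; -w 0 disables line wrapping (GNU coreutils)
base64 -w 0 keystore.jks     # repository keystore; on macOS use 'base64 -i <file>' instead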

Chart Details

This chart deploys the following components:

  • PostgreSQL single instance / pod

(NOTE: for production-grade installations, an external PostgreSQL database is recommended. Alternatively, you can run PostgreSQL in HA mode on Kubernetes; for more information, refer to the Crunchy PostgreSQL Operator.)

  • RabbitMQ in highly available configuration
  • HAProxy ingress controller
  • Digital.ai Release in highly available configuration

Tested Configuration

  • Supported platforms: OnPremise Kubernetes, AWS Elastic Kubernetes Service (EKS)
  • Storage: Network File System (NFS), AWS Elastic File System (EFS)
  • Messaging queue: RabbitMQ
  • Database: PostgreSQL
  • Load balancers: HAProxy Ingress Controller

Installing StorageClass Helm Chart

If you plan to use NFS or EFS, proceed with the installation steps below; if you are using a different storage class, you can skip this section.

NFS Client Provisioner for OnPremise Kubernetes cluster

  • To deploy this Helm chart, an NFS server and an NFS mount path are required.
  • Before installing the NFS provisioner Helm chart, add the stable Helm repository to your Helm client as shown below:
helm repo add stable https://charts.helm.sh/stable
  • To install the chart with the release name nfs-provisioner:
helm install nfs-provisioner --set nfs.server=x.x.x.x --set nfs.path=/exported/path stable/nfs-client-provisioner
  • The nfs-provisioner storage class must be marked with the default annotation so that PersistentVolumeClaim objects (without a StorageClass specified) will trigger dynamic provisioning.
kubectl patch storageclass nfs-provisioner -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
  • After deploying the NFS Helm chart, run the command below to get the StorageClass name, which is used in values.yaml for the Persistence.StorageClass parameter (see the sketch after this list):
kubectl get storageclass
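
A minimal values.yaml sketch wiring this storage class into the chart, assuming it is named nfs-provisioner as above:

# values.yaml (sketch) for an OnPremise cluster
K8sSetup:
  Platform: PlainK8s            # allowed values: PlainK8s, AWSEKS
Persistence:
  Enabled: true
  StorageClass: nfs-provisioner # name reported by 'kubectl get storageclass'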

For more information on nfs-client-provisioner, refer to stable/nfs-client-provisioner.

Elastic File System for AWS Elastic Kubernetes Service (EKS) cluster

Before deploying the EFS Helm chart, perform the following steps:

  • Create your EFS file system. Refer to Create Your Amazon EFS File System.
  • Create a mount target. Refer to Creating mount targets.
  • Before installing the EFS provisioner Helm chart, add the stable Helm repository to your Helm client as shown below:
helm repo add stable https://charts.helm.sh/stable
  • Provide the efsFileSystemId and awsRegion obtained in the steps above, and install the chart with the release name aws-efs:
helm install aws-efs stable/efs-provisioner --set efsProvisioner.efsFileSystemId=fs-12345678 --set efsProvisioner.awsRegion=us-east-2
  • The aws-efs storage class must be marked with the default annotation so that PersistentVolumeClaim objects (without a StorageClass specified) will trigger dynamic provisioning.
kubectl patch storageclass aws-efs -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
  • Only one storage class may be marked as default, so remove the default annotation from any other storage classes:
kubectl patch storageclass gp2 -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
  • After deploying the EFS Helm chart, run the command below to get the StorageClass name, which is used in values.yaml for the Persistence.StorageClass parameter (see the sketch after this list):
kubectl get storageclass
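
The equivalent values.yaml sketch for EKS, assuming the storage class is named aws-efs as above:

# values.yaml (sketch) for an EKS cluster
K8sSetup:
  Platform: AWSEKS
Persistence:
  Enabled: true
  StorageClass: aws-efs         # name reported by 'kubectl get storageclass'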

For more information on efs-provisioner, refer to stable/efs-provisioner.

Installing the Digital.ai Release Helm Chart

Get the chart by cloning this repository:

git clone https://github.com/xebialabs/xl-release-kubernetes-helm-chart.git

The Parameters section lists the parameters that can be configured before installation. Before installing the chart, update its dependencies:

helm dependency update xl-release-kubernetes-helm-chart

To install the chart with the release name xlr-production:

helm install xlr-production xl-release-kubernetes-helm-chart
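
To override parameters at install time, standard Helm flags apply; a sketch, assuming your overrides live in a file named my-values.yaml (hypothetical name):

helm install xlr-production xl-release-kubernetes-helm-chart --values my-values.yaml

Individual parameters can also be set inline with --set, for example --set replicaCount=3.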

Access Digital.ai Release Dashboard

By default, a NodePort service is exposed externally on the available Kubernetes worker nodes and can be seen by running the command below:

kubectl get service

For production-grade setups, we recommend using LoadBalancer as the service type.

For an OnPremise cluster, you can access the Digital.ai Release UI from outside the cluster at:

http://ingress-loadbalancer-DNS:NodePort/xl-release/
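
A sketch for looking up the assigned NodePort, assuming the ingress controller service is named xlr-production-haproxy-ingress (check the kubectl get service output for the actual name):

kubectl get service xlr-production-haproxy-ingress -o jsonpath='{.spec.ports[0].nodePort}'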

Similarly, for EKS, access the Digital.ai Release UI at:

http://ingress-loadbalancer-DNS/xl-release/

The path should be unique across the Kubernetes cluster (e.g., "/xl-release/").

Uninstalling the Digital.ai Release Helm Chart

To uninstall/delete the xlr-production deployment:

helm delete xlr-production
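
Note that PersistentVolumeClaims created for the stateful components are typically not removed by helm delete. A cleanup sketch, assuming the subcharts label their PVCs with the release name (verify with the first command before deleting anything):

kubectl get pvc --show-labels
kubectl delete pvc -l release=xlr-production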

Parameters

For deployment in a production environment, all parameters need to be configured according to your requirements and the Kubernetes setup in use. For deployment in a test environment, most of the default values will suffice. The following parameters must be configured; the rest can keep their defaults (a minimal values.yaml sketch follows this list):

  • xlrLicense: License for Digital.ai Release, in base64 format
  • Persistence.StorageClass: Storage class to use: Network File System (NFS) for OnPremise, or Elastic File System (EFS) for AWS Elastic Kubernetes Service (EKS)
  • ingress.hosts: DNS name for accessing the UI of Digital.ai Release
  • RepositoryKeystore: Repository keystore for Digital.ai Release, in base64 format
  • KeystorePassphrase: Passphrase for the RepositoryKeystore
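
A minimal values.yaml sketch covering the required parameters; every value below is a placeholder:

# values.yaml (sketch): required parameters only
xlrLicense: <base64-encoded content of xl-release.lic>
RepositoryKeystore: <base64-encoded content of keystore.jks>
KeystorePassphrase: <passphrase for keystore.jks>
Persistence:
  StorageClass: <storage class name, e.g. nfs-provisioner or aws-efs>
ingress:
  hosts: <DNS name for the Release UI>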

The following table lists the configurable parameters of the Digital.ai Release chart and their default values.

Parameter | Description | Default
K8sSetup.Platform | Platform on which to install the chart; allowed values are PlainK8s and AWSEKS | PlainK8s
replicaCount | Number of replicas | 3
ImageRepository | Image name | xebialabs/xl-release
ImageTag | Image tag | 9.7
ImagePullPolicy | Image pull policy. Defaults to 'Always' if the image tag is 'latest'; otherwise set to 'IfNotPresent' | Always
ImagePullSecret | docker-registry secret names; secrets must be created manually in the namespace | nil
haproxy-ingress.install | Install the haproxy subchart. If you have haproxy already installed, set 'install' to 'false' | true
haproxy-ingress.controller.kind | Type of deployment, DaemonSet or Deployment | DaemonSet
haproxy-ingress.controller.service.type | Kubernetes Service type for haproxy; can be changed to LoadBalancer or NodePort | NodePort
ingress.Enabled | Exposes HTTP and HTTPS routes from outside the cluster to services within the cluster | true
ingress.annotations | Annotations for the ingress controller | ingress.kubernetes.io/ssl-redirect: "false", kubernetes.io/ingress.class: haproxy, ingress.kubernetes.io/rewrite-target: /, ingress.kubernetes.io/affinity: cookie, ingress.kubernetes.io/session-cookie-name: JSESSIONID, ingress.kubernetes.io/session-cookie-strategy: prefix, ingress.kubernetes.io/config-backend:
ingress.path | Route an Ingress to different Services based on the path | /xl-release/
ingress.hosts | DNS name for accessing the UI of Digital.ai Release | example.com
ingress.tls.secretName | Secret that holds the TLS private key and certificate | example-secretsName
ingress.tls.hosts | DNS name for accessing the UI of Digital.ai Release over TLS | example.com
AdminPassword | Admin password for Digital.ai Release | Random 10-character alphanumeric string if not provided
xlrLicense | Content of the xl-release.lic file, converted to base64 | nil
RepositoryKeystore | Content of the keystore.jks file, converted to base64 | nil
KeystorePassphrase | Passphrase for the keystore.jks file | nil
postgresql.install | Install the postgresql chart (single instance). If you have an existing database deployment, set 'install' to 'false' | true
postgresql.postgresqlUsername | PostgreSQL user (creates a non-admin user when postgresqlUsername is not postgres) | postgres
postgresql.postgresqlPassword | PostgreSQL user password | Random 10-character alphanumeric string
postgresql.postgresqlExtendedConf.listenAddresses | TCP/IP address(es) on which the server listens for connections from client applications | *
postgresql.postgresqlExtendedConf.maxConnections | Maximum total connections | 500
postgresql.initdbScriptsSecret | Secret with initdb scripts that contain sensitive information (can be used with initdbScriptsConfigMap or initdbScripts); the value is evaluated as a template | postgresql-init-sql-xlr
postgresql.service.port | PostgreSQL port | 5432
postgresql.persistence.enabled | Enable persistence using a PVC | true
postgresql.persistence.size | PVC storage request for the PostgreSQL volume | 50Gi
postgresql.persistence.existingClaim | Provide an existing PersistentVolumeClaim; the value is evaluated as a template | nil
postgresql.resources | CPU/memory resource requests/limits | Memory: 256Mi, CPU: 250m
postgresql.nodeSelector | Node labels for pod assignment | {}
postgresql.affinity | Affinity labels for pod assignment | {}
postgresql.tolerations | Toleration labels for pod assignment | []
UseExistingDB.Enabled | To use an existing database, set 'postgresql.install' to 'false' and this to 'true' | false
UseExistingDB.XLR_DB_URL | Database URL for xl-release | nil
UseExistingDB.XLR_DB_USER | Database user for xl-release | nil
UseExistingDB.XLR_DB_PASS | Database password for xl-release | nil
UseExistingDB.XLR_REPORT_DB_URL | Database URL for the xl-release report db | nil
UseExistingDB.XLR_REPORT_DB_USER | Database user for the xl-release report db | nil
UseExistingDB.XLR_REPORT_DB_PASS | Database password for the xl-release report db | nil
rabbitmq-ha.install | Install the rabbitmq chart. If you have an existing message queue deployment, set 'install' to 'false' | true
rabbitmq-ha.rabbitmqUsername | RabbitMQ application username | guest
rabbitmq-ha.rabbitmqPassword | RabbitMQ application password | Random 24-character alphanumeric string
rabbitmq-ha.rabbitmqErlangCookie | Erlang cookie | RELEASERABBITMQCLUSTER
rabbitmq-ha.rabbitmqMemoryHighWatermark | Memory high watermark | 500MB
rabbitmq-ha.rabbitmqNodePort | Node port | 5672
rabbitmq-ha.extraPlugins | Additional plugins to add to the default configmap | rabbitmq_shovel, rabbitmq_shovel_management, rabbitmq_federation, rabbitmq_federation_management, rabbitmq_amqp1_0, rabbitmq_management
rabbitmq-ha.replicaCount | Number of replicas | 3
rabbitmq-ha.rbac.create | If true, create and use RBAC resources | true
rabbitmq-ha.service.type | Type of service to create | ClusterIP
rabbitmq-ha.persistentVolume.enabled | If true, persistent volume claims are created | true
rabbitmq-ha.persistentVolume.size | Persistent volume size | 20Gi
rabbitmq-ha.persistentVolume.annotations | Persistent volume annotations | {}
rabbitmq-ha.persistentVolume.resources | CPU/memory resource requests/limits | {}
rabbitmq-ha.definitions.policies | HA policies to add to definitions.json | {"name": "ha-all", "pattern": ".*", "vhost": "/", "definition": {"ha-mode": "all", "ha-sync-mode": "automatic", "ha-sync-batch-size": 1}}
rabbitmq-ha.definitions.globalParameters | Pre-configured global parameters | {"name": "cluster_name", "value": ""}
rabbitmq-ha.prometheus.operator.enabled | Enable the Prometheus Operator | false
UseExistingMQ.Enabled | To use an existing message queue, set 'rabbitmq-ha.install' to 'false' and this to 'true' | false
UseExistingMQ.XLR_TASK_QUEUE_USERNAME | Username for the xl-release task queue | nil
UseExistingMQ.XLR_TASK_QUEUE_PASSWORD | Password for the xl-release task queue | nil
UseExistingMQ.XLR_TASK_QUEUE_NAME | Name of the xl-release task queue | nil
UseExistingMQ.XLR_TASK_QUEUE_URL | URL for the xl-release task queue | nil
resources | CPU/memory resource requests/limits for Digital.ai Release; change as needed | nil
HealthProbes | Enable health probes | true
HealthProbesLivenessTimeout | Delay before the liveness probe is initiated | 90
HealthProbesReadinessTimeout | Delay before the readiness probe is initiated | 90
HealthProbeFailureThreshold | Minimum consecutive failures for the probe to be considered failed after having succeeded | 12
HealthPeriodScans | How often to perform the probe | 10
nodeSelector | Node labels for pod assignment | {}
tolerations | Toleration labels for pod assignment | []
affinity | Affinity labels for pod assignment | {}
Persistence.Enabled | Enable persistence using a PVC | true
Persistence.StorageClass | PVC storage class for the volume | nil
Persistence.Annotations | Annotations for the PVC | {}
Persistence.AccessMode | PVC access mode for the volume | ReadWriteOnce
Persistence.Size | PVC storage request for the volume; must be increased for production-grade setups | 5Gi

Upgrading the Digital.ai Release Helm Chart

To upgrade, update the ImageTag parameter to the desired version. For the list of available image tags for Digital.ai Release, refer to Release_tags. Upgrades use a rolling update strategy. To upgrade the chart with the release name xlr-production, execute the command below:

helm upgrade xlr-production xl-release-kubernetes-helm-chart/
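
Parameters can also be overridden during an upgrade with --set; a sketch, using an illustrative tag value:

helm upgrade xlr-production xl-release-kubernetes-helm-chart/ --set ImageTag=9.8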

Note: upgrading custom plugins and database drivers is currently not supported. To upgrade custom plugins and database drivers, build a custom Docker image of Digital.ai Release containing the required files. See the adding custom plugins section in the Digital.ai (formerly XebiaLabs) official documentation.

Existing or External Databases

You can use an external PostgreSQL database with Digital.ai Release by configuring values.yaml accordingly. To use an existing database, follow these steps:

  • Change postgresql.install to false
  • UseExistingDB.Enabled: true
  • UseExistingDB.XLR_DB_URL: jdbc:postgresql://<postgres-service-name>.<namespace>.svc.cluster.local:5432/<xlr-database-name>
  • UseExistingDB.XLR_DB_USER: Database user for xl-release
  • UseExistingDB.XLR_DB_PASS: Database password for xl-release
  • UseExistingDB.XLR_REPORT_DB_URL: jdbc:postgresql://<postgres-service-name>.<namespace>.svc.cluster.local:5432/<xlr-report-database-name>
  • UseExistingDB.XLR_REPORT_DB_USER: Database user for the xl-release report db
  • UseExistingDB.XLR_REPORT_DB_PASS: Database password for the xl-release report db

Example:

# Passing a custom PostgreSQL to XL-Release
UseExistingDB:
  Enabled: true
  # If you want to use existing database, change the value to "true".
  # Uncomment the following lines and provide the values.
  XLR_DB_URL: jdbc:postgresql://xlr-production-postgresql.default.svc.cluster.local:5432/xlr-db
  XLR_DB_USER: xlr
  XLR_DB_PASS: xlr
  XLR_REPORT_DB_URL: jdbc:postgresql://xlr-production-postgresql.default.svc.cluster.local:5432/xlr-report-db
  XLR_REPORT_DB_USER: xlr-report
  XLR_REPORT_DB_PASS: xlr-report

Note: your database instance may be running outside the cluster; configure the parameters accordingly.

Existing or External Messaging Queue

If you plan to use an existing messaging queue, follow these steps to configure values.yaml:

  • Change rabbitmq-ha.install to false
  • UseExistingMQ.Enabled: true
  • UseExistingMQ.XLR_TASK_QUEUE_USERNAME: Username for xl-release task queue
  • UseExistingMQ.XLR_TASK_QUEUE_PASSWORD: Password for xl-release task queue
  • UseExistingMQ.XLR_TASK_QUEUE_NAME: Queue Name for xl-release
  • UseExistingMQ.XLR_TASK_QUEUE_URL: amqp://<rabbitmq-service-name>.<namespace>.svc.cluster.local:5672

Example:
# Passing a custom RabbitMQ to XL-Release
UseExistingMQ:
  Enabled: true
  # If you want to use an existing Message Queue, change 'rabbitmq-ha.install' to 'false'.
  # Set 'UseExistingMQ.Enabled' to 'true'. Uncomment the following lines and provide the values.
  XLR_TASK_QUEUE_USERNAME: guest
  XLR_TASK_QUEUE_PASSWORD: guest
  XLR_TASK_QUEUE_NAME: xlr-task-queue
  XLR_TASK_QUEUE_URL: amqp://xlr-production-rabbitmq-ha.default.svc.cluster.local:5672

Note: your RabbitMQ instance may be running outside the cluster; configure the parameters accordingly.

Existing Ingress Controller

You can use an external ingress controller with Digital.ai Release. To use an existing ingress controller, change haproxy-ingress.install to false, as shown in the sketch below.
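
A minimal values.yaml sketch for skipping the bundled controller:

# values.yaml (sketch): reuse an ingress controller already running in the cluster
haproxy-ingress:
  install: false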
