This chart bootstraps an HAProxy load balancer as a Deployment or DaemonSet on a Kubernetes cluster using the Helm package manager. As opposed to the HAProxy Kubernetes Ingress Controller Chart, HAProxy is installed as a regular application and not as an Ingress Controller.
- Kubernetes 1.17+ (recommended 1.20+)
- Helm 3.6+ (recommended 3.7+)
The quickest way to set up a Kubernetes cluster is with Azure Kubernetes Service, AWS Elastic Kubernetes Service or Google Kubernetes Engine using their respective quick-start guides.
For setting up Kubernetes on other cloud platforms or bare-metal servers, refer to the Kubernetes getting started guide.
Get the latest Helm release.
Once you have Helm installed, add the repo as follows:
helm repo add haproxytech https://haproxytech.github.io/helm-charts
helm repo update
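You can then verify that the repository was added and see which chart versions are available:
helm search repo haproxytech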
To install the chart with Helm v3 as the my-release deployment:
helm install my-release haproxytech/haproxy
NOTE: To install the chart with Helm v2 (legacy Helm), the syntax requires passing the deployment name with the --name parameter:
helm install haproxytech/haproxy \
--name my-release
To auto-generate the release name when installing, use the following:
helm install haproxytech/haproxy \
--generate-name
To install the chart using a private registry for HAProxy (for instance, to use an HAProxy Enterprise image) into a separate namespace prod:
NOTE: Helm v3 requires the namespace to be created beforehand (e.g. with kubectl create namespace prod).
helm install my-haproxy haproxytech/haproxy \
--namespace prod \
--set image.tag=latest \
--set image.repository=myregistry.domain.com/imagename \
--set imageCredentials.registry=myregistry.domain.com \
--set imageCredentials.username=MYUSERNAME \
--set imageCredentials.password=MYPASSWORD
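NOTE: With Helm 3.2 and newer, the namespace can instead be created at install time by adding the --create-namespace flag, for example:
helm install my-haproxy haproxytech/haproxy \
--namespace prod \
--create-namespace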
Alternatively, use a pre-configured (existing) imagePullSecret in the same namespace:
helm install my-ingress haproxytech/haproxy \
--namespace prod \
--set image.tag=SOMETAG \
--set existingImagePullSecret=name-of-existing-image-pull-secret
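If such a secret does not exist yet, it can be created with kubectl (the secret name and credentials below are placeholders):
kubectl create secret docker-registry name-of-existing-image-pull-secret \
--namespace prod \
--docker-server=myregistry.domain.com \
--docker-username=MYUSERNAME \
--docker-password=MYPASSWORD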
NOTE: Enterprise images using the S6 overlay need the default CMD arguments disabled (more about using a YAML configuration file with Helm can be found in a separate paragraph below):
args:
  enabled: false
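The same override can also be passed on the command line with --set args.enabled=false.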
The default controller kind is Deployment, but it is possible to use a DaemonSet as well:
helm install my-haproxy2 haproxytech/haproxy \
--set kind=DaemonSet
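To confirm which kind was deployed, you can query the cluster (assuming the chart labels its resources with the release name, as is common for Helm charts):
kubectl get daemonset -l app.kubernetes.io/instance=my-haproxy2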
The Horizontal Pod Autoscaler (HPA) automatically scales the number of replicas in a Deployment or ReplicationController. Therefore the default replicaCount needs to be unset by setting the corresponding key to null, and autoscaling enabled:
helm install my-haproxy3 haproxytech/haproxy \
--set kind=Deployment \
--set replicaCount=null \
--set autoscaling.enabled=true \
--set autoscaling.targetCPUUtilizationPercentage=80
NOTE: Make sure to look into other tunable values for HPA documented in values.yaml.
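Once installed, the current state of the autoscaler can be checked with kubectl:
kubectl get hpa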
In some environments, such as EKS and GKE, there might be a need to pass service annotations. The syntax can become a little tedious, however:
helm install my-haproxy4 haproxytech/haproxy \
--set kind=DaemonSet \
--set service.type=LoadBalancer \
--set service.annotations."service\.beta\.kubernetes\.io/aws-load-balancer-internal"="0.0.0.0/0" \
--set service.annotations."service\.beta\.kubernetes\.io/aws-load-balancer-cross-zone-load-balancing-enabled"="true"
NOTE: With helm --set, it is necessary to quote the values and to escape dots in the annotation key and commas in the value string.
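For example, a value containing commas (here the AWS load-balancer-source-ranges annotation, used purely for illustration) would be passed as:
--set service.annotations."service\.beta\.kubernetes\.io/load-balancer-source-ranges"="10.0.0.0/8\,192.168.0.0/16"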
As opposed to using many --set invocations, a much simpler approach is to define value overrides in a separate YAML file and specify them when invoking Helm.
The config block also supports using Helm templates to populate dynamic values, e.g. {{ .Release.Name }}.
mylb.yaml:
kind: DaemonSet
config: |
  global
    log stdout format raw local0
    daemon
    maxconn 1024
  defaults
    log global
    timeout client 60s
    timeout connect 60s
    timeout server {{ .Values.global.serverTimeout }}
  frontend fe_main
    bind :80
    default_backend be_main
  backend be_main
    server web1 10.0.0.1:8080 check
    server web2 {{ .Release.Name }}-web:8080 check
service:
  type: LoadBalancer
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
    service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
And invoking Helm becomes (compare to the previous example):
helm install my-haproxy5 -f mylb.yaml haproxytech/haproxy
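The external address assigned by the cloud provider can then be watched for with kubectl:
kubectl get service --watch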
In order to support SSL certificates, for example, you can mount additional volumes from secrets:
mylb.yaml:
service:
  type: LoadBalancer
config: |
  global
    log stdout format raw local0
    daemon
    maxconn 1024
  defaults
    log global
    timeout client 60s
    timeout connect 60s
    timeout server 60s
  frontend fe_main
    mode http
    bind :80
    bind :443 ssl crt /usr/local/etc/ssl/tls.crt
    http-request redirect scheme https code 301 unless { ssl_fc }
    default_backend be_main
  backend be_main
    mode http
    server web1 10.0.0.1:8080 check
mountedSecrets:
  - volumeName: ssl-certificate
    secretName: star-example-com
    mountPath: /usr/local/etc/ssl
The above example assumes that there is a certificate in the tls.crt key of a secret called star-example-com.
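Such a secret can be created with kubectl from a combined PEM file (certificate followed by private key, since HAProxy reads both from the single file referenced by crt); the file name below is only an example:
kubectl create secret generic star-example-com \
--from-file=tls.crt=star-example-com.pem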
In order to load data from other sources (e.g. to preload something inside an init container), you can mount additional volumes to the container:
extraVolumes:
  - name: tls
    emptyDir: {}
  - name: tmp
    emptyDir:
      medium: Memory
extraVolumeMounts:
  - name: tls
    mountPath: /etc/tls
  - name: tmp
    mountPath: /tmp
In order to expose extra data (e.g. node and pod IP addresses) to HAProxy, you can populate extra environment variables on the container:
extraEnvs:
  - name: POD_IP
    valueFrom:
      fieldRef:
        fieldPath: status.podIP
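HAProxy itself can consume such variables, as it resolves environment variables written between quotes in its configuration; a minimal sketch:
config: |
  frontend fe_main
    bind "${POD_IP}:80"
    default_backend be_main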
To be able to bind to privileged ports such as tcp/80 and tcp/443 without root privileges (UID and GID are set to 1000 in the example, as the HAProxy Docker image has UID/GID 1000 reserved for HAProxy), a special workaround is required because the NET_BIND_SERVICE capability is not propagated to the container, so we need to use the initContainers feature as well:
kind: DaemonSet
containerPorts:
  http: 80
  https: 443
  stat: 1024
daemonset:
  useHostNetwork: true
  useHostPort: true
  hostPorts:
    http: 80
    https: 443
    stat: 1024
config: |
  global
    log stdout format raw local0
    maxconn 1024
  defaults
    log global
    timeout client 60s
    timeout connect 60s
    timeout server 60s
  frontend fe_main
    bind :80
    default_backend be_main
  backend be_main
    server web1 127.0.0.1:8080 check
securityContext:
  enabled: true
  runAsUser: 1000
  runAsGroup: 1000
initContainers:
  - name: sysctl
    image: "busybox:musl"
    command:
      - /bin/sh
      - -c
      - sysctl -w net.ipv4.ip_unprivileged_port_start=0
    securityContext:
      privileged: true
To upgrade the my-release deployment:
helm upgrade my-release haproxytech/haproxy
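Value overrides can be applied during an upgrade in the same way as during installation, for example:
helm upgrade my-release -f mylb.yaml haproxytech/haproxy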
To uninstall/delete the my-release deployment:
helm delete my-release
It is possible to generate a set of YAML files for testing/debugging:
helm install my-release haproxytech/haproxy \
--debug \
--dry-run
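Alternatively, helm template renders the chart manifests locally, without requiring a connection to the cluster:
helm template my-release haproxytech/haproxy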
We welcome all contributions. Please refer to the contribution guidelines on how to make a contribution.