
Load Balancer Health Probe Configuration and Generation


Goals

(Open question: do we need to support pointing a load balancer endpoint directly at a pod, as opposed to a node or VMSS backend? See PodIP support below.)

  • Ensure that users have access to the full capability of Azure Load Balancer health probes to deploy any Load Balancer Service scenario in K8s including:
    • Independent per-port health probes
    • externalTrafficPolicy: Local / podPresence health probe
    • Single health probe endpoint used for multiple service ports
    • MultiProtocolLB
    • High Availability Ports mode
    • PodIP support (planned), which will allow load balancers to target individual pods rather than VMSS backends.
  • Generate load balancer health probes that are efficient - reducing duplication where possible
  • Generate load balancer health probes that are as correct as possible for the scenario at hand - HTTP for HTTP, TCP for TCP, etc.
  • Allow for variance between Standard and Basic Azure Load Balancer SKUs
  • Describe how cloud-provider-azure should reconcile changes between the current state of the Azure Load Balancer and the desired state in the configuration.

Definitions

  • Service - A Kubernetes Service Definition (v1.Service)
  • service port - the port definition object in v1.Service.spec.ports
  • health probe port - the desired port for health checks
  • appProtocol - the application protocol hint on a service port (v1.ServicePort.appProtocol), e.g. http, https, or tcp; it informs the choice of probe protocol (see the sketch below)
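
To ground these definitions, here is a minimal sketch of where the relevant fields live on a service port (all values illustrative):

apiVersion: v1
kind: Service
metadata:
  name: demo
spec:
  type: LoadBalancer
  ports:
    - name: https
      protocol: TCP        # L4 protocol of the service port
      appProtocol: https   # application protocol hint; informs the probe protocol
      port: 443
      nodePort: 31443      # allocated by Kubernetes; health probes typically target it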

Scenarios

Single Health Probe for all ports on a single service

Certain services (such as Istio Ingress Gateway) use a single health endpoint for all ports and protocols in a Service entry.

  • A single LoadBalancer service contains between 1 and N service ports, such as HTTPS, HTTP, or even UDP or TCP ports.
  • The health probe port for these service ports is not the same as any service port's nodePort
  • The health probe protocol may or may not be the same as each service port's appProtocol

In this case we expect that:

  • A single health probe is created for this Service.
  • Each load balancing rule created for this service (one per service port) will use that single health probe as its health check, as sketched below.
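
A minimal sketch of the shared-probe scenario, assuming the service-level probe annotations exposed by cloud-provider-azure (the annotation names and the Istio-style status port are illustrative assumptions, not prescriptions of this design):

apiVersion: v1
kind: Service
metadata:
  name: istio-ingressgateway
  annotations:
    # Assumed service-level annotations: one HTTP check shared by every rule
    service.beta.kubernetes.io/azure-load-balancer-health-probe-protocol: 'http'
    service.beta.kubernetes.io/azure-load-balancer-health-probe-request-path: '/healthz/ready'
spec:
  type: LoadBalancer
  ports:
    - name: status-port   # the single health endpoint (Istio convention)
      protocol: TCP
      port: 15021
      targetPort: 15021
    - name: http2
      protocol: TCP
      port: 80
      targetPort: 8080
    - name: https
      protocol: TCP
      port: 443
      targetPort: 8443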

externalTrafficPolicy: Local

When a service has externalTrafficPolicy set to Local, Kubernetes automatically allocates a healthCheckNodePort and kube-proxy serves a special "podPresenceMonitor" HTTP health check endpoint on it. This endpoint is a single check that reports whether a given node currently hosts active pods for the service. It is surfaced in the v1.Service API as healthCheckNodePort and is only available for externalTrafficPolicy: Local.

When externalTrafficPolicy is Local, every service port in the service must use the health probe that points at the podPresenceMonitor, as illustrated below.
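
A sketch of such a service; the single resulting Azure probe would be an HTTP check against healthCheckNodePort (the /healthz request path matches kube-proxy's health check server and is an assumption here):

apiVersion: v1
kind: Service
metadata:
  name: demo-local
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local   # Kubernetes allocates spec.healthCheckNodePort
  selector:
    app: demo
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 8080
# After admission the spec also carries, e.g.:
#   healthCheckNodePort: 32123   (illustrative value)
# Every load balancing rule for this service should share one HTTP probe:
#   port 32123, request path /healthz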

Independent health probes for each service port

One of the most common LoadBalancer service scenarios is basic HTTP/HTTPS ingress using popular ingress controllers like ingress-nginx. In this scenario:

  • A single LoadBalancer service contains between 1 and N service ports, such as HTTP and HTTPS
  • The health probe port for this service port is usually the same as the service port nodePort, but not always.
  • The health probe protocol for this service port is usually the same as the service port appProtocol, but not always.

In this case we expect that:

  • One health probe is created per service port.
  • Each load balancing rule created for this service (one per service port) will use the health probe created for that service port, as in the following example:
apiVersion: v1
kind: Service
metadata:
  name: demo-lb
  namespace: ingress-nginx
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: 'true'
spec:
  ports:
    - name: http
      protocol: TCP
      appProtocol: http
      port: 80
      targetPort: http
    - name: https
      protocol: TCP
      appProtocol: https
      port: 443
      targetPort: https
  selector:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: nginx-ingress
    app.kubernetes.io/name: ingress-nginx
  type: LoadBalancer
  sessionAffinity: None
  externalTrafficPolicy: Cluster
  ipFamilies:
    - IPv4
  ipFamilyPolicy: SingleStack
  allocateLoadBalancerNodePorts: true
  internalTrafficPolicy: Cluster
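
Where a probe port or protocol must differ from the defaults, per-port annotations on the manifest above could express the override; the port_{port}_health-probe_* naming below is an assumption of this sketch:

metadata:
  annotations:
    # Assumed per-port overrides: probe port 443 with an HTTPS check on /readyz
    service.beta.kubernetes.io/port_443_health-probe_protocol: 'https'
    service.beta.kubernetes.io/port_443_health-probe_request-path: '/readyz'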

Mixed Scenario - Some ports share one healthcheck, others share a different healthcheck

In a single service we may have combinations of the above: some service ports share one health probe while others carry their own, as sketched below.
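
A hypothetical mixed service, again using the assumed per-port annotation scheme: ports 8080 and 8443 share one probe against a status port, while port 80 keeps its own probe:

apiVersion: v1
kind: Service
metadata:
  name: demo-mixed
  annotations:
    # Assumed annotations: point both 8080 and 8443 at the same probe port;
    # port 80 has no override, so it gets its own probe on its nodePort
    service.beta.kubernetes.io/port_8080_health-probe_port: '15021'
    service.beta.kubernetes.io/port_8443_health-probe_port: '15021'
spec:
  type: LoadBalancer
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: http
    - name: grpc
      protocol: TCP
      port: 8080
      targetPort: grpc
    - name: https
      protocol: TCP
      port: 8443
      targetPort: https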

Design

Configuring Azure Load Balancer for Kubernetes services involves the following components:

  • Load Balancer - The Azure Load Balancer resource itself, shared across the cluster's LoadBalancer Services.

  • Frontend IP Configuration - The IP address used to access the load balanced Service. There is one Frontend IP per Kubernetes LoadBalancer Service.

  • Load Balancing Rule - Maps a frontend IP and port to a backend pool port, and references the health probe that decides whether a backend instance receives traffic. One rule is created per service port (or a single rule in High Availability Ports mode).

  • Backend Pools - A pointer to a list of Virtual Machines or Virtual Machine Scale Sets that will receive the load balanced traffic.

  • Health Probe - The periodic TCP, HTTP, or HTTPS check the load balancer runs against backend instances; instances that fail the probe stop receiving new connections.

  • The system should select the most specific, correct health probe configuration available for the port in question; per the appendix, the podPresence health check (externalTrafficPolicy: Local) takes precedence over any user-defined probe configuration, as sketched below.
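
As a sketch of "most specific wins", assuming per-port annotations override service-level ones (both annotation families are assumptions of this example):

metadata:
  annotations:
    # Service-level default: applies to every port without a per-port override
    service.beta.kubernetes.io/azure-load-balancer-health-probe-request-path: '/healthz'
    # Per-port override: the most specific setting wins for port 443
    service.beta.kubernetes.io/port_443_health-probe_request-path: '/readyz'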

Appendix

Relevant comments from the cloud-provider-azure source:

	// support podPresence health check when External Traffic Policy is local
	// take precedence over user defined probe configuration
	// healthcheck proxy server serves http requests
	// https://github.com/kubernetes/kubernetes/blob/7c013c3f64db33cf19f38bb2fc8d9182e42b0b7b/pkg/proxy/healthcheck/service_health.go#L236
	// LoadBalancerBackendPoolConfigurationType defines how vms join the load balancer backend pools. Supported values
	// are `nodeIPConfiguration`, `nodeIP` and `podIP`.
	// `nodeIPConfiguration`: vm network interfaces will be attached to the inbound backend pool of the load balancer (default);
	// `nodeIP`: vm private IPs will be attached to the inbound backend pool of the load balancer;
	// `podIP`: pod IPs will be attached to the inbound backend pool of the load balancer (not supported yet).