
Load Balancer Health Probe Configuration and Generation


Goals

(Open question: do we need to support pointing a load balancer endpoint directly at a pod, as opposed to a node/VMSS backend? See the PodIP goal below.)

  • Ensure that users have access to the full capability of Azure Load Balancer health probes, so that any LoadBalancer Service scenario in Kubernetes can be deployed, including:
    • Independent per-port health probes
    • externalTrafficPolicy: Local / podPresence health probe
    • Single health probe endpoint used for multiple service ports
    • MultiProtocolLB
    • High Availability Ports mode
    • PodIP support (coming later), which allows load balancers to target individual pods rather than VMSS backends.
  • Generate load balancer health probes that are efficient - reducing duplication where possible
  • Generate load balancer health probes that are as correct as possible for the scenario at hand - HTTP for HTTP, TCP for TCP, etc.
  • Allow for variance between Standard and Basic Azure Load Balancer SKUs
  • Describe how cloud-provider-azure should reconcile changes between the current state of the Azure Load Balancer and the desired state in the configuration.

Scenarios

Single Health Probe for all ports on a single service

This scenario can occur whenever externalTrafficPolicy: Local is set, or when a service uses a single health check to represent the health of multiple ports (such as Istio Ingress Gateway).

In the externalTrafficPolicy: Local scenario, a special podPresence monitor (kube-proxy's health check server) offers an HTTP health probe to determine whether there is a healthy instance of a pod on a specific node. The port is represented in the Service spec as healthCheckNodePort (see the sketch after the list below).

  • A single LoadBalancer service contains between 1 and N service ports, such as HTTPS, HTTP, or even UDP or TCP ports.
  • The health check port for this service port is not the same as the service port nodePort
  • The health check protocol for this service may or may not be the same as the service port appProtocol
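As a minimal sketch of this scenario, the Service below would be served by a single podPresence HTTP probe on healthCheckNodePort rather than one probe per port. The name, selector, ports, and the explicit healthCheckNodePort value are assumptions for illustration; Kubernetes normally allocates the health check port automatically.

apiVersion: v1
kind: Service
metadata:
  name: demo-local-lb                # hypothetical name
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local       # enables the per-node podPresence health check
  healthCheckNodePort: 32000         # assumed value; usually auto-allocated
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 8080
    - name: https
      protocol: TCP
      port: 443
      targetPort: 8443
  selector:
    app: demo                        # hypothetical selector

With this spec, one HTTP probe against healthCheckNodePort can back the Azure load balancing rules for both port 80 and port 443.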

Independent health probes for each service port

One of the most common LoadBalancer service scenarios is basic HTTP/HTTPS ingress using popular ingress controllers like ingress-nginx. In this scenario:

  • A single LoadBalancer service contains multiple ports, such as HTTP and HTTPS
  • Each port is independent of the other, and has its own health check.
  • The listening service (e.g. ingress-nginx) provides a health check per service port (e.g. an HTTP(S) request to <ip>:<port>)
  • The health check is provided by the listening service, not by the podPresence health check (i.e. externalTrafficPolicy: Cluster)

In this circumstance the ideal health probes would be HTTP or HTTPS probes, matching each port's appProtocol:

apiVersion: v1
kind: Service
metadata:
  name: demo-lb
  namespace: ingress-nginx
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: 'true'
spec:
  ports:
    - name: http
      protocol: TCP
      appProtocol: http
      port: 80
      targetPort: http
    - name: https
      protocol: TCP
      appProtocol: https
      port: 443
      targetPort: https
  selector:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: nginx-ingress
    app.kubernetes.io/name: ingress-nginx
  type: LoadBalancer
  sessionAffinity: None
  externalTrafficPolicy: Cluster
  ipFamilies:
    - IPv4
  ipFamilyPolicy: SingleStack
  allocateLoadBalancerNodePorts: true
  internalTrafficPolicy: Cluster
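Given the manifest above, one would expect cloud-provider-azure to generate one probe per service port, with the probe protocol derived from each port's appProtocol. The sketch below is illustrative only, not an exact Azure API payload; the nodePort numbers, request path, interval, and probe count are assumptions.

- name: demo-lb-http-probe
  protocol: Http
  port: 31080            # assumed nodePort backing the http service port
  requestPath: /         # assumed request path
  intervalInSeconds: 5
  numberOfProbes: 2
- name: demo-lb-https-probe
  protocol: Https
  port: 31443            # assumed nodePort backing the https service port
  requestPath: /
  intervalInSeconds: 5
  numberOfProbes: 2

Note that Https probes are only available on the Standard SKU; the Basic SKU supports only TCP and HTTP probes, which is part of the SKU variance called out in the goals.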

Mixed Scenario - Some ports share a health check, others share a different health check

work in progress...

Design

  • The system should select the most specific, correct health probe available for the load balancer rule for the port in question, as sketched below.
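For example, following the appendix note that the podPresence check takes precedence over user-defined probe configuration, the Service sketch below would still get the podPresence probe on healthCheckNodePort even though it carries a custom probe annotation. The annotation name is an assumption for illustration (the exact annotation scheme is part of this design), and the name, selector, and request path are hypothetical.

apiVersion: v1
kind: Service
metadata:
  name: demo-precedence-lb           # hypothetical name
  annotations:
    # Assumed/illustrative annotation for a user-defined probe request path
    service.beta.kubernetes.io/azure-load-balancer-health-probe-request-path: /custom-healthz
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local       # podPresence probe takes precedence over the annotation above
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 8080
  selector:
    app: demo                        # hypothetical selector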

Appendix

	// support podPresence health check when External Traffic Policy is local
	// take precedence over user defined probe configuration
	// healthcheck proxy server serves http requests
	// https://github.com/kubernetes/kubernetes/blob/7c013c3f64db33cf19f38bb2fc8d9182e42b0b7b/pkg/proxy/healthcheck/service_health.go#L236
	// LoadBalancerBackendPoolConfigurationType defines how vms join the load balancer backend pools. Supported values
	// are `nodeIPConfiguration`, `nodeIP` and `podIP`.
	// `nodeIPConfiguration`: vm network interfaces will be attached to the inbound backend pool of the load balancer (default);
	// `nodeIP`: vm private IPs will be attached to the inbound backend pool of the load balancer;
	// `podIP`: pod IPs will be attached to the inbound backend pool of the load balancer (not supported yet).
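For reference, the backend pool configuration described in the comment above is set in the cloud configuration file (azure.json) rather than per Service. A minimal sketch, assuming the JSON key mirrors the Go field name LoadBalancerBackendPoolConfigurationType:

{
  "cloud": "AzurePublicCloud",
  "loadBalancerBackendPoolConfigurationType": "nodeIP"
}

Per the comment, valid values are nodeIPConfiguration (the default), nodeIP, and podIP (not supported yet).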