Description
Consider the following scenario with router 3.11:
- a service defines multiple ports (in the example below: `8080` and `8443`)
- a route for that service sets `targetPort` so one of the service ports is preferred (in the example below: `8443`)
- the dynamic configuration manager is enabled
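For concreteness, here is a minimal sketch of the two objects in this scenario, built from `k8s.io/api` and `github.com/openshift/api` types (the port names and the passthrough termination are illustrative; the actual test objects may differ):

```go
package main

import (
	"fmt"

	routev1 "github.com/openshift/api/route/v1"
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	// A service exposing two ports...
	svc := corev1.Service{
		Spec: corev1.ServiceSpec{
			Ports: []corev1.ServicePort{
				{Name: "http", Port: 8080},
				{Name: "https", Port: 8443},
			},
		},
	}

	// ...and a passthrough route that prefers one of them via targetPort.
	route := routev1.Route{
		Spec: routev1.RouteSpec{
			TLS:  &routev1.TLSConfig{Termination: routev1.TLSTerminationPassthrough},
			Port: &routev1.RoutePort{TargetPort: intstr.FromInt(8443)},
		},
	}

	fmt.Printf("service ports: %d, route targetPort: %d\n",
		len(svc.Spec.Ports), route.Spec.Port.TargetPort.IntValue())
}
```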
The initial HAProxy configuration after a full configuration reload is correct: it only includes the one port identified by `targetPort`. This can be verified using the HAProxy admin socket (starting with a single pod backing the service):
```
> show servers state be_tcp:test-haproxy-router:passthrough
# be_id be_name srv_id srv_name srv_addr srv_op_state srv_admin_state srv_uweight srv_iweight srv_time_since_last_change srv_check_status srv_check_result srv_check_health srv_check_state srv_agent_state bk_f_forced_id srv_f_forced_id srv_fqdn srv_port
44 be_tcp:test-haproxy-router:passthrough 1 pod:server-ssl-1-5m5n8:server-ssl:10.76.32.172:8443 10.76.32.172 2 0 256 256 27 6 3 4 6 0 0 0 - 8443
44 be_tcp:test-haproxy-router:passthrough 2 _dynamic-pod-1 172.4.0.4 0 5 1 1 27 1 0 0 14 0 0 0 - 8765
44 be_tcp:test-haproxy-router:passthrough 3 _dynamic-pod-2 172.4.0.4 0 5 1 1 27 1 0 0 14 0 0 0 - 8765
44 be_tcp:test-haproxy-router:passthrough 4 _dynamic-pod-3 172.4.0.4 0 5 1 1 27 1 0 0 14 0 0 0 - 8765
44 be_tcp:test-haproxy-router:passthrough 5 _dynamic-pod-4 172.4.0.4 0 5 1 1 27 1 0 0 14 0 0 0 - 8765
44 be_tcp:test-haproxy-router:passthrough 6 _dynamic-pod-5 172.4.0.4 0 5 1 1 27 1 0 0 14 0 0 0 - 8765
```
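The same query can also be issued programmatically; here is a minimal Go sketch, assuming the admin socket is exposed at `/var/lib/haproxy/run/haproxy.sock` (the path and backend name are taken from this environment; adjust for your deployment):

```go
package main

import (
	"fmt"
	"io"
	"net"
)

func main() {
	// Socket path is an assumption; match it to your router deployment.
	conn, err := net.Dial("unix", "/var/lib/haproxy/run/haproxy.sock")
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	// HAProxy answers one command per connection and then closes it.
	fmt.Fprintln(conn, "show servers state be_tcp:test-haproxy-router:passthrough")
	out, err := io.ReadAll(conn)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}
```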
Let's now scale the `DeploymentConfig` backing the multi-port service to 2 pods.
Expected behavior
The dynamic list of HAProxy backend servers is updated with a single new endpoint pointing to the new pod on the route's `targetPort`.
Actual behavior
The dynamic list of HAProxy backend servers is updated with a server for each port of the service, ignoring `targetPort`:
```
> show servers state be_tcp:test-haproxy-router:passthrough
# be_id be_name srv_id srv_name srv_addr srv_op_state srv_admin_state srv_uweight srv_iweight srv_time_since_last_change srv_check_status srv_check_result srv_check_health srv_check_state srv_agent_state bk_f_forced_id srv_f_forced_id srv_fqdn srv_port
44 be_tcp:test-haproxy-router:passthrough 1 pod:server-ssl-1-5m5n8:server-ssl:10.76.32.172:8443 10.76.32.172 2 0 256 256 199 6 3 4 6 0 0 0 - 8443
44 be_tcp:test-haproxy-router:passthrough 2 _dynamic-pod-1 10.76.19.57 0 4 1 1 2 8 2 0 6 0 0 0 - 8443
44 be_tcp:test-haproxy-router:passthrough 3 _dynamic-pod-2 10.76.19.57 0 4 1 1 2 8 2 0 6 0 0 0 - 8080
44 be_tcp:test-haproxy-router:passthrough 4 _dynamic-pod-3 172.4.0.4 0 5 1 1 199 1 0 0 14 0 0 0 - 8765
44 be_tcp:test-haproxy-router:passthrough 5 _dynamic-pod-4 172.4.0.4 0 5 1 1 199 1 0 0 14 0 0 0 - 8765
44 be_tcp:test-haproxy-router:passthrough 6 _dynamic-pod-5 172.4.0.4 0 5 1 1 199 1 0 0 14 0 0 0 - 8765
```
It looks like this is due to the dynamic router directly using `service.EndpointTable` here and here instead of filtering ports with the template's `endpointsForAlias` method.
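To illustrate, here is a minimal sketch of the filtering step that appears to be missing on the dynamic path (the types and the helper name are illustrative stand-ins, not the router's actual API):

```go
package main

import "fmt"

// Endpoint is a simplified stand-in for the router's endpoint type; the
// real type carries more fields (IDs, names, and so on).
type Endpoint struct {
	IP   string
	Port int32
}

// filterByTargetPort keeps only the endpoints whose port matches the
// route's preferred targetPort, mirroring the filtering that
// endpointsForAlias performs during a full template reload. A zero
// targetPort means "no preference", so everything is kept.
func filterByTargetPort(endpoints []Endpoint, targetPort int32) []Endpoint {
	if targetPort == 0 {
		return endpoints
	}
	filtered := make([]Endpoint, 0, len(endpoints))
	for _, ep := range endpoints {
		if ep.Port == targetPort {
			filtered = append(filtered, ep)
		}
	}
	return filtered
}

func main() {
	// The two per-port endpoints the new pod contributes in the example.
	eps := []Endpoint{
		{IP: "10.76.19.57", Port: 8443},
		{IP: "10.76.19.57", Port: 8080},
	}
	// Using the raw endpoint list yields a dynamic server per port (the
	// bug); filtering first keeps only the 8443 endpoint, as expected.
	fmt.Println(filterByTargetPort(eps, 8443)) // [{10.76.19.57 8443}]
}
```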
I am testing a fix and will submit a PR shortly.