NE-822 Don't scale route weight on single service routes #377
Conversation
Force-pushed 6b7042e to fc7e579 (compare)
/retest
cluster bootstrap failed
/test e2e-upgrade
Force-pushed fc7e579 to 76028a0 (compare)
/retest
Force-pushed 76028a0 to e24ca3a (compare)
/hold
I think we should wait for the perf & scale test.
/retest
pkg/router/template/router.go
Outdated
for key := range serviceUnits {
	serviceUnitNames[key] = 1
}
What if the service unit has 0 endpoints? We should check for that case:
Suggested change:
 for key := range serviceUnits {
-	serviceUnitNames[key] = 1
+	if r.numberOfEndpoints(key) > 0 {
+		serviceUnitNames[key] = 1
+	}
 }
Granted, omitting the r.numberOfEndpoints(key) > 0
check might not change the ultimate result from the config template:
{{- range $serviceUnitName, $weight := $cfg.ServiceUnitNames }}
{{- if ge $weight 0 }}{{/* weight=0 is reasonable to keep existing connections to backends with cookies as we can see the HTTP headers */}}
{{- with $serviceUnit := index $.ServiceUnits $serviceUnitName }}
{{- range $idx, $endpoint := processEndpointsForAlias $cfg $serviceUnit (env "ROUTER_BACKEND_PROCESS_ENDPOINTS" "") }}
{{/* [actual HAProxy config stuff here] */}}
{{- end }}{{/* end range processEndpointsForAlias */}}
{{- end }}{{/* end get serviceUnit from its name */}}
{{- end }}{{/* end range over serviceUnitNames */}}
In the template, the effect is the same whether range $serviceUnitName, $weight := $cfg.ServiceUnitNames
iterates 0 times or whether range $idx, $endpoint := processEndpointsForAlias $cfg $serviceUnit (env "ROUTER_BACKEND_PROCESS_ENDPOINTS" "")
iterates 0 times. However, it could cause problems for the dynamic config manager, which has logic like this:
// As the endpoints have changed, recalculate the weights.
newWeights := r.calculateServiceWeights(cfg.ServiceUnits)
// Get the weight for this service unit.
weight, ok := newWeights[id]
if !ok {
weight = 0
}
Anyway, from a strict correctness perspective, I believe calculateServiceWeights
needs the r.numberOfEndpoints(key) > 0
check.
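Put together, the suggested fix can be sketched as follows. The types, the numberOfEndpoints helper, and the function signature here are simplified stand-ins for the router's actual code, not its real API:

```go
package main

import "fmt"

// ServiceUnitKey and Endpoint are simplified stand-ins for the router's types.
type ServiceUnitKey string
type Endpoint struct{ Addr string }

// templateRouter is a hypothetical slice of the template router's state.
type templateRouter struct {
	endpoints map[ServiceUnitKey][]Endpoint
}

func (r *templateRouter) numberOfEndpoints(key ServiceUnitKey) int {
	return len(r.endpoints[key])
}

// calculateServiceWeights returns the weight for each service unit, omitting
// service units that have no endpoints, per the review suggestion above.
func (r *templateRouter) calculateServiceWeights(serviceWeights map[ServiceUnitKey]int32) map[ServiceUnitKey]int32 {
	weights := map[ServiceUnitKey]int32{}
	for key, weight := range serviceWeights {
		if r.numberOfEndpoints(key) > 0 {
			weights[key] = weight
		}
	}
	return weights
}

func main() {
	r := &templateRouter{endpoints: map[ServiceUnitKey][]Endpoint{
		"test/active": {{Addr: "10.0.0.1:8080"}},
		"test/empty":  {},
	}}
	w := r.calculateServiceWeights(map[ServiceUnitKey]int32{
		"test/active": 100,
		"test/empty":  100,
	})
	// The empty service unit is dropped from the returned weight map.
	fmt.Println(w) // map[test/active:100]
}
```

This mirrors the "service with no endpoint" test case requested below: a unit with a configured weight but zero endpoints produces an empty weight map.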
Please add a unit test case to ensure we don't regress:
--- a/pkg/router/template/router_test.go
+++ b/pkg/router/template/router_test.go
@@ -873,6 +873,16 @@ func TestCalculateServiceWeights(t *testing.T) {
serviceWeights map[ServiceUnitKey]int32
expectedWeights map[ServiceUnitKey]int32
}{
+ {
+ name: "service with no endpoint",
+ serviceUnits: map[ServiceUnitKey][]Endpoint{
+ suKey1: {},
+ },
+ serviceWeights: map[ServiceUnitKey]int32{
+ suKey1: 100,
+ },
+ expectedWeights: map[ServiceUnitKey]int32{},
+ },
{
name: "equally weighted services with same number of endpoints",
serviceUnits: map[ServiceUnitKey][]Endpoint{
Good point, I didn't think about a single service with no endpoints exposing that datapath. Agreed it doesn't functionally change anything, but it keeps our code clean for the future.
Done.
Force-pushed e24ca3a to 9656da7 (compare)
/retest
/lgtm
/retest-required
Please review the full test history for this PR and help us cut down flakes.
/label qe-approved
/retest
/skip
/label px-approved
@gcs278: The following test failed, say /retest to rerun all failed tests.
Full PR test history. Your PR dashboard. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Change to not scale the weight to 256 if there is only one service for the route. More information can be found in NE-709: Impact of Server Weight on Memory Allocation.
In a nutshell, scaling the weight to 256 is redundant for single-service routes: all servers in the HAProxy backend always have the same weight, so weight 1 == weight 256. Reducing the weight reduces HAProxy's static memory allocation.
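As an illustration of the idea, the scaling decision amounts to the following. The function name and signature are hypothetical, not the router's actual code, and weight-0 handling is omitted for brevity:

```go
package main

import "fmt"

// scaleWeight sketches the change described above. With multiple services,
// per-server weights are scaled so the highest-weighted service maps to 256,
// preserving the configured ratios. With a single service, every server in
// the HAProxy backend gets the same weight anyway, so weight 1 behaves
// identically to weight 256 and the smaller value saves static memory.
func scaleWeight(serviceCount int, weight, maxWeight int32) int32 {
	if serviceCount <= 1 {
		return 1 // no ratio to preserve on a single-service route
	}
	return weight * 256 / maxWeight
}

func main() {
	fmt.Println(scaleWeight(1, 100, 100)) // single-service route: 1
	fmt.Println(scaleWeight(2, 50, 100))  // multi-service route: 128
	fmt.Println(scaleWeight(2, 100, 100)) // highest-weighted service: 256
}
```

With two services at weights 50 and 100, the 2:1 ratio survives scaling (128 vs 256); with one service, the ratio is trivially 1:1 at any weight, so weight 1 suffices.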