Implement support of horizontal scaling rules for Autoscaling #27119
Conversation
Branch force-pushed from `8e442c1` to `2c43fcd` (Compare)
Test changes on VM

Use this command from test-infra-definitions to manually test this PR's changes on a VM: `inv create-vm --pipeline-id=38457337 --os-family=ubuntu`

Note: This applies to commit 862d440
Regression Detector

Regression Detector Results

Run ID: 6f4f6688-ca96-4a68-a494-48aed8829fe8
Metrics dashboard · Target profiles
Baseline: c37797d

Performance changes are noted in the perf column of each table:
No significant changes in experiment optimization goals

Confidence level: 90.00%

There were no significant changes in experiment optimization goals at this confidence level and effect size tolerance.
| perf | experiment | goal | Δ mean % | Δ mean % CI | links |
|---|---|---|---|---|---|
| ➖ | basic_py_check | % cpu utilization | +3.08 | [+0.40, +5.76] | Logs |
| ➖ | tcp_syslog_to_blackhole | ingress throughput | +2.34 | [-10.56, +15.23] | Logs |
| ➖ | uds_dogstatsd_to_api_cpu | % cpu utilization | +0.81 | [-0.08, +1.70] | Logs |
| ➖ | file_tree | memory utilization | +0.44 | [+0.39, +0.49] | Logs |
| ➖ | uds_dogstatsd_to_api | ingress throughput | -0.00 | [-0.00, +0.00] | Logs |
| ➖ | tcp_dd_logs_filter_exclude | ingress throughput | -0.00 | [-0.01, +0.01] | Logs |
| ➖ | idle | memory utilization | -0.38 | [-0.42, -0.35] | Logs |
| ➖ | pycheck_1000_100byte_tags | % cpu utilization | -0.50 | [-5.02, +4.01] | Logs |
| ➖ | otel_to_otel_logs | ingress throughput | -0.87 | [-1.68, -0.07] | Logs |
Explanation
A regression test is an A/B test of target performance in a repeatable rig, where "performance" is measured as "comparison variant minus baseline variant" for an optimization goal (e.g., ingress throughput). Due to intrinsic variability in measuring that goal, we can only estimate its mean value for each experiment; we report uncertainty in that value as a 90.00% confidence interval denoted "Δ mean % CI".
For each experiment, we decide whether a change in performance is a "regression" -- a change worth investigating further -- if all of the following criteria are true:
- Its estimated |Δ mean %| ≥ 5.00%, indicating the change is big enough to merit a closer look.
- Its 90.00% confidence interval "Δ mean % CI" does not contain zero, indicating that if our statistical model is accurate, there is at least a 90.00% chance there is a difference in performance between baseline and comparison variants.
- Its configuration does not mark it "erratic".
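A rough sketch of that decision rule (illustrative only, not the detector's actual code):

```go
package main

import (
	"fmt"
	"math"
)

// Experiment holds the columns reported in the tables above.
type Experiment struct {
	DeltaMeanPct  float64 // Δ mean %
	CILow, CIHigh float64 // bounds of the 90.00% confidence interval
	Erratic       bool    // marked "erratic" in its configuration
}

// isRegression reports whether a change is flagged as worth investigating.
func isRegression(e Experiment) bool {
	bigEnough := math.Abs(e.DeltaMeanPct) >= 5.0 // |Δ mean %| ≥ 5.00%
	ciExcludesZero := e.CILow > 0 || e.CIHigh < 0
	return bigEnough && ciExcludesZero && !e.Erratic
}

func main() {
	// basic_py_check from the table: +3.08 [+0.40, +5.76] — not flagged,
	// because |Δ mean %| is below the 5.00% threshold.
	fmt.Println(isRegression(Experiment{DeltaMeanPct: 3.08, CILow: 0.40, CIHigh: 5.76}))
}
```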
```go
@@ -55,15 +57,24 @@ func newHorizontalReconciler(clock clock.Clock, eventRecorder record.EventRecorder
}

func (hr *horizontalController) sync(ctx context.Context, podAutoscaler *datadoghq.DatadogPodAutoscaler, autoscalerInternal *model.PodAutoscalerInternal) (autoscaling.ProcessResult, error) {
	// If we have no Spec, nothing to do
	if autoscalerInternal.Spec() == nil {
		return autoscaling.NoRequeue, nil
```
It would be a silent failure. Is it something we expect? Should we return an error?
It's not really possible in practice, or only temporarily. In any case, once the `Spec` is updated, it will be requeued.
```go
) (*datadoghq.DatadogPodAutoscalerHorizontalAction, time.Duration, error) {
	// Check if scaling has been explicitly disabled
	if currentDesiredReplicas == 0 {
		return nil, 0, errors.New("scaling disabled as current replicas is set to 0")
```
I am curious about this case: can it happen unintentionally? Should we retry later?
We always retry all autoscalers every 5m if nothing happens in the meantime.
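For context, a minimal sketch of that fallback behavior (names and structure are illustrative, not the PR's actual implementation): every autoscaler is periodically re-enqueued so it is re-evaluated even if nothing changed in the meantime.

```go
package main

import (
	"context"
	"fmt"
	"time"
)

// resyncPeriod mirrors the 5-minute fallback mentioned above.
const resyncPeriod = 5 * time.Minute

// startPeriodicResync re-enqueues every known autoscaler ID on a fixed
// interval so each one is re-processed even when no event arrived.
func startPeriodicResync(ctx context.Context, queue chan<- string, listIDs func() []string) {
	ticker := time.NewTicker(resyncPeriod)
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			return
		case <-ticker.C:
			for _, id := range listIDs() {
				queue <- id // a worker picks it up and runs sync() again
			}
		}
	}
}

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	defer cancel()
	queue := make(chan string, 16)
	go startPeriodicResync(ctx, queue, func() []string { return []string{"default/my-autoscaler"} })
	fmt.Println("resync loop started; workers would drain the queue")
}
```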
```diff
-	allowed, reason := isScalingAllowed(autoscalerInternal, source, scaleDirection)
+	// Checking if scaling constraints allow this scaling
+	autoscalerSpec := autoscalerInternal.Spec()
+	allowed, reason := isScalingAllowed(autoscalerSpec, source, scaleDirection)
```
Suppose we have a desired value that is less than `minReplicas` and upscaling is disabled; it means that the desired replicas will stay below `minReplicas`.

IMO we should always honor the boundaries, even if the policy forbids upscaling or downscaling. As a user, I would definitely expect that. What do you think?
I think it makes sense to have it this way. If you disable downscaling for an incident and manually scale above `maxReplicas`, I would not want the controller to downscale it anyway.
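A minimal sketch of the precedence being discussed (function and field names are illustrative, not the PR's actual API): a policy that disables a scaling direction wins over min/max clamping, so a manual scale outside the boundaries is left untouched.

```go
// Hypothetical illustration: the boundaries are applied to the recommendation,
// but a disabled direction prevents any action, even one that would bring the
// replica count back inside min/max.
func desiredAfterPolicy(current, desired, minReplicas, maxReplicas int32, upscaleAllowed, downscaleAllowed bool) int32 {
	// Clamp the recommendation to the configured boundaries first.
	if desired < minReplicas {
		desired = minReplicas
	}
	if desired > maxReplicas {
		desired = maxReplicas
	}
	// Then honor the policy: if the required direction is disabled, keep current.
	if desired > current && !upscaleAllowed {
		return current
	}
	if desired < current && !downscaleAllowed {
		return current
	}
	return desired
}
```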
```go
	earliestTimestamp := currentTime.Add(-periodDuration)

	for _, event := range events {
		if event.Time.Time.After(earliestTimestamp) {
			if numEvents == 0 {
				// Record when the oldest event will be out of the window
				expireIn = event.Time.Sub(earliestTimestamp)
			}

			numEvents++
			diff := event.ToReplicas - event.FromReplicas
			if diff > 0 {
				added += diff
			} else {
				removed += -diff
			}
		}
	}
```
Suggested change:

```go
earliestTimestamp := currentTime.Add(-periodDuration)
events = lo.Filter(events, func(event datadoghq.DatadogPodAutoscalerHorizontalAction, _ int) bool {
	return event.Time.Time.After(earliestTimestamp)
})
numEvents = int32(len(events))
for i, event := range events {
	if i == 0 {
		// Record when the oldest event will be out of the window
		expireIn = event.Time.Sub(earliestTimestamp)
	}
	diff := event.ToReplicas - event.FromReplicas
	if diff > 0 {
		added += diff
	} else {
		removed -= diff
	}
}
```
But that loops over the matching events twice instead of once. I'd rather keep the current simple one-pass loop.
```go
	if targetDesiredReplicas > maxReplicasFromRules {
		return maxReplicasFromRules, minExpireIn, fmt.Sprintf("desired replica count limited to %d (originally %d) due to scaling policy", maxReplicasFromRules, targetDesiredReplicas)
	}
	return targetDesiredReplicas, 0, ""
```
I think we should re-check the boundaries here. If we look at the past N events and at event N+1 the boundaries changed, then the Nth value can be outside the boundaries and will be taken into account in `maxReplicasFromRules`.
`targetDesiredReplicas` is already bounded to `max/minReplicas` before calling `scaleUp/Down`. Rules do not account for `min/max`, so the result from `maxReplicasFromRules` is the same regardless of `min/max`.
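For illustration, a sketch of the ordering described above (names are hypothetical, not the PR's actual functions): the recommendation is clamped to the boundaries first, and only then passed through the rule-based limits.

```go
// Hypothetical ordering: boundaries are applied before rule limits,
// so scaleUp/scaleDown-style code never sees a value outside min/max.
func computeDesired(recommended, minReplicas, maxReplicas int32) int32 {
	// Bound the recommendation first (Go 1.21+ min/max builtins).
	bounded := min(max(recommended, minReplicas), maxReplicas)
	// Rule limits (e.g. maxReplicasFromRules) then operate on the bounded
	// value, which is why they do not need to re-check min/max themselves.
	return applyScalingRules(bounded)
}

// applyScalingRules stands in for the rule-based limiting step; elided here.
func applyScalingRules(target int32) int32 { return target }
```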
```go
	if targetDesiredReplicas > maxReplicasFromRules {
		return maxReplicasFromRules, minExpireIn, fmt.Sprintf("desired replica count limited to %d (originally %d) due to scaling policy", maxReplicasFromRules, targetDesiredReplicas)
	}
	return targetDesiredReplicas, 0, ""
```
Why do we return `0` here? Shouldn't we re-evaluate after `minExpireIn` in case the next event limits us?
We don't requeue after a specific delay as we have already fully reached the target. The next impact will be when we receive a new recommendation, which will trigger a requeue anyway.
pkg/clusteragent/autoscaling/workload/model/pod_autoscaler_test_utils.go (resolved discussion)
```go
func TestAddHorizontalAction(t *testing.T) {
	testTime := time.Now()

	// Test no retention, should move back to keep a single action
	var horizontalEventsRetention time.Duration
	horizontalLastActions := []datadoghq.DatadogPodAutoscalerHorizontalAction{
		{
			Time: metav1.Time{Time: testTime.Add(-10 * time.Minute)},
		},
		{
			Time: metav1.Time{Time: testTime.Add(-8 * time.Minute)},
		},
	}
	addedAction1 := &datadoghq.DatadogPodAutoscalerHorizontalAction{
		Time: metav1.Time{Time: testTime},
	}
	horizontalLastActions = addHorizontalAction(testTime, horizontalEventsRetention, horizontalLastActions, addedAction1)
	assert.Equal(t, []datadoghq.DatadogPodAutoscalerHorizontalAction{*addedAction1}, horizontalLastActions)

	// Add another event, should still keep one
	horizontalLastActions = addHorizontalAction(testTime, horizontalEventsRetention, horizontalLastActions, addedAction1)
	assert.Equal(t, []datadoghq.DatadogPodAutoscalerHorizontalAction{*addedAction1}, horizontalLastActions)

	// 15 minutes retention, should keep everything
	horizontalEventsRetention = 15 * time.Minute
	horizontalLastActions = []datadoghq.DatadogPodAutoscalerHorizontalAction{
		{
			Time: metav1.Time{Time: testTime.Add(-10 * time.Minute)},
		},
		{
			Time: metav1.Time{Time: testTime.Add(-8 * time.Minute)},
		},
	}
	// Adding two fake events
	horizontalLastActions = addHorizontalAction(testTime, horizontalEventsRetention, horizontalLastActions, addedAction1)
	horizontalLastActions = addHorizontalAction(testTime, horizontalEventsRetention, horizontalLastActions, addedAction1)
	assert.Equal(t, []datadoghq.DatadogPodAutoscalerHorizontalAction{
		{
			Time: metav1.Time{Time: testTime.Add(-10 * time.Minute)},
		},
		{
			Time: metav1.Time{Time: testTime.Add(-8 * time.Minute)},
		},
		*addedAction1,
		*addedAction1,
	}, horizontalLastActions)

	// Moving time forward, should keep only the last two events
	testTime = testTime.Add(10 * time.Minute)
	addedAction2 := &datadoghq.DatadogPodAutoscalerHorizontalAction{
		Time: metav1.Time{Time: testTime},
	}
	horizontalLastActions = addHorizontalAction(testTime, horizontalEventsRetention, horizontalLastActions, addedAction2)
	assert.Equal(t, []datadoghq.DatadogPodAutoscalerHorizontalAction{
		*addedAction1,
		*addedAction1,
		*addedAction2,
	}, horizontalLastActions)
}
```
Suggested change (rewriting the test above as a table-driven test):

```go
func TestAddHorizontalAction(t *testing.T) {
	testTime := time.Now()
	actionNow := &datadoghq.DatadogPodAutoscalerHorizontalAction{
		Time: metav1.Time{Time: testTime},
	}
	actionIn10Min := &datadoghq.DatadogPodAutoscalerHorizontalAction{
		Time: metav1.Time{Time: testTime.Add(10 * time.Minute)},
	}
	testCases := []struct {
		description               string
		testTime                  time.Time
		horizontalEventsRetention time.Duration
		horizontalLastActions     []datadoghq.DatadogPodAutoscalerHorizontalAction
		addedAction               *datadoghq.DatadogPodAutoscalerHorizontalAction
		expectedActions           []datadoghq.DatadogPodAutoscalerHorizontalAction
	}{
		{
			description: "No retention, keep single action",
			testTime:    testTime,
			horizontalLastActions: []datadoghq.DatadogPodAutoscalerHorizontalAction{
				{Time: metav1.Time{Time: testTime.Add(-10 * time.Minute)}},
				{Time: metav1.Time{Time: testTime.Add(-8 * time.Minute)}},
			},
			addedAction:     actionNow,
			expectedActions: []datadoghq.DatadogPodAutoscalerHorizontalAction{*actionNow},
		},
		{
			description: "Add another event, still keep one",
			testTime:    testTime,
			horizontalLastActions: []datadoghq.DatadogPodAutoscalerHorizontalAction{
				*actionNow,
			},
			addedAction:     actionNow,
			expectedActions: []datadoghq.DatadogPodAutoscalerHorizontalAction{*actionNow},
		},
		{
			description:               "15 minutes retention, keep everything",
			testTime:                  testTime,
			horizontalEventsRetention: 15 * time.Minute,
			horizontalLastActions: []datadoghq.DatadogPodAutoscalerHorizontalAction{
				{Time: metav1.Time{Time: testTime.Add(-10 * time.Minute)}},
				{Time: metav1.Time{Time: testTime.Add(-8 * time.Minute)}},
			},
			addedAction: actionNow,
			expectedActions: []datadoghq.DatadogPodAutoscalerHorizontalAction{
				{Time: metav1.Time{Time: testTime.Add(-10 * time.Minute)}},
				{Time: metav1.Time{Time: testTime.Add(-8 * time.Minute)}},
				*actionNow,
			},
		},
		{
			description:               "Moving time forward, keep only last two events",
			testTime:                  testTime.Add(10 * time.Minute),
			horizontalEventsRetention: 15 * time.Minute,
			horizontalLastActions: []datadoghq.DatadogPodAutoscalerHorizontalAction{
				*actionNow,
				*actionNow,
			},
			addedAction: actionIn10Min,
			expectedActions: []datadoghq.DatadogPodAutoscalerHorizontalAction{
				*actionNow,
				*actionNow,
				*actionIn10Min,
			},
		},
	}
	for _, tc := range testCases {
		t.Run(tc.description, func(t *testing.T) {
			result := addHorizontalAction(tc.testTime, tc.horizontalEventsRetention, tc.horizontalLastActions, tc.addedAction)
			assert.Equal(t, tc.expectedActions, result)
		})
	}
}
```
I am not a big fan of table-based testing in this case, as the scenario is sequential and the order of the steps matters.
```go
	upscaleRetention := getLongestScalingRulesPeriod(policy.Upscale.Rules)
	if upscaleRetention > longestRetention {
		longestRetention = upscaleRetention
	}
```
Suggested change:

```go
longestRetention = getLongestScalingRulesPeriod(policy.Upscale.Rules)
```
I think `upscaleRetention` will always be greater than or equal to `longestRetention` at this point, unless we allow negative values.
Indeed, but it keeps symmetry in the upscale/downscale code.
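For illustration, the symmetric shape being referred to might look like this (assuming a `policy.Downscale` mirroring `policy.Upscale`; not necessarily the PR's exact code):

```go
var longestRetention time.Duration

// Identical max-accumulation blocks for both directions: the first
// comparison is always true for non-negative periods, but keeping the
// same shape makes the two blocks read symmetrically.
upscaleRetention := getLongestScalingRulesPeriod(policy.Upscale.Rules)
if upscaleRetention > longestRetention {
	longestRetention = upscaleRetention
}

downscaleRetention := getLongestScalingRulesPeriod(policy.Downscale.Rules)
if downscaleRetention > longestRetention {
	longestRetention = downscaleRetention
}
```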
```go
	actions = actions[:0]
	actions = append(actions, *action)
	return actions
```
Suggested change:

```go
return []datadoghq.DatadogPodAutoscalerHorizontalAction{*action}
```
I think it's more concise. And if the goal was to update `actions` in place, I am not sure it works. Example: https://go.dev/play/p/_iaqW7Zwg3k
It's more concise, but it allocates; the current code does not. It's true that `[:0]` does not release the underlying memory, so any previous reference to the backing array could still see stale elements, but that should not happen here.
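A small self-contained sketch of the aliasing behavior under discussion (in the spirit of the playground link above): truncating with `[:0]` and appending reuses the backing array, so an older slice header observes the overwritten data, while building a fresh slice allocates and leaves it untouched.

```go
package main

import "fmt"

func main() {
	actions := []string{"scale-to-3", "scale-to-5"}
	old := actions // old shares the same backing array

	// Reuse the backing array: no allocation, but aliases are affected.
	actions = actions[:0]
	actions = append(actions, "scale-to-7")
	fmt.Println(old) // [scale-to-7 scale-to-5] — first element was overwritten

	// Fresh slice: allocates, but previous references are left untouched.
	actions = []string{"scale-to-9"}
	fmt.Println(old) // [scale-to-7 scale-to-5] — unchanged this time
}
```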
/merge
🚂 MergeQueue: waiting for PR to be ready

This merge request is not mergeable yet because of pending checks or missing approvals. It will be added to the queue as soon as checks pass and/or approvals are given.
🚂 MergeQueue: pull request added to the queue
/merge

/merge

🚂 MergeQueue: pull request added to the queue
What does this PR do?
Implement support of horizontal scaling rules for Autoscaling, and add a test suite for horizontal scaling.
Motivation
Feature
Additional Notes
Dedicated commit to set `model` fields to private, making sure all updates go through functions to track modifications.

Possible Drawbacks / Trade-offs
Describe how to test/QA your changes
QA already done as part of autoscaling release.