Fix max pods to evict per node. #87
Conversation
Other than the DRY violation, it looks good.
@@ -72,23 +74,42 @@ func TestPodAntiAffinity(t *testing.T) {
			},
		},
	}
	p4.Spec.Affinity = &v1.Affinity{
This is the second copy of the p1.Spec.Affinity. What about:
// defaultAffinity returns the pod anti-affinity shared by the test pods.
func defaultAffinity() *v1.Affinity {
return &v1.Affinity{
PodAntiAffinity: &v1.PodAntiAffinity{
RequiredDuringSchedulingIgnoredDuringExecution: []v1.PodAffinityTerm{
{
LabelSelector: &metav1.LabelSelector{
MatchExpressions: []metav1.LabelSelectorRequirement{
{
Key: "foo",
Operator: metav1.LabelSelectorOpIn,
Values: []string{"bar"},
},
},
},
TopologyKey: "region",
},
},
},
}
}
p1.Spec.Affinity = defaultAffinity()
p3.Spec.Affinity = defaultAffinity()
p4.Spec.Affinity = defaultAffinity()
?
This fixes only the second issue @wjiangjay reported. What about the first one?
I don't see any issue with having the default logging at 0. Downstream, as part of the installer, I have made it configurable, so there is always an option to set it to a higher level. Or do you think we should increase the default to some higher level?
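Downstream configurability could be wired up in many ways; here is a minimal sketch using a plain Go flag, where the flag name, default, and wiring are assumptions drawn from this conversation rather than the actual descheduler CLI:

```go
// Minimal sketch only; the flag name and wiring are assumptions, not the
// actual descheduler command line.
package main

import (
	"flag"
	"fmt"
)

func main() {
	// Go's zero value for int is 0, so leaving the flag unset yields 0;
	// the discussion in this PR is about what that 0 should mean.
	maxPodsToEvict := flag.Int("max-pods-to-evict-per-node", 0,
		"maximum number of pods evicted from each node (0 means no limit)")
	flag.Parse()
	fmt.Println("max pods to evict per node:", *maxPodsToEvict)
}
```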
What do you mean by it?
The code is repeated three times. It should not be, unless it is intentional.
Force-pushed from 4b7d8c9 to 99fd14a
Force-pushed from 99fd14a to 34fb602
@ingvagabund PTAL.
For this issue, I would propose 2 solutions here:
Another thing is that I have no idea whether this PR implements solution 2; do I misunderstand?
/test all
/lgtm
Yes, that's what this PR does. Go by default sets an int value to 0, and this default or unset value of 0 would mean the descheduler evicts nothing, which would lead to confusion IMO. It also does not make sense to deliberately set it to 0, because why would you run the descheduler when you don't want to evict any pods from nodes?
So this PR implements solution 1.
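For concreteness, a minimal sketch of the semantics being settled on here, where 0 (Go's zero value for an unset int) is treated as "no limit" rather than "evict nothing"; the canEvict helper and its signature are hypothetical, not the actual descheduler code:

```go
// Minimal sketch of the "0 means unlimited" semantics; not the actual
// descheduler implementation.
package main

import "fmt"

// canEvict reports whether one more pod may be evicted from a node, given
// how many pods have already been evicted there. A limit of 0 disables the
// cap entirely, so an unset (zero-valued) limit never blocks eviction.
func canEvict(maxPodsToEvictPerNode, evictedSoFar int) bool {
	return maxPodsToEvictPerNode == 0 || evictedSoFar < maxPodsToEvictPerNode
}

func main() {
	fmt.Println(canEvict(0, 5)) // true: unset limit means no cap
	fmt.Println(canEvict(3, 3)) // false: per-node cap reached
}
```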
I am merging this, as otherwise the descheduler won't work at all by default. I think we should have been more careful before merging the PR related to max-pods-per-node. We can continue further discussion in the issue.
Merged from …x-issue: Fix max pods to evict per node.