
Fix max pods to evict per node. #87

Merged

Conversation

@aveshagarwal commented Apr 11, 2018

@k8s-ci-robot added the cncf-cla: yes (indicates the PR's author has signed the CNCF CLA) and size/M (denotes a PR that changes 30-99 lines, ignoring generated files) labels Apr 11, 2018
@ingvagabund left a comment


Other than the DRY violation, it looks good.

@@ -72,23 +74,42 @@ func TestPodAntiAffinity(t *testing.T) {
},
},
}
p4.Spec.Affinity = &v1.Affinity{

This is the second copy of p1.Spec.Affinity. What about:

func defaultAffinity() *v1.Affinity {
	return &v1.Affinity{
		PodAntiAffinity: &v1.PodAntiAffinity{
			RequiredDuringSchedulingIgnoredDuringExecution: []v1.PodAffinityTerm{
				{
					LabelSelector: &metav1.LabelSelector{
						MatchExpressions: []metav1.LabelSelectorRequirement{
							{
								Key:      "foo",
								Operator: metav1.LabelSelectorOpIn,
								Values:   []string{"bar"},
							},
						},
					},
					TopologyKey: "region",
				},
			},
		},
	}
}

p1.Spec.Affinity = defaultAffinity()
p3.Spec.Affinity = defaultAffinity()
p4.Spec.Affinity = defaultAffinity()

?

@ingvagabund

This fixes only the second issue @wjiangjay reported. What about the first one?

@aveshagarwal

> This fixes only the second issue @wjiangjay reported. What about the first one?

I don't see any issue with having the default logging at 0. Downstream, as part of the installer, I have made it configurable, so there is always an option to set it to a higher level. Or do you think we should increase the default to some higher level?

@aveshagarwal

> Other than the DRY violation,

What do you mean by that?

@ingvagabund

> Other than the DRY violation,
>
> What do you mean by that?

The code is repeated three times. It should not be, unless the repetition is intentional.

@aveshagarwal

@ingvagabund PTAL.

@ghost commented Apr 13, 2018

For this issue, I see two possible solutions:

  1. Declare the default value 0 for max pods to evict per node in the help info (the easiest option; a sketch follows below).
  2. Change the default value to at least 1 when no --max-pod-to-evict-per-node option is given, and also declare this in the help info.

Another thing: I am not sure whether this PR implements solution 2. Do I misunderstand?
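
As a rough illustration of solution 1, here is a minimal sketch using Go's standard flag package; the flag name and help text are illustrative assumptions, not the descheduler's actual flag definition:

package main

import (
	"flag"
	"fmt"
)

func main() {
	// Solution 1: state the default (0) and its meaning directly in the
	// help text. 0 is Go's zero value for an int flag, so an unset flag
	// simply means "no limit".
	maxPodsToEvict := flag.Int("max-pods-to-evict-per-node", 0,
		"maximum number of pods to evict per node (default 0, i.e. no limit)")
	flag.Parse()
	fmt.Println("max pods to evict per node:", *maxPodsToEvict)
}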

@ingvagabund

/test all

@ingvagabund

/lgtm

@k8s-ci-robot added the lgtm ("Looks good to me", indicates that a PR is ready to be merged) label Apr 13, 2018
@aveshagarwal

> For this issue, I see two possible solutions:
>
> Declare the default value 0 for max pods to evict per node in the help info. The easiest option.

Yes, that's what this PR does. Go sets an int value to 0 by default, and this default or unset value of 0 means the flag has no effect; in other words, evict as many pods as possible, as determined by the algorithms.

Also, it does not make sense to explicitly set it to 0, because why would you run the descheduler if you don't want to evict any pods from nodes?

> Change the default value to at least 1 when no --max-pod-to-evict-per-node option is given, and also declare this in the help info.

That would lead to confusion, IMO.

> Another thing: I am not sure whether this PR implements solution 2. Do I misunderstand?

So this PR implements solution 1.
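
A minimal sketch of the "0 means unlimited" convention described above, with a hypothetical helper (not the descheduler's actual code):

package main

import "fmt"

// limitReached reports whether the per-node eviction cap has been hit.
// maxPodsToEvictPerNode == 0 means "no limit", matching Go's zero value
// for an int flag that was never set on the command line.
func limitReached(evicted, maxPodsToEvictPerNode int) bool {
	return maxPodsToEvictPerNode > 0 && evicted >= maxPodsToEvictPerNode
}

func main() {
	fmt.Println(limitReached(5, 0)) // false: unset flag (0) never limits eviction
	fmt.Println(limitReached(5, 5)) // true: per-node cap of 5 reached
}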

@aveshagarwal commented Apr 13, 2018

I am merging this as otherwise, by default, the descheduler won't work at all. I think we should have been more careful before merging the PR related to max-pod-per-node. We can continue further discussion in the issue.

@aveshagarwal merged commit c1d87dd into kubernetes-sigs:master Apr 13, 2018
@aveshagarwal deleted the master-pod-max-issue branch April 20, 2018 17:55
briend pushed a commit to briend/descheduler that referenced this pull request Feb 11, 2022
…x-issue

Fix max pods to evict per node.
knelasevero pushed a commit to knelasevero/descheduler that referenced this pull request May 12, 2023
WRKLDS-594: Downstream descheduler 1.26 bumps