Pods do not get evicted while logs say "evicting pods from node" #39
Interesting, this works fine for v1.8.4:

```
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"8+", GitVersion:"v1.8.4-dirty", GitCommit:"9befc2b8928a9426501d3bf62f72849d5cbcd5a3", GitTreeState:"dirty", BuildDate:"2017-11-25T12:04:44Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"8+", GitVersion:"v1.8.4-dirty", GitCommit:"9befc2b8928a9426501d3bf62f72849d5cbcd5a3", GitTreeState:"dirty", BuildDate:"2017-11-25T11:54:10Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
```
Your understanding is correct.
This is weird/incorrect, as the idea is to have `targetThresholds` >= `thresholds`.
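In other words, `thresholds` marks the underutilized band and `targetThresholds` the overutilized one. That classification can be sketched in Go roughly as follows (an illustrative sketch only; the `usage` type and function names are hypothetical, not the descheduler's actual code):

```go
package main

import "fmt"

// usage holds a node's resource utilization as percentages (0-100).
// This type is hypothetical, for illustration only.
type usage struct {
	cpu, memory, pods float64
}

// underutilized reports whether a node's usage is below ALL of the
// low thresholds; such nodes are candidates to receive evicted pods.
func underutilized(u, thresholds usage) bool {
	return u.cpu < thresholds.cpu &&
		u.memory < thresholds.memory &&
		u.pods < thresholds.pods
}

// overutilized reports whether a node's usage exceeds ANY of the
// target thresholds; pods are evicted from such nodes.
func overutilized(u, targetThresholds usage) bool {
	return u.cpu > targetThresholds.cpu ||
		u.memory > targetThresholds.memory ||
		u.pods > targetThresholds.pods
}

func main() {
	thresholds := usage{cpu: 20, memory: 20, pods: 20}
	targetThresholds := usage{cpu: 50, memory: 50, pods: 50}

	fresh := usage{cpu: 5, memory: 3, pods: 2}   // a freshly drained node
	busy := usage{cpu: 80, memory: 60, pods: 40} // a loaded worker

	fmt.Println(underutilized(fresh, thresholds))     // true
	fmt.Println(overutilized(busy, targetThresholds)) // true
}
```

With `targetThresholds` >= `thresholds`, nodes in between the two bands are left alone, which is why inverting them makes the policy incoherent.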
@aveshagarwal yep, I was just playing around with the policy file. This did not work for me when I deployed a cluster from Kubernetes master the other day, but when I switched to v1.8.4, the descheduler did evict the pods. Is this expected behavior? Which Kubernetes versions does the descheduler support today?
@containscafeine Are you still facing this issue? If not, can you please close this?
@containscafeine - Closing this as there is no update. Feel free to reopen if you are still facing this issue.
So, if I understood correctly:

- a node whose usage is below `nodeResourceUtilizationThresholds.thresholds` is considered underutilized
- a node whose usage is above `nodeResourceUtilizationThresholds.targetThresholds` is considered overutilized

If this is correct, the following happens -
I have 4 nodes: 1 master node and 3 worker nodes. I tainted and then uncordoned node `kubernetes-minion-group-1vp4`, which means there are no pods or Kubernetes resources on that node:

```
$ kubectl get all -o wide | grep kubernetes-minion-group-1vp4
$
```
and the allocated resources on this node are:

while on the other 2 worker nodes the allocated resources are:
So with the right `DeschedulerPolicy`, pods should have been descheduled from the nodes that are overutilized and scheduled on the fresh node. I wrote the following `DeschedulerPolicy`:

I ran the descheduler as follows:
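(For reference, a `DeschedulerPolicy` enabling the `LowNodeUtilization` strategy generally takes the following shape; this is an illustrative sketch only, and the threshold values here are arbitrary examples, not the ones actually used in this report.)

```yaml
apiVersion: "descheduler/v1alpha1"
kind: "DeschedulerPolicy"
strategies:
  "LowNodeUtilization":
    enabled: true
    params:
      nodeResourceUtilizationThresholds:
        thresholds:        # below ALL of these => node is underutilized
          "cpu": 20
          "memory": 20
          "pods": 20
        targetThresholds:  # above ANY of these => node is overutilized
          "cpu": 50
          "memory": 50
          "pods": 50
```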
It seems like the descheduler ended up making the decision to evict pods from overutilized nodes, but when I check the cluster, nothing on the old nodes was terminated and nothing on the fresh node popped up:

```
$ kubectl get all -o wide | grep kubernetes-minion-group-1vp4
$
```
What am I doing wrong? :(