
DaemonSet cannot create the needed pod when the node has enough resources after the other DaemonSet is deleted #58868

Closed
chentao1596 opened this Issue Jan 26, 2018 · 7 comments

Comments

4 participants
@chentao1596
Member

chentao1596 commented Jan 26, 2018

What happened:
I created two DaemonSets, but only one could create its pod because the node did not have enough resources for both.
I then deleted the one that was running normally, but the other DaemonSet still did not create the needed pod.

What you expected to happen:
I expected the other DaemonSet to quickly create the needed pod once the resources were freed.

How to reproduce it (as minimally and precisely as possible):
My environment:
1 master + 1 node
Two DaemonSets with the same resource requests: 700m CPU + 700Mi memory, but my node can only support one such pod (e.g. memory is not enough for 2 pods). Illustrative manifests are sketched below.
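
For reference, a minimal sketch of the two manifests, assuming a single schedulable node with roughly 1Gi of allocatable memory; the names ds-a/ds-b, the pause image, and the default namespace are illustrative assumptions, not my actual setup:

```yaml
# Illustrative repro manifests (names and image are assumptions).
# Both DaemonSets request 700m CPU and 700Mi memory; on a node that can only
# fit one such pod, only one DaemonSet gets its pod created.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ds-a
spec:
  selector:
    matchLabels:
      app: ds-a
  template:
    metadata:
      labels:
        app: ds-a
    spec:
      containers:
      - name: pause
        image: k8s.gcr.io/pause:3.1
        resources:
          requests:
            cpu: 700m
            memory: 700Mi
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ds-b
spec:
  selector:
    matchLabels:
      app: ds-b
  template:
    metadata:
      labels:
        app: ds-b
    spec:
      containers:
      - name: pause
        image: k8s.gcr.io/pause:3.1
        resources:
          requests:
            cpu: 700m
            memory: 700Mi
```

After applying both, suppose ds-a's pod is the one that gets created; running `kubectl delete ds ds-a` frees the resources, and I would expect ds-b's pod to be created on the node shortly afterwards.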

Anything else we need to know?: none

Environment:

  • Kubernetes version (use kubectl version):
    built by myself:
    Client Version: version.Info{Major:"1", Minor:"10+", GitVersion:"v1.10.0-alpha.1.1355+6477f2b0fdbc0b", GitCommit:"6477f2b0fdbc0b7e72dccba5b5123e72244e9909", GitTreeState:"clean", BuildDate:"2018-01-25T03:06:42Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}
    Server Version: version.Info{Major:"1", Minor:"10+", GitVersion:"v1.10.0-alpha.1.1355+6477f2b0fdbc0b", GitCommit:"6477f2b0fdbc0b7e72dccba5b5123e72244e9909", GitTreeState:"clean", BuildDate:"2018-01-25T02:48:14Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}
  • Cloud provider or hardware configuration: none
  • OS (e.g. from /etc/os-release): CentOS Linux release 7.3.1611 (Core)
  • Kernel (e.g. uname -a): Linux host-10-10-10-53 3.10.0-514.26.2.el7.x86_64 #1 SMP Tue Jul 18 11:25:45 CST 2017 x86_64 x86_64 x86_64 GNU/Linux
  • Install tools: ansible
  • Others: none
@chentao1596

Member Author

chentao1596 commented Jan 26, 2018

/cc @k82cn
I found you have already handled a similar case, when a non-daemon pod is deleted (#46935).
Please have a look, thank you!

@k82cn

Member

k82cn commented Jan 26, 2018

I then deleted the one that was running normally, but the other DaemonSet still did not create the needed pod.

Did you delete the pod or the DS?

@k82cn

Member

k82cn commented Jan 26, 2018

/sig apps

@k8s-ci-robot k8s-ci-robot added sig/apps and removed needs-sig labels Jan 26, 2018

@chentao1596

Member Author

chentao1596 commented Jan 27, 2018

Did you delete the pod or the DS?

I deleted the DS.

@kow3ns kow3ns added this to Backlog in Workloads Feb 26, 2018

@fejta-bot


fejta-bot commented Apr 27, 2018

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@fejta-bot


fejta-bot commented May 27, 2018

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle stale

@fejta-bot


fejta-bot commented Jun 26, 2018

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Workloads automation moved this from Backlog to Done Jun 26, 2018

@chentao1596 chentao1596 referenced this issue Oct 12, 2018

Closed

REQUEST: New membership for @chentao1596 #162

6 of 6 tasks complete