[DaemonSet] Considering run-once DaemonSet (Job DaemonSet) #50689

Closed
resouer opened this Issue Aug 15, 2017 · 9 comments

resouer (Member) commented Aug 15, 2017

Is this a BUG REPORT or FEATURE REQUEST?:

/kind feature

What happened:
Currently, only restartPolicy: Always is allowed for a DaemonSet.

What you expected to happen:
A run-once DaemonSet would also be very useful, for example, to install CNI or flexvolume binaries on every node.

The exit status could be used to determine its lifecycle, making it a kind of Job DaemonSet. I am not sure whether we should introduce a new API object or whether refactoring DaemonSet is enough.

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know?:

Environment:

  • Kubernetes version (use kubectl version):
  • Cloud provider or hardware configuration:
  • OS (e.g. from /etc/os-release):
  • Kernel (e.g. uname -a):
  • Install tools:
  • Others:
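To illustrate the idea, here is a purely hypothetical sketch of what such a run-once DaemonSet could look like. This is not an existing API: DaemonSet today rejects any restartPolicy other than Always, and the names, image, and script path below are placeholders.

```yaml
# Hypothetical sketch only: DaemonSet does not accept restartPolicy: OnFailure today.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: install-cni
spec:
  selector:
    matchLabels:
      name: install-cni
  template:
    metadata:
      labels:
        name: install-cni
    spec:
      restartPolicy: OnFailure        # proposed: run to completion once per node
      containers:
      - name: installer
        image: example.com/cni-installer:latest   # placeholder image
        command: ["/install-cni.sh"]               # exits 0 once binaries are installed
```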
resouer (Member) commented Aug 15, 2017

This is equivalent to adding a dead loop or sleep to the end of the script, e.g. ./run.sh:

# do my magic job
# ...

# then block forever so the container never exits and the pod is not restarted
while true; do
    sleep 3600
done
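To make the workaround concrete, here is a minimal sketch of a DaemonSet that uses this pattern to install CNI binaries on every node. The image name, script path, and hostPath are placeholders, not taken from a real installer.

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: cni-installer
spec:
  selector:
    matchLabels:
      name: cni-installer
  template:
    metadata:
      labels:
        name: cni-installer
    spec:
      containers:
      - name: installer
        image: example.com/cni-installer:latest   # placeholder image containing run.sh
        command: ["/run.sh"]                       # copies binaries, then sleeps forever
        volumeMounts:
        - name: cni-bin
          mountPath: /host/opt/cni/bin             # run.sh would write binaries here
      volumes:
      - name: cni-bin
        hostPath:
          path: /opt/cni/bin                       # node directory for CNI plugins
```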
guangxuli (Contributor) commented Aug 15, 2017

Reasonable. +1.
Not sure why the DaemonSet proposal describes the kubelet rejecting DaemonSet objects with pod templates that don't have restartPolicy set to Always. Maybe I am missing something.

wenlxie (Contributor) commented Aug 16, 2017

It would be awesome if the DaemonSet controller could create a run-once pod on a node when:
  1. The node is just added to the cluster
  2. The node restarts
resouer (Member) commented Aug 19, 2017

@guangxuli It's by design.

@wenlxie Yes, this use case tends to be associated with node changes, and I think that behavior can be covered by the current DaemonSet controller.

I will ask for wider suggestions in the community and draft a design doc if it sounds reasonable.

resouer self-assigned this Aug 19, 2017

resouer (Member) commented Aug 19, 2017

cc @mikedanese @kargakis, what do you think about this feature request? Is it reasonable, or do we prefer to leave it as is (#50689 (comment))?

wenlxie (Contributor) commented Aug 19, 2017

@resouer Can this scenario be covered by the current DaemonSet controller? I am a bit confused.
I know that if a node is just added to the cluster, the controller will create a pod on it. But if the node restarts, can the DaemonSet controller still do that?

resouer (Member) commented Aug 20, 2017

@wenlxie The controller has enough information about node status changes.

wenlxie (Contributor) commented Aug 20, 2017

@resouer Thanks, I will check it out and give it a try.

kargakis (Member) commented Aug 20, 2017

Dupe of #36601

kargakis closed this Aug 20, 2017
