Pod init hooks #9836
Is it possible for you to use a container lifecycle hook? http://godoc.org/github.com/GoogleCloudPlatform/kubernetes/pkg/api/v1#Lifecycle RestartPolicy is almost what you want, but it is applied over all the containers in a pod. http://godoc.org/github.com/GoogleCloudPlatform/kubernetes/pkg/api/v1#RestartPolicy
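For reference, a minimal sketch of what a lifecycle hook looks like in a pod spec; the v1 Lifecycle type exposes postStart and preStop hooks, and the image and script names here are placeholders, not anything from this thread:

```yaml
# Hypothetical sketch: seeding configuration via a postStart lifecycle hook.
apiVersion: v1
kind: Pod
metadata:
  name: app-with-hook
spec:
  containers:
  - name: app
    image: example/app:latest        # placeholder image
    lifecycle:
      postStart:                     # runs right after the container starts
        exec:
          command: ["/bin/sh", "-c", "/opt/seed-etcd.sh"]   # hypothetical script
```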
@mikedanese RestartPolicy would be perfect if only it were per-container rather than per-pod. I wouldn't mind trying to implement this if there's interest.
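To illustrate the limitation being discussed, restartPolicy lives at the pod level, so one policy governs every container; in this sketch (all names are placeholders) there is no way to give the loader and the app different policies:

```yaml
# restartPolicy is pod-scoped: both containers get the same policy.
apiVersion: v1
kind: Pod
metadata:
  name: run-once-pod
spec:
  restartPolicy: OnFailure    # applies to ALL containers equally
  containers:
  - name: config-loader       # stops being restarted once it exits 0
    image: example/config-loader:latest
  - name: app                 # unavoidably gets the same policy
    image: example/app:latest
```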
cc @bgrant0607 @davidopp for API changes.
The corner cases around mixed-restart-policy pods are not worth the energy.
It's true that the most common use cases for sidecar containers, for example data loaders and log savers, will run forever. However, IIRC we've occasionally told people to use sidecar containers for various kinds of initial configuration, and in those cases it seems that having the sidecar run once to completion while the other containers restart would be desirable. I guess the lifecycle hook proposal would be that you merge the configuration container's logic into the main container's logic, so that you have just one container, and have the PreStart hook invoke the former? Anyway, I don't have a strong opinion.
We've thrown out the idea of having two stages in a pod - stage 1 only […]
Yeah, I was assuming the user would be responsible for orchestrating the dependency between the two containers (e.g. the application in the main container would wait until the configuration data was populated by the other container before finishing its startup).
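A hedged sketch of that user-level orchestration, assuming the config container writes a sentinel key into the pod-local etcd; the key name, images, and binary path are all hypothetical:

```yaml
# Hypothetical sketch: the app container polls etcd for a sentinel key
# written by the config container before starting the real application.
apiVersion: v1
kind: Pod
metadata:
  name: app-with-config
spec:
  containers:
  - name: config-loader                 # seeds etcd, then sets /config/ready
    image: example/config-loader:latest
  - name: etcd
    image: example/etcd:latest          # the pod-local etcd mentioned above
  - name: app
    image: example/app:latest
    command: ["/bin/sh", "-c"]
    args:
    - |
      # Block until the loader reports readiness, then exec the real app.
      until etcdctl get /config/ready >/dev/null 2>&1; do
        sleep 1
      done
      exec /usr/local/bin/app
```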
Closing as a dupe of #4282
I have a container in a pod whose only job is to populate some configuration values in the pod-local etcd on startup. This works fine, except that after it exits successfully Kubernetes respawns it over and over again.
My current hack is to append `&& tail -f /dev/null` to the container command to allow it to "run" indefinitely. It would be great if Kubernetes allowed one to specify certain containers as non-respawning in the pod spec.
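For concreteness, here is roughly what that workaround looks like in a pod spec; the image and script names are invented for illustration:

```yaml
# The workaround described above: appending "&& tail -f /dev/null" keeps the
# one-shot container alive so Kubernetes does not keep respawning it.
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-seeder
spec:
  containers:
  - name: etcd
    image: example/etcd:latest
  - name: config-seeder
    image: example/seeder:latest
    # Seed configuration, then idle forever.
    command: ["/bin/sh", "-c", "/opt/seed-etcd.sh && tail -f /dev/null"]
```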