
kubectl create -f <directory> doesn't create services first, violating config best practice #16448

Closed
zmerlynn opened this issue Oct 28, 2015 · 24 comments
Labels
area/kubectl, lifecycle/frozen, priority/backlog, sig/cli

Comments

@zmerlynn
Member

Config best practices says:

Create a service before corresponding replication controllers so that the scheduler can spread the pods comprising the service. You can also create the replication controller without specifying replicas, create the service, then scale up the replication controller, which may work better in an example using progressive disclosure and may have benefits in real scenarios also, such as ensuring one replica works before creating lots of them.

It also says:

Use kubectl create -f <directory> where possible. This looks for config objects in all .yaml, .yml, and .json files in <directory> and passes them to create.

It looks like kubectl just creates things in sorted order:

zml@zml:~/kubernetes$ ls -1 examples/spark/*.yaml
examples/spark/spark-driver-controller.yaml
examples/spark/spark-master-controller.yaml
examples/spark/spark-master-service.yaml
examples/spark/spark-worker-controller.yaml
zml@zml:~/kubernetes$ kubectl create -f examples/spark
replicationcontrollers/spark-driver-controller
replicationcontrollers/spark-master-controller
services/spark-master
replicationcontrollers/spark-worker-controller

cc @gmarek @mikedanese

@zmerlynn changed the title from "kubectl create -f <directory> doesn't create services first, violating config" to "kubectl create -f <directory> doesn't create services first, violating config best practice" on Oct 28, 2015
@roberthbailey
Contributor

/cc @bgrant0607 @jlowdermilk

@bgrant0607 added the priority/backlog label on Oct 29, 2015
@bgrant0607
Member

Yes, known issue. Thanks for filing.

@bgrant0607
Member

Also relevant: #1768

@bgrant0607
Member

In a galaxy far far away, kubecfg created services first.

@smarterclayton How would you feel about doing that in kubectl?

@smarterclayton
Contributor

Not opposed, possibly if we detect you give us lots of things we could put services first.

@bgrant0607
Member

cc @kubernetes/kubectl

@deads2k
Contributor

deads2k commented Apr 6, 2016

If we start establishing order amongst creates, I'd like to see it as an interface in the Factory that takes []runtime.Object and returns []runtime.Object, so that we could decide on our own ordering.

I also think that if we go down this route, we should bound ourselves. For instance, if you create a serviceaccount and then create a pod that uses that SA, the pod will be rejected unless the SA token controller has created a token. People find this frustrating (even though bare pods aren't a good idea). Would we also allow preconditions? I'm against it, but I'd like to be sure that we're willing to allow flaky failures in cases like that. Similar problems exist with quota.
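A rough sketch of the shape such an ordering hook could take (the interface and names here are illustrative, not an existing kubectl API):

package create

import (
    "k8s.io/apimachinery/pkg/runtime"
)

// CreateOrderer decides the order in which a batch of decoded objects
// is sent to the API server.
type CreateOrderer interface {
    Order(objs []runtime.Object) []runtime.Object
}

// servicesFirst is one trivial implementation: a stable partition that
// moves Services ahead of everything else.
type servicesFirst struct{}

func (servicesFirst) Order(objs []runtime.Object) []runtime.Object {
    out := make([]runtime.Object, 0, len(objs))
    for _, o := range objs {
        if o.GetObjectKind().GroupVersionKind().Kind == "Service" {
            out = append(out, o)
        }
    }
    for _, o := range objs {
        if o.GetObjectKind().GroupVersionKind().Kind != "Service" {
            out = append(out, o)
        }
    }
    return out
}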

@smarterclayton
Contributor

I'd rather fix services and pods by being explicit via the downward API.


@bgrant0607 added the sig/cli label and removed the team/ux (deprecated - do not use) label on Mar 21, 2017
@renannprado

any news about this?

@julianvmodesto
Contributor

These service linking proposals seem to be the upcoming solution for this:

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

Prevent issues from auto-closing with an /lifecycle frozen comment.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label on Jan 18, 2018
@bgrant0607
Member

/remove-lifecycle stale
/lifecycle frozen

@k8s-ci-robot added the lifecycle/frozen label and removed the lifecycle/stale label on Jan 22, 2018
@borg286

borg286 commented Jan 15, 2019

I'd also like this feature. It seems that kubecfg has a hardcoded ordering for types of objects:
it does ThirdPartyResource and CustomResourceDefinition first,
then global resources,
then namespaces,
then things that don't contain pods,
then lastly things that contain pods.

It seems rather simple to fetch the types, give each a priority, toss them into a priority queue, and finally empty the queue into a list. This would bring kubectl one step closer to kubecfg.
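A rough sketch of that priority-based ordering (the kind table and priorities below are illustrative, not kubecfg's actual list):

package create

import (
    "sort"

    "k8s.io/apimachinery/pkg/runtime"
)

// kindPriority: lower numbers get created earlier; unknown kinds go last.
var kindPriority = map[string]int{
    "CustomResourceDefinition": 0,
    "Namespace":                1,
    "ServiceAccount":           2,
    "ConfigMap":                2,
    "Secret":                   2,
    "Service":                  3,
    "ReplicationController":    4,
    "Deployment":               4,
    "DaemonSet":                4,
    "StatefulSet":              4,
}

func priority(o runtime.Object) int {
    if p, ok := kindPriority[o.GetObjectKind().GroupVersionKind().Kind]; ok {
        return p
    }
    return 99
}

// OrderForCreate returns the objects sorted by kind priority, keeping the
// original order within each priority level (a stable sort stands in for
// the priority queue described above).
func OrderForCreate(objs []runtime.Object) []runtime.Object {
    sorted := append([]runtime.Object(nil), objs...)
    sort.SliceStable(sorted, func(i, j int) bool {
        return priority(sorted[i]) < priority(sorted[j])
    })
    return sorted
}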

@renannprado

renannprado commented Jan 15, 2019 via email

@vtereso

vtereso commented May 6, 2019

@renannprado However, if the directory contains enough files to get into double digits (or more), the ordering issue would come back unless the filenames have been properly zero-prefixed.

@renannprado

@vtereso if you're afraid of that then just do like this:

00_namespace.yaml
01_service.yaml
.
.
.

If you're afraid of having hundreds, same logic applies.

@adampl

adampl commented May 7, 2019

That doesn't look like a proper solution ;)

@vtereso

vtereso commented May 7, 2019

@vtereso if you're afraid of that then just do like this:

00_namespace.yaml
01_service.yaml
.
.
.

If you're afraid of having hundreds, same logic applies.

This is what I meant by the zero prefix; it's a less-than-optimal solution, as @adampl mentioned, but I suppose it's OK.

@bgrant0607
Member

cc @pwittrock @seans3

@costela

costela commented May 7, 2019

This might not be a solution to this exact issue, but kubectl's new -k mode, based on kustomize, does reorder resources.

Even if you're not interested in kustomize's other features, as long as you can upgrade your kubectl, you should be able to work around this limitation in a relatively clean way (at least compared to 0-prefixing).
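For example, with the spark files from the original report, a minimal kustomization.yaml plus kubectl apply -k would look roughly like this (a sketch; it assumes a kubectl new enough to ship the -k flag, and relies on kustomize deciding the apply order by kind rather than by filename):

$ cat kustomization.yaml
resources:
- spark-master-service.yaml
- spark-master-controller.yaml
- spark-driver-controller.yaml
- spark-worker-controller.yaml
$ kubectl apply -k .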

@renannprado

That doesn't look like a proper solution ;)

@adampl I'm not saying it's a solution; it's (or was) a workaround for the status quo.

I'll give @costela's solution a try soon.

@KnVerey
Contributor

KnVerey commented Mar 31, 2021

As some of the more recent comments allude to, the issue of resource creation ordering is a lot more complicated than just Services->Deployments. Even if we were to implement a dependency graph between all built-in kinds, which would be non-trivial, we would still encounter several stumbling blocks:

  1. Custom resources.
  2. A successful create is not always sufficient to provide behavioural startup ordering. To get that, you may need to wait for a status on the earlier resource. E.g. CRD has to be accepted, not merely created, before submitting a dependent CR.
  3. There isn't really a universally desirable ordering even within core kinds. Many applications have more ad-hoc dependence relationships that automatic reordering could not detect, and would in fact interfere with.
  4. Related to the above, changing from a system where users effectively provide the correct ordering explicitly to one where we attempt to infer it would be a breaking change.

We discussed this at a SIG-CLI meeting today, and for all these reasons, we don't think this is a change we should pursue in kubectl. The current behavior is effectively to have users fully control the dependency chain by providing the resources in the desired order. This is the right behavior for a tool at this level of abstraction. If more complex behavior is desired, there are many more opinionated, higher-level deploy orchestration tools available in the ecosystem.

/close

@k8s-ci-robot
Contributor

@KnVerey: Closing this issue.


@Jonas-Sander

We discussed this at a SIG-CLI meeting today, and for all these reasons, we don't think this is a change we should pursue in kubectl. The current behavior is effectively to have users fully control the dependency chain by providing the resources in the desired order. This is the right behavior for a tool at this level of abstraction. If more complex behavior is desired, there are many more opinionated, higher-level deploy orchestration tools available in the ecosystem.

Sorry to reanimate this issue, but as a newbie it would be helpful to know what these "higher-level deploy orchestration tools available in the ecosystem" are. Can anyone name some examples or link to an overview?


Actually, I'm quite surprised to have stumbled on this kind of issue, because I assumed Kubernetes could natively apply everything at once in the correct order. I got here because I wanted to "install" Knative and then deploy a Knative Service all in one apply (more specifically, by running skaffold dev or kustomize build | kubectl apply -f - once).
Even with the correct ordering via kustomize (applying the Knative CRDs first), I encounter this error:

Error from server (InternalError): error when creating "STDIN": Internal error occurred: failed calling webhook "webhook.serving.knative.dev": failed to call webhook: Post "https://webhook.knative-serving.svc:443/?timeout=10s": service "webhook" not found

which, I'm guessing, is what was described here:

A successful create is not always sufficient to provide behavioural startup ordering. To get that, you may need to wait for a status on the earlier resource. E.g. CRD has to be accepted, not merely created, before submitting a dependent CR.

So is this not possible with Kubernetes?
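One way to make the "wait for a status" step from the quoted comment explicit is to split the apply in two and wait on the CRDs in between, roughly like this (the file names are illustrative, not the actual Knative manifests):

$ kubectl apply -f knative-crds.yaml
$ kubectl wait --for condition=established --timeout=60s crd --all
$ kubectl apply -f knative-serving.yaml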
