
[Question] instantiation of resource(s) and CRD Kind of resource #3502

Closed
TheAggressive opened this issue Jan 25, 2021 · 13 comments
Labels
`area/api` (issues for the api module) · `kind/support` (support question) · `needs-triage` (lacks a `triage/foo` label)

Comments

@TheAggressive

I'm not sure how to install a CRD and then apply a custom resource of that CRD's kind in a fresh cluster during initial setup.

Example:

```yaml
### kustomization.yaml ###

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
  - sealed-secrets-namespace.yaml
  - https://github.com/bitnami-labs/sealed-secrets/releases/download/v0.13.1/controller.yaml
  - sealed-secret.yaml
```

```yaml
### sealed-secret.yaml ###

apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  creationTimestamp: null
  name: argocd-dummy-repo-verification
  namespace: argocd
spec:
  encryptedData:
    dummy-password: AgBu8/cSv1DuD4GL4tdzNvtYzBp1gUz1b4O/4oaOGcdasgWk3TmvYEOQ5wlUmMzPNX6ZjiLAfn5RfJRVbGIPDr4j649Hn2xxW8UpfyY487pN4vqUYt6C9ycqUnSoWxPd6WzSB0z20I+HisoJMsBTUtGTTxouy9dElMXbdPwvBafJN8b0GUlxr5OfNO0vFmrdTeQEt9Ar7wjfuFbTT6nw/qXn2BqGMOv4+FefOK2EFhq0WpwY/eJFIV77sw5iBBlXc9ZO37yfk25cT4L3uX/uAebBQRlPTn+80hFkTw0196FvKyYJdukH9oxksXxmYZnk0UNo6IBSHQOrdz+YskYqmx90Ay6oCEnq/0q4pg6xh6TkosQ5yp8QSHUW1eDkJlTAXgBAIWivfDbyrXMHQz7mPDArS3SmqqyfjqdoEAMf8JIpa1iV6KaiRtidcMZO0UGnJFFcm0YfPMpBcXwRek8XMnLSB1TDe6it976OQRPgPOYIkZVqdI1bzB+H60H0U+oM187D1dD9XhGoMzD8yIJ368tV8Gnf/PfFo0Ejg1/gT9BAvUcLo/RGI2xEpoOG6Qe3NzuxGSgaZzgFFOvnDZqdQgpGy99j7L4z8DKBeH4fPr5ks/YorHlCB5r5YGC0eAiV6vJbFuIVjXhO6A67F00lwfdaL0R3Ac0JnPKnIPYLLGeAVyHataay2IlOmcWlU3TBlMyYsVKZ9uKEibqJEa66OA==
    dummy-username: AgC0Mt+MjOWwj3iADJ09bY0dWuaR3cAL0ivQKJVlMbySt1A/PNjjAkyo/ATZKvwnBedLZRHvN/+xAlc3Ti1GJbit4j7VG7KXUS5nsqS52PKl3nCbZsl6KtuNRtqKfa6Mo8N4yBauJWDO77sFoBqQLigP9dcrcvILgFUm4KMQuvyAJ4LABXDFPI0PZHVDp0iF23BFTF3WUjQyIPe+cG+b7yHS54TDIdMBIP8xH0Z+QOBQgDaCjddbO5mMW4rCm5+1mMb8nbiUA2xKm9njbFiiybm1dYAtKwT6+5/pr+u9qRmchJESrTOA+51V+c4pykpdUfsD4DkC2Hz8lAWoTpCZNn3n/zKQe325AbqubAFTHuASkyIY7H8a+N42GZpJTWqI+kQSxYEzztDoYd0u3VxxDd5ts1UMfBe0G2JupjeCMQjC8hPCYk0ayMDZmMv1CQrYnUqCshPZmj1AEf+0B8up6EzMnAk/yVB54zpTdsHMjHywuhL3BS+I/aM/QM2L8mlIhIj2UhZ4XN3PEd5bIIZJY7FNjyidRG5EjU4nond5Ky7MrTIMArUoUC39TWKz6S6D+OOlDU6V/nZiMZV7xC3cqEnwenqsPu4YPh2jZHBHbn6RhZQPba4UP+zYcWhmvZXvgSeukDiXX8ATVbuPTRa6ZSmr2++IoihXi5kAXAnkXqXuxgnbb8XrMlin/3A9NV3JH+CdFEoh6LsIxg==
  template:
    metadata:
      creationTimestamp: null
      name: argocd-dummy-repo-verification
      namespace: argocd
```

But once I run `kustomize build . | kubectl apply -f -`, I see that `sealed-secret.yaml` wasn't applied and no secret was generated from the SealedSecret, based on the following output:

```
unable to recognize "STDIN": no matches for kind "SealedSecret" in version "bitnami.com/v1alpha1"
```

So is there a proper way to get this working? I was looking into `nameReference` in a `configurations:` section of the kustomization.yaml, but I'm not sure that will give the desired result.

Thank you!

@Shell32-Natsu added the `area/api`, `kind/support`, and `needs-triage` labels on Jan 25, 2021
@Shell32-Natsu
Contributor

@monopole Can you take a look at this?

@mikee

mikee commented Mar 4, 2021

I have run into this too. I think it's due to the way CRDs get applied to the cluster: the CRD is applied, then the cluster accepts the kind and makes it available, but there is a lag before that happens. So another resource that requires the kind provided by the CRD can't be applied in the meantime.

If you apply the CRD and the sealed secret in two steps, the process works. I'm not sure how to make this work in one go right now.

@natasha41575
Contributor

Based on @mikee's comment, this sounds like an issue with kubectl/kubernetes and not kustomize?

@eddiezane
Member

@TheAggressive @johscheuer Can you please try just the build step here? This may be a `kubectl apply` issue as opposed to a `kustomize build` issue.

@brianpursley
Member

I think the problem might be that the CRD is being created AND used in the same call to kubectl apply.

`kubectl apply` won't wait for the CRD to be available before attempting to use it, which results in the error:

```
error: unable to recognize "STDIN": no matches for kind "SealedSecret" in version "bitnami.com/v1alpha1"
```

You can tell this is the case because if you just wait a few seconds and run it again, it will succeed.

You need to create the CRD in one step, wait for it to exist, then use the CRD in a second step.

@mikee

mikee commented May 26, 2021

> I think the problem might be that the CRD is being created AND used in the same call to kubectl apply.
>
> kubectl apply won't wait for the CRD to be available before attempting to use it, which results in the error:
>
> `error: unable to recognize "STDIN": no matches for kind "SealedSecret" in version "bitnami.com/v1alpha1"`
>
> You can tell this is the case because if you just wait a few seconds and run it again, it will succeed.
>
> You need to create the CRD in one step, wait for it to exist, then use the CRD in a second step.

that is what I've experienced in testing.

@brianpursley
Member

> that is what I've experienced in testing.

sorry @mikee I didn’t see your previous comment before I commented. Yes, I agree with your assessment of what is happening.

@brianpursley
Member

brianpursley commented May 27, 2021

@TheAggressive @mikee What if you remove the remote controller.yaml from the kustomization resources and do something like this instead:

```shell
kubectl apply -f https://github.com/bitnami-labs/sealed-secrets/releases/download/v0.13.1/controller.yaml
kubectl wait --for condition=established crd sealedsecrets.bitnami.com
kustomize build . | kubectl apply -f -
```

This should create the CRD, wait for it, then apply the kustomize output.
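A variant of the same idea that fails fast in CI instead of hanging is to bound the wait; the `--timeout` flag is a standard `kubectl wait` option, and the 60s value here is an arbitrary assumption:

```shell
# Sketch: two-phase deploy with a bounded wait.
# If the CRD is never established within 60s, the pipeline fails instead of hanging.
kubectl apply -f https://github.com/bitnami-labs/sealed-secrets/releases/download/v0.13.1/controller.yaml
kubectl wait --for=condition=established --timeout=60s crd/sealedsecrets.bitnami.com
kustomize build . | kubectl apply -f -
```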

@mikee

mikee commented May 27, 2021

> @TheAggressive @mikee What if you remove the remote controller.yaml from the kustomization resources and do something like this instead:
>
> ```shell
> kubectl apply -f https://github.com/bitnami-labs/sealed-secrets/releases/download/v0.13.1/controller.yaml
> kubectl wait --for condition=established crd sealedsecrets.bitnami.com
> kustomize build . | kubectl apply -f -
> ```
>
> This should create the CRD, wait for it, then apply the kustomize output.

That works. What I've done is split my kustomization into two overlays (using Flux CD). The "second" layer that depends on the CRD may fail the first time, but will eventually reconcile once the CRDs are accepted.
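The two-overlay split with Flux can be expressed with the Kustomization custom resource's `dependsOn` field, so the overlay containing the CRs is only applied after the CRD overlay has reconciled. The names and paths below are illustrative, not from this thread:

```yaml
# Illustrative Flux v2 sketch: the resources overlay waits for the CRD overlay.
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: sealed-secrets-resources   # hypothetical name
  namespace: flux-system
spec:
  dependsOn:
    - name: sealed-secrets-crds    # the overlay that installs the CRD
  interval: 10m
  sourceRef:
    kind: GitRepository
    name: flux-system
  path: ./overlays/resources       # hypothetical path
  prune: true
```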

@KnVerey
Contributor

KnVerey commented Aug 4, 2021

This is related to #3794 / #3913 in that it involves sequencing resources for deploy purposes. As discussed there and above, it is possible for Kustomize users to manually ensure that a given CRD appears before a related CR in the resource list output (using FIFO ordering), but Kustomize is not involved in the apply operation and cannot prevent the race between the cluster-side acceptance of the CRD and the attempted creation of the CR. As @brianpursley has suggested, the deploy must be separated into two phases to prevent this, although @mikee is also correct that simply retrying will eventually work.

/close
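The "simply retrying will eventually work" approach above can be sketched as a small shell loop. Here `apply_manifests` is a hypothetical stand-in for `kustomize build . | kubectl apply -f -`, simulated to fail until the third attempt the way a real apply fails until the API server has accepted the CRD:

```shell
#!/bin/sh
attempts=0

# Hypothetical stand-in for `kustomize build . | kubectl apply -f -`:
# fails on the first two attempts, succeeding once the CRD would be established.
apply_manifests() {
  attempts=$((attempts + 1))
  [ "$attempts" -ge 3 ]
}

until apply_manifests; do
  echo "apply failed, retrying..." >&2
  sleep 1   # use a longer delay (e.g. 5s) against a real cluster
done
echo "applied after $attempts attempts"
```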

@k8s-ci-robot
Contributor

@KnVerey: Closing this issue.

In response to this:

> This is related to #3794 / #3913 in that it involves sequencing resources for deploy purposes. As discussed there and above, it is possible for Kustomize users to manually ensure that a given CRD appears before a related CR in the resource list output (using FIFO ordering), but Kustomize is not involved in the apply operation and cannot prevent the race between the cluster-side acceptance of the CRD and the attempted creation of the CR. As @brianpursley has suggested, the deploy must be separated into two phases to prevent this, although @mikee is also correct that simply retrying will eventually work.
>
> /close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@armenr

armenr commented Mar 27, 2022

For anyone landing here, MY issue (using helm/helmfile) was fixed by adding `--include-crds` explicitly to the template/upgrade/apply commands I use in our CD pipeline.

For ArgoCD + helmfile, it looks like this now:

```yaml
server:
  logLevel: warn
  serviceAccount:
    create: true
  configEnabled: true
  config:
    kustomize.buildOptions: --load-restrictor LoadRestrictionsNone
    cluster.inClusterEnabled: "true"
    ignoreAggregatedRoles: "true"
    timeout.reconciliation: 30s
    configManagementPlugins: |
      - name: helmfile
        generate:
          command: ["/bin/sh", "-c"]
          args: ["helmfile template --include-crds"]
```

@mrclrchtr

Thank you @armenr!
It's working with external-secrets-operator:

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

helmCharts:
  - name: external-secrets
    repo: https://charts.external-secrets.io
    releaseName: external-secrets-operator
    namespace: vault
    version: 0.9.13
    includeCRDs: true

resources:
  - cluster-secret-store.yaml
```
