
kustomize build failed: accumulating resources is a directory #1749

Closed
1 task done
kashalls opened this issue Aug 24, 2021 · 13 comments
Comments

@kashalls

Describe the bug

Unable to use a kustomization with a GitHub resource.

We've tried using different styles of URLs:

It seems to happen in some environments; perhaps I am missing a dependency? I am not sure, as I am quite new to this.

Steps to reproduce

  1. Install flux and run repo
  2. Run: flux --kubeconfig=./kubeconfig get sources git
  3. kubectl --kubeconfig=./kubeconfig get kustomization -A reports that the flux-system core Kustomization could not build from GitHub

cluster/core/system-upgrade/kustomization.yaml:

---
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - github.com/rancher/system-upgrade-controller
images:
  - name: rancher/system-upgrade-controller
    newName: docker.io/rancher/system-upgrade-controller
    newTag: v0.7.4
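
For reference, kustomize remote bases are normally pinned with a `?ref=` query string; the controller log below shows kustomize resolving `ref=v0.7.4`. A pinned sketch of the same resource entry (not the author's exact file) would be:

```yaml
# Sketch: pinning the remote base to a tag for reproducible builds
# (v0.7.4 matches the tag seen in the error log below)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - github.com/rancher/system-upgrade-controller?ref=v0.7.4
```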

flux log error:

2021-08-24T17:54:02.983Z error Kustomization/core.flux-system - Reconciliation failed after 283.070822ms, next try in 10m0s kustomize build failed: accumulating resources: accumulation err='accumulating resources from 'system-upgrade': read /tmp/core047006118/cluster/core/system-upgrade: is a directory': recursed accumulation of path '/tmp/core047006118/cluster/core/system-upgrade': accumulating resources: accumulation err='accumulating resources from 'github.com/rancher/system-upgrade-controller?ref=v0.7.4': open /tmp/core047006118/cluster/core/system-upgrade/github.com/rancher/system-upgrade-controller?ref=v0.7.4: no such file or directory': git cmd = '/usr/bin/git fetch --depth=1 origin v0.7.4': exit status 128

Expected behavior

It should just work, as other people have done this on other systems and have had no problems.

Non-working: https://github.com/Kashalls/home-cluster/blob/main/cluster/core/system-upgrade/kustomization.yaml
Working: https://github.com/onedr0p/home-cluster/blob/main/cluster/core/system-upgrade/kustomization.yaml

Screenshots and recordings

No response

OS / Distro

Ubuntu 20.04.2 LTS

Flux version

flux version 0.16.2

Flux check

► checking prerequisites
✔ kubectl 1.22.1 >=1.18.0-0
✔ Kubernetes 1.20.5+k3s1 >=1.16.0-0
► checking controllers
✔ helm-controller: deployment ready
► ghcr.io/fluxcd/helm-controller:v0.11.2
✔ kustomize-controller: deployment ready
► ghcr.io/fluxcd/kustomize-controller:v0.13.3
✔ notification-controller: deployment ready
► ghcr.io/fluxcd/notification-controller:v0.15.1
✔ source-controller: deployment ready
► ghcr.io/fluxcd/source-controller:v0.15.4
✔ all checks passed

Git provider

GitHub

Container Registry provider

No response

Additional context

No response

Code of Conduct

  • I agree to follow this project's Code of Conduct
@stefanprodan
Member

I'm guessing you want to apply the manifests dir from Git. Register the repository as a Flux source and create a Flux Kustomization that overrides the image tag like so:

apiVersion: source.toolkit.fluxcd.io/v1beta1
kind: GitRepository
metadata:
  name: system-upgrade-controller
  namespace: flux-system
spec:
  interval: 10m
  url: https://github.com/rancher/system-upgrade-controller
  ref:
    branch: master
---
apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
kind: Kustomization
metadata:
  name: system-upgrade-controller
  namespace: flux-system
spec:
  interval: 5m
  path: "./manifests/"
  prune: true
  sourceRef:
    kind: GitRepository
    name: system-upgrade-controller
  healthChecks:
    - apiVersion: apps/v1
      kind: Deployment
      name: system-upgrade-controller
      namespace: system-upgrade
  timeout: 2m
  images:
  - name: rancher/system-upgrade-controller
    newName: docker.io/rancher/system-upgrade-controller
    newTag: v0.7.5

@onedr0p
Contributor

onedr0p commented Aug 25, 2021

Using a kustomization to pull this in works fine on my end with Flux; I'm really curious why it doesn't for some people.

Sure, you can use a GitRepository and a Flux Kustomization, but it should also work with a plain kustomization, shouldn't it? I've seen other people run into this when deploying metrics-server with a kustomization through Flux, where for me it worked fine.

I'm really confused by this behaviour. It seems like a Flux issue, because @kashalls was able to run kustomize build locally and it rendered out.

@stefanprodan
Member

I'm really confused by this behaviour. It seems like a Flux issue, because @kashalls was able to run kustomize build locally and it rendered out.

Well, resources expects a YAML file, not a whole repo. Have you tried kustomize build locally?

@kashalls
Author

I'm really confused by this behaviour. It seems like a Flux issue, because @kashalls was able to run kustomize build locally and it rendered out.

Well, resources expects a YAML file, not a whole repo. Have you tried kustomize build locally?

It rendered out completely with no issues. The syntax was fine and nothing was out of the ordinary.

@onedr0p
Contributor

onedr0p commented Aug 25, 2021

This works locally

✖ kustomize version
{Version:kustomize/v4.2.0 GitCommit:d53a2ad45d04b0264bcee9e19879437d851cb778 BuildDate:2021-07-01T00:44:28+01:00 GoOs:darwin GoArch:amd64}
---
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - github.com/rancher/system-upgrade-controller?ref=v0.7.5
images:
  - name: rancher/system-upgrade-controller
    newTag: v0.7.5
❯ kustomize build
apiVersion: v1
kind: Namespace
metadata:
  name: system-upgrade
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: system-upgrade
  namespace: system-upgrade
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system-upgrade
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: system-upgrade
  namespace: system-upgrade
---
apiVersion: v1
data:
  SYSTEM_UPGRADE_CONTROLLER_DEBUG: "false"
  SYSTEM_UPGRADE_CONTROLLER_THREADS: "2"
  SYSTEM_UPGRADE_JOB_ACTIVE_DEADLINE_SECONDS: "900"
  SYSTEM_UPGRADE_JOB_BACKOFF_LIMIT: "99"
  SYSTEM_UPGRADE_JOB_IMAGE_PULL_POLICY: Always
  SYSTEM_UPGRADE_JOB_KUBECTL_IMAGE: rancher/kubectl:v1.18.3
  SYSTEM_UPGRADE_JOB_PRIVILEGED: "true"
  SYSTEM_UPGRADE_JOB_TTL_SECONDS_AFTER_FINISH: "900"
  SYSTEM_UPGRADE_PLAN_POLLING_INTERVAL: 15m
kind: ConfigMap
metadata:
  name: default-controller-env
  namespace: system-upgrade
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: system-upgrade-controller
  namespace: system-upgrade
spec:
  selector:
    matchLabels:
      upgrade.cattle.io/controller: system-upgrade-controller
  template:
    metadata:
      labels:
        upgrade.cattle.io/controller: system-upgrade-controller
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: node-role.kubernetes.io/master
                operator: In
                values:
                - "true"
      containers:
      - env:
        - name: SYSTEM_UPGRADE_CONTROLLER_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.labels['upgrade.cattle.io/controller']
        - name: SYSTEM_UPGRADE_CONTROLLER_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        envFrom:
        - configMapRef:
            name: default-controller-env
        image: rancher/system-upgrade-controller:v0.7.5
        imagePullPolicy: IfNotPresent
        name: system-upgrade-controller
        volumeMounts:
        - mountPath: /etc/ssl
          name: etc-ssl
        - mountPath: /etc/pki
          name: etc-pki
        - mountPath: /etc/ca-certificates
          name: etc-ca-certificates
        - mountPath: /tmp
          name: tmp
      serviceAccountName: system-upgrade
      tolerations:
      - key: CriticalAddonsOnly
        operator: Exists
      - effect: NoSchedule
        key: node-role.kubernetes.io/master
        operator: Exists
      - effect: NoSchedule
        key: node-role.kubernetes.io/controlplane
        operator: Exists
      - effect: NoSchedule
        key: node-role.kubernetes.io/control-plane
        operator: Exists
      - effect: NoExecute
        key: node-role.kubernetes.io/etcd
        operator: Exists
      volumes:
      - hostPath:
          path: /etc/ssl
          type: Directory
        name: etc-ssl
      - hostPath:
          path: /etc/pki
          type: DirectoryOrCreate
        name: etc-pki
      - hostPath:
          path: /etc/ca-certificates
          type: DirectoryOrCreate
        name: etc-ca-certificates
      - emptyDir: {}
        name: tmp

@stefanprodan
Member

stefanprodan commented Aug 25, 2021

Ah, OK, there is a kustomization.yaml in the repo root; I hadn't seen that.

@onedr0p
Contributor

onedr0p commented Aug 25, 2021

I was thinking it could be DNS, but his source-controller and helm-controller are able to do DNS lookups. Those seem to be working fine at grabbing things from GitHub.

@stefanprodan
Member

stefanprodan commented Aug 25, 2021

If you don't want to use the Flux objects then switching to https in the config should work:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - https://raw.githubusercontent.com/rancher/system-upgrade-controller/master/manifests/system-upgrade-controller.yaml
images:
  - name: rancher/system-upgrade-controller
    newTag: v0.7.4

@onedr0p
Contributor

onedr0p commented Aug 25, 2021

Good idea! It's not that we don't want to use a different method; it's just a strange issue that I'm having a hard time tracking down when using a kustomization with the GitHub repo. I've seen a few people report this issue, while for others it works fine.

I'm going to debug a bit more with @kashalls and see what we come up with.

Is there any way to enable trace logs in the kustomize controller?

@stefanprodan
Member

kustomize-controller shouldn't clone repos; there are many downsides to doing this: kustomize shells out to git, has no cache, and generates lots of traffic, and if egress is broken then the apply will fail. If you use a GitRepository, the manifests are cached inside the cluster: less Git traffic and better resilience to network outages.

@onedr0p
Contributor

onedr0p commented Aug 25, 2021

It appears this issue is in fact DNS-related; once github.com could be resolved from the kustomize-controller, the error went away. We can close this.

And thanks for the alternative methods, will definitely consider those.

@davinkevin

kustomize-controller shouldn't clone repos; there are many downsides to doing this: kustomize shells out to git, has no cache, and generates lots of traffic, and if egress is broken then the apply will fail. If you use a GitRepository, the manifests are cached inside the cluster: less Git traffic and better resilience to network outages.

Is this still true?

We want to use one Git repo as a registry for all the bases and components we have, and another Git repo for consumption, using git:: in kustomize.
I don't think I'll be able to do this without git::, and it also has the advantage of being compatible with bare kustomize (good for testing).
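
For illustration, kustomize's remote bases use hashicorp/go-getter style URLs, so a git:: reference into a subdirectory of a "registry" repo might look like this (the repository name, path, and ref are hypothetical):

```yaml
# Sketch with hypothetical names: consuming a base from a separate Git repo
# via kustomize's go-getter style git:: scheme, pinned to a ref
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - git::https://github.com/example-org/kustomize-registry.git//bases/system-upgrade?ref=v1.0.0
```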

@kingdonb
Member

kingdonb commented Aug 1, 2022

You can use spec.include to achieve this today with GitRepository.
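
A minimal sketch of what that could look like (repository names and paths are hypothetical):

```yaml
# Sketch with hypothetical names: composing two GitRepository artifacts
# so a Kustomization can build across both without remote bases
apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: GitRepository
metadata:
  name: consumer-repo
  namespace: flux-system
spec:
  interval: 10m
  url: https://github.com/example-org/consumer-repo
  ref:
    branch: main
  include:
    # Copies ./bases from the registry repo's artifact into ./vendor/bases
    - repository:
        name: registry-repo
      fromPath: ./bases
      toPath: ./vendor/bases
```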

I think, as I am seeing it explained now, that in the future these repos would be stitched together in a CI process that clones both repos and publishes the composite as an OCIRepository artifact to deploy from. Then no include will be needed, and no mixed metaphors. (The spec.include feature has some weird edge cases, for example around when and what downstream notifications are issued if a parent repo's hash remains the same but a child repo has been updated: those changes are not applied to the cluster until the included GitRepository is first reconciled, then the including GitRepository, and finally the Kustomization that applies through both.)

This issue has its origin, as I gather, in the fact that a GitRepository only issues a notification downstream if its own hash changes, but an included repo changing its hash does not change the parent repo's hash. This is not a problem when there is one unified artifact to deploy from, so hopefully this gets much better overall with the addition of OCIRepository.

This is a limitation of how Kustomize is used now: it is only meant to execute on a single tree of manifests, with no outside references. The kustomize-controller does not know how to do any of the caching that the source-controller can do; those jobs are centralized in the source-controller on purpose, to simplify the overall design by exercising the single fetching responsibility in one controller that has all the logic needed for efficient fetching and provenance.

It's not only caching, but also provenance. The kustomize-controller is not built or calibrated to verify cryptographic signatures or prove that artifacts are authentic and from an authorized source before they are applied; only the source-controller does that. So as long as that feature is missing from Kustomize upstream, the only way to get verified releases with authenticated manifests (cached appropriately to avoid excessive fetching) will be to centralize this in the source-controller.

Please note the suggestion of using no remote bases. If you find this pattern is heavily used and you need to prohibit it administratively, to take advantage of the caching and provenance, you can follow the guidance in the cheat sheet:

https://fluxcd.io/docs/cheatsheets/bootstrap/#multi-tenancy-lockdown

We need to be sure this use case is adequately served by our features and documentation. Please stay tuned, and try the new OCIRepository features when they come out. This new feature, due out in the next minor release, permits Flux users to work in straightforward ways that don't cross trust boundaries. We're pretty sure it's going to open some doors.
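
As a sketch, the OCIRepository source mentioned above could look roughly like this once released (registry URL and tag are hypothetical):

```yaml
# Sketch with hypothetical names: deploying from a single composite OCI artifact
apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: OCIRepository
metadata:
  name: composite-manifests
  namespace: flux-system
spec:
  interval: 10m
  url: oci://ghcr.io/example-org/composite-manifests
  ref:
    tag: v1.0.0
```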
