kustomize build failed: accumulating resources is a directory #1749
Comments
I'm guessing you want to apply the following:

```yaml
apiVersion: source.toolkit.fluxcd.io/v1beta1
kind: GitRepository
metadata:
  name: system-upgrade-controller
  namespace: flux-system
spec:
  interval: 10m
  url: https://github.com/rancher/system-upgrade-controller
  ref:
    branch: master
---
apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
kind: Kustomization
metadata:
  name: system-upgrade-controller
  namespace: flux-system
spec:
  interval: 5m
  path: "./manifests/"
  prune: true
  sourceRef:
    kind: GitRepository
    name: system-upgrade-controller
  healthChecks:
    - apiVersion: apps/v1
      kind: Deployment
      name: system-upgrade-controller
      namespace: system-upgrade
  timeout: 2m
  images:
    - name: rancher/system-upgrade-controller
      newName: docker.io/rancher/system-upgrade-controller
      newTag: v0.7.5
```
Using a kustomization to pull this in works fine on my end with Flux, so I'm really curious why it doesn't for some people. Sure, you can use a GitRepository and a Flux Kustomization, but it should also work with a plain kustomization. I've seen other people run into this when deploying metrics-server with a kustomization through Flux, where for me it worked fine. I'm really confused by this behaviour. It seems like a Flux issue, because @kashalls was able to run
Well,
It rendered out completely with no issues. Syntax was fine and nothing out of the ordinary.
This works locally:

```
❯ kustomize version
{Version:kustomize/v4.2.0 GitCommit:d53a2ad45d04b0264bcee9e19879437d851cb778 BuildDate:2021-07-01T00:44:28+01:00 GoOs:darwin GoArch:amd64}
```

```yaml
# kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - github.com/rancher/system-upgrade-controller?ref=v0.7.5
images:
  - name: rancher/system-upgrade-controller
    newTag: v0.7.5
```

```
❯ kustomize build
apiVersion: v1
kind: Namespace
metadata:
  name: system-upgrade
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: system-upgrade
  namespace: system-upgrade
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system-upgrade
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: system-upgrade
    namespace: system-upgrade
---
apiVersion: v1
data:
  SYSTEM_UPGRADE_CONTROLLER_DEBUG: "false"
  SYSTEM_UPGRADE_CONTROLLER_THREADS: "2"
  SYSTEM_UPGRADE_JOB_ACTIVE_DEADLINE_SECONDS: "900"
  SYSTEM_UPGRADE_JOB_BACKOFF_LIMIT: "99"
  SYSTEM_UPGRADE_JOB_IMAGE_PULL_POLICY: Always
  SYSTEM_UPGRADE_JOB_KUBECTL_IMAGE: rancher/kubectl:v1.18.3
  SYSTEM_UPGRADE_JOB_PRIVILEGED: "true"
  SYSTEM_UPGRADE_JOB_TTL_SECONDS_AFTER_FINISH: "900"
  SYSTEM_UPGRADE_PLAN_POLLING_INTERVAL: 15m
kind: ConfigMap
metadata:
  name: default-controller-env
  namespace: system-upgrade
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: system-upgrade-controller
  namespace: system-upgrade
spec:
  selector:
    matchLabels:
      upgrade.cattle.io/controller: system-upgrade-controller
  template:
    metadata:
      labels:
        upgrade.cattle.io/controller: system-upgrade-controller
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: node-role.kubernetes.io/master
                    operator: In
                    values:
                      - "true"
      containers:
        - env:
            - name: SYSTEM_UPGRADE_CONTROLLER_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.labels['upgrade.cattle.io/controller']
            - name: SYSTEM_UPGRADE_CONTROLLER_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          envFrom:
            - configMapRef:
                name: default-controller-env
          image: rancher/system-upgrade-controller:v0.7.5
          imagePullPolicy: IfNotPresent
          name: system-upgrade-controller
          volumeMounts:
            - mountPath: /etc/ssl
              name: etc-ssl
            - mountPath: /etc/pki
              name: etc-pki
            - mountPath: /etc/ca-certificates
              name: etc-ca-certificates
            - mountPath: /tmp
              name: tmp
      serviceAccountName: system-upgrade
      tolerations:
        - key: CriticalAddonsOnly
          operator: Exists
        - effect: NoSchedule
          key: node-role.kubernetes.io/master
          operator: Exists
        - effect: NoSchedule
          key: node-role.kubernetes.io/controlplane
          operator: Exists
        - effect: NoSchedule
          key: node-role.kubernetes.io/control-plane
          operator: Exists
        - effect: NoExecute
          key: node-role.kubernetes.io/etcd
          operator: Exists
      volumes:
        - hostPath:
            path: /etc/ssl
            type: Directory
          name: etc-ssl
        - hostPath:
            path: /etc/pki
            type: DirectoryOrCreate
          name: etc-pki
        - hostPath:
            path: /etc/ca-certificates
            type: DirectoryOrCreate
          name: etc-ca-certificates
        - emptyDir: {}
          name: tmp
```
Ah, ok, there is a kustomization.yaml in the repo root; I hadn't seen that.
I was thinking it could be DNS, but his source-controller and helm-controller are able to do DNS lookups. Those seem to be working fine at grabbing things from GitHub.
If you don't want to use the Flux objects, then switching to https in the config should work:

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - https://raw.githubusercontent.com/rancher/system-upgrade-controller/master/manifests/system-upgrade-controller.yaml
images:
  - name: rancher/system-upgrade-controller
    newTag: v0.7.4
```
Good idea! It's not that we don't want to use a different method; it's just a strange issue that I'm having a hard time tracking down when using a kustomization with the GitHub repo. I've seen a few people report this issue, while for others it seems fine. I'm going to debug a bit more with @kashalls and see what we come up with. Is there any way to enable trace logs in the kustomize-controller?
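For later readers, here is a minimal sketch of one way to raise kustomize-controller's verbosity, done as a patch in the bootstrap `flux-system/kustomization.yaml`. The `--log-level=debug` flag and the patch mechanism are assumptions based on how Flux controllers are commonly configured, so check the docs for your Flux version:

```yaml
# flux-system/kustomization.yaml (sketch; the flag value is an assumption)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - gotk-components.yaml
  - gotk-sync.yaml
patches:
  - patch: |
      # Append --log-level=debug to the controller's container args
      - op: add
        path: /spec/template/spec/containers/0/args/-
        value: --log-level=debug
    target:
      kind: Deployment
      name: kustomize-controller
```

After Flux reconciles this, the controller's logs should include debug-level entries.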
kustomize-controller shouldn't clone repos; there are many downsides to doing this: kustomize shells out to git, has no cache, and generates lots of traffic, and if egress is broken then the apply will fail. If you use a
It appears this issue is in fact DNS-related: once GitHub could be resolved in the kustomize-controller, the error went away. We can close this. And thanks for the alternative methods; we will definitely consider those.
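For anyone debugging the same symptom, one way to check whether in-cluster DNS can resolve GitHub is a throwaway pod (the pod name and image below are arbitrary choices; the Flux controller images are distroless, so you can't exec `nslookup` inside them directly):

```yaml
# Sketch: one-off pod that tries to resolve github.com from inside the cluster.
# Read the result afterwards with: kubectl logs dns-check
apiVersion: v1
kind: Pod
metadata:
  name: dns-check
spec:
  restartPolicy: Never
  containers:
    - name: dns-check
      image: busybox:1.36
      command: ["nslookup", "github.com"]
```

If the lookup fails here, the problem is cluster DNS or egress rather than Flux itself.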
Is this still true? We want to use a git repo as a
You can use it, I think. As I am seeing it explained now, in the future these repos would be stitched together in a CI process that clones both repos and publishes the composite to deploy as an OCIRepository artifact. Then no include will be needed, and no mixed metaphors.

This issue has its origin, as I gather, in the fact that a GitRepository only issues a notification downstream when its hash changes, but an included repo changing its hash does not change the parent git repo's hash. This is not a problem when there is one unified artifact to deploy from, so hopefully this gets much better overall with the addition of OCIRepository.

This is a limitation of Kustomize as it is used now: it is only meant to execute on a single tree of manifests, with no outside references. The kustomize-controller does not know how to do any of the caching things that the source-controller can do; those jobs are centralized in source-controller on purpose, to simplify the overall design by exercising the single fetching responsibility in one controller that has all the logic needed for efficient fetching and provenance.

It's not only caching, but also provenance. The kustomize-controller is not built or being calibrated to analyze cryptographic signatures or prove that artifacts are authentic and from an authorized source before they can be applied; only the source-controller does that. So as long as that feature is missing from Kustomize upstream, the only way to get verified releases with authenticated manifests (cached appropriately to avoid excessive fetching) will be to centralize this in source-controller.

Please note the suggestion of using multi-tenancy lockdown: https://fluxcd.io/docs/cheatsheets/bootstrap/#multi-tenancy-lockdown

We need to be sure this use case is adequately served by our features and documentation. Please stay tuned, and try making use of the new OCIRepository feature when it comes out. This new feature, due out in the next minor release, permits Flux users to work in straightforward ways that don't cross trust boundaries. We're pretty sure it's going to open some doors.
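The workflow described above could look roughly like the sketch below once OCIRepository lands. The API versions, field names, and the `oci://ghcr.io/example/...` image are assumptions based on the feature as described in this thread, so verify against the release notes:

```yaml
# Sketch (assumed API shape): CI publishes the composite manifests as an OCI
# artifact; source-controller fetches and verifies it, kustomize-controller applies it.
apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: OCIRepository
metadata:
  name: system-upgrade-controller
  namespace: flux-system
spec:
  interval: 10m
  url: oci://ghcr.io/example/system-upgrade-manifests  # hypothetical artifact
  ref:
    tag: v0.7.5
---
apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
kind: Kustomization
metadata:
  name: system-upgrade-controller
  namespace: flux-system
spec:
  interval: 5m
  path: "./"
  prune: true
  sourceRef:
    kind: OCIRepository
    name: system-upgrade-controller
```

Because there is a single artifact, a new tag on the OCI image is enough to trigger a downstream reconcile, which sidesteps the parent-hash problem described above.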
Describe the bug
Unable to use a kustomization on a GitHub resource.
We've tried using different styles of URLs:
It seems to happen only in some environments. Perhaps I am missing a dependency? I am not sure, as I am quite new to this.
Steps to reproduce

```
flux --kubeconfig=./kubeconfig get sources git
kubectl --kubeconfig=./kubeconfig get kustomization -A
```

The latter reports that flux-system core could not build from github `cluster/core/system-upgrade/kustomization.yaml`; flux log error:
Expected behavior
It should just work, as other people have done this on other systems and have had no problems.
Non-working: https://github.com/Kashalls/home-cluster/blob/main/cluster/core/system-upgrade/kustomization.yaml
Working: https://github.com/onedr0p/home-cluster/blob/main/cluster/core/system-upgrade/kustomization.yaml
Screenshots and recordings
No response
OS / Distro
Ubuntu 20.04.2 LTS
Flux version
flux version 0.16.2
Flux check
► checking prerequisites
✔ kubectl 1.22.1 >=1.18.0-0
✔ Kubernetes 1.20.5+k3s1 >=1.16.0-0
► checking controllers
✔ helm-controller: deployment ready
► ghcr.io/fluxcd/helm-controller:v0.11.2
✔ kustomize-controller: deployment ready
► ghcr.io/fluxcd/kustomize-controller:v0.13.3
✔ notification-controller: deployment ready
► ghcr.io/fluxcd/notification-controller:v0.15.1
✔ source-controller: deployment ready
► ghcr.io/fluxcd/source-controller:v0.15.4
✔ all checks passed
Git provider
Github
Container Registry provider
No response
Additional context
No response