
git-sync with nfs-volume #839

Open · ZackovichZ opened this issue Nov 7, 2023 · 9 comments

@ZackovichZ commented Nov 7, 2023

I'm trying to deploy git-sync using an NFS share, but for some reason, with this type of volume, the container crashes on startup.

manifest:

 apiVersion: apps/v1
 kind: Deployment
 metadata:
   name: dev-test
   namespace: test
 spec:
   template:
     metadata:
     spec:
       containers:
       - env:
         - name: GITSYNC_REPO
           value: <any repo>
         - name: GITSYNC_REF
           value: master
         - name: GITSYNC_ROOT
           value: /git
         - name: GITSYNC_LINK
           value: www
         - name: GITSYNC_PERIOD
           value: 2s
         - name: GIT_DISCOVERY_ACROSS_FILESYSTEM
           value: "1"
         envFrom:
         - secretRef:
             name: git-env
         image: git-sync:v4.1.0
         imagePullPolicy: IfNotPresent
         name: git-sync
         resources: {}
         securityContext:
           readOnlyRootFilesystem: false
           runAsUser: 65533
         volumeMounts:
         - mountPath: /git
           name: dir
         workingDir: /git
       securityContext:
         fsGroup: 101
       volumes:
       - name: dir
         persistentVolumeClaim:
           claimName: test-pvc

Error output:

{"logger":"","ts":"2023-11-07 16:25:25.827325","caller":{"file":"main.go","line":1155},"msg":"can't get repo toplevel","error":"Run(git rev-parse --show-toplevel): exit status 128: { stdout: \"\", stderr: \"fatal: not a git repository (or any of the parent directories): .git\" }","path":"/git"}
{"logger":"","ts":"2023-11-07 16:25:25.827405","caller":{"file":"main.go","line":1079},"level":0,"msg":"repo directory was empty or failed checks","path":"/git"}
{"logger":"","ts":"2023-11-07 16:25:25.829354","caller":{"file":"main.go","line":784},"msg":"too many failures, aborting","error":"can't wipe unusable root directory: unlinkat /git/.snapshot: read-only file system","failCount":1}

I have no idea what could be causing this. Other containers mount the NFS share without problems. If I switch to the VMware CSI driver or an emptyDir volume, there are no errors.

Can anyone help with this?
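
For reference, the working emptyDir variant is just the volumes section swapped out, keeping the same volume name as in the manifest above:

        volumes:
        - name: dir
          emptyDir: {}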

@thockin (Member) commented Nov 7, 2023

Can you run with -v 6 and post logs?

@ZackovichZ (Author) commented

> Can you run with -v 6 and post logs?

Hi! Did I understand correctly that I needed to use v=6 when creating the deployment?

I1108 10:27:12.952529 2764757 loader.go:374] Config loaded from file:  /root/.kube/config
I1108 10:27:13.019820 2764757 round_trippers.go:553] GET https://any_ip:6443/openapi/v2?timeout=32s 200 OK in 66 milliseconds
I1108 10:27:13.223116 2764757 round_trippers.go:553] GET https://any_ip:6443/apis/apps/v1/namespaces/test/deployments/dev-test 404 Not Found in 2 milliseconds
I1108 10:27:13.225027 2764757 round_trippers.go:553] GET https://any_ip:6443/api/v1/namespaces/test 200 OK in 1 milliseconds
I1108 10:27:13.231702 2764757 round_trippers.go:553] POST https://any_ip:6443/apis/apps/v1/namespaces/test/deployments?fieldManager=kubectl-client-side-apply&fieldValidation=Strict 201 Created in 6 milliseconds
deployment.apps/dev-test created
I1108 10:27:13.233515 2764757 apply.go:466] Running apply post-processor function

@ZackovichZ (Author) commented

I think I may know the reason. NFS seems to create a hidden file that cannot be deleted. Is it possible to force git-sync to ignore a non-empty folder?

@thockin (Member) commented Nov 8, 2023

I meant to add -v 6 to the git-sync args.
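
A minimal sketch of what that looks like in the Deployment above, passing the flag via the container's args (v4 should also accept this as the $GITSYNC_VERBOSE environment variable):

          args:
          - -v
          - "6"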

As for the NFS share having an unremovable file (which the -v 6 logs will show!), there is currently no answer except to use a subdirectory of the volume (see the sketch below). git-sync v3 used git clone, which simply could not work this way. v4 MIGHT be able to tolerate it; I will have to do some digging.
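
A minimal sketch of that subdirectory workaround, using subPath with a hypothetical subdirectory name; Kubernetes then mounts only that subdirectory, so files the NFS server creates at the volume root never appear under /git:

          volumeMounts:
          - mountPath: /git
            name: dir
            subPath: git-sync   # hypothetical subdirectory of the NFS volume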

@ZackovichZ (Author) commented

{"logger":"","ts":"2023-11-09 06:22:33.466130","caller":{"file":"main.go","line":593},"level":2,"msg":"created private gitconfig file","path":"/tmp/git-sync.gitconfig.373034938"}
{"logger":"","ts":"2023-11-09 06:22:33.466252","caller":{"file":"main.go","line":1907},"level":5,"msg":"running command","cwd":"","cmd":"git config --global gc.autoDetach false"}
{"logger":"","ts":"2023-11-09 06:22:33.470862","caller":{"file":"main.go","line":1907},"level":6,"msg":"command result","stdout":"","stderr":"","time":"4.536274ms"}
{"logger":"","ts":"2023-11-09 06:22:33.470922","caller":{"file":"main.go","line":1907},"level":5,"msg":"running command","cwd":"","cmd":"git config --global gc.pruneExpire now"}
{"logger":"","ts":"2023-11-09 06:22:33.474800","caller":{"file":"main.go","line":1907},"level":6,"msg":"command result","stdout":"","stderr":"","time":"3.820255ms"}
{"logger":"","ts":"2023-11-09 06:22:33.474859","caller":{"file":"main.go","line":1907},"level":5,"msg":"running command","cwd":"","cmd":"git config --global credential.helper \"cache --timeout 3600\""}
{"logger":"","ts":"2023-11-09 06:22:33.478735","caller":{"file":"main.go","line":1907},"level":6,"msg":"command result","stdout":"","stderr":"","time":"3.792259ms"}
{"logger":"","ts":"2023-11-09 06:22:33.478935","caller":{"file":"main.go","line":1907},"level":5,"msg":"running command","cwd":"","cmd":"git config --global core.askPass true"}
{"logger":"","ts":"2023-11-09 06:22:33.482996","caller":{"file":"main.go","line":1907},"level":6,"msg":"command result","stdout":"","stderr":"","time":"3.832083ms"}
{"logger":"","ts":"2023-11-09 06:22:33.483044","caller":{"file":"main.go","line":1907},"level":5,"msg":"running command","cwd":"","cmd":"git config --global safe.directory *"}
{"logger":"","ts":"2023-11-09 06:22:33.486971","caller":{"file":"main.go","line":1907},"level":6,"msg":"command result","stdout":"","stderr":"","time":"3.737706ms"}
{"logger":"","ts":"2023-11-09 06:22:33.487045","caller":{"file":"main.go","line":1776},"level":1,"msg":"setting up git SSH credentials"}
{"logger":"","ts":"2023-11-09 06:22:33.487138","caller":{"file":"main.go","line":1578},"level":3,"msg":"syncing","repo":"https://<url>.git"}
{"logger":"","ts":"2023-11-09 06:22:33.487185","caller":{"file":"main.go","line":1763},"level":1,"msg":"storing git credential","url":"https://<url>.git"}
{"logger":"","ts":"2023-11-09 06:22:33.487281","caller":{"file":"main.go","line":1767},"level":5,"msg":"running command","cwd":"","cmd":"git credential approve"}
{"logger":"","ts":"2023-11-09 06:22:33.500747","caller":{"file":"main.go","line":1767},"level":6,"msg":"command result","stdout":"","stderr":"","time":"13.390876ms"}
{"logger":"","ts":"2023-11-09 06:22:33.500806","caller":{"file":"main.go","line":1071},"level":3,"msg":"repo directory exists","path":"/git"}
{"logger":"","ts":"2023-11-09 06:22:33.500829","caller":{"file":"main.go","line":1143},"level":3,"msg":"sanity-checking git repo","repo":"/git"}
{"logger":"","ts":"2023-11-09 06:22:33.501888","caller":{"file":"main.go","line":1154},"level":5,"msg":"running command","cwd":"/git","cmd":"git rev-parse --show-toplevel"}
{"logger":"","ts":"2023-11-09 06:22:33.505298","caller":{"file":"main.go","line":1155},"msg":"can't get repo toplevel","error":"Run(git rev-parse --show-toplevel): exit status 128: { stdout: \"\", stderr: \"fatal: not a git repository (or any of the parent directories): .git\" }","path":"/git"}
{"logger":"","ts":"2023-11-09 06:22:33.505368","caller":{"file":"main.go","line":1079},"level":0,"msg":"repo directory was empty or failed checks","path":"/git"}
{"logger":"","ts":"2023-11-09 06:22:33.505755","caller":{"file":"main.go","line":1251},"level":4,"msg":"removing path recursively","path":"/git/.snapshot","isDir":true}
{"logger":"","ts":"2023-11-09 06:22:33.507331","caller":{"file":"main.go","line":784},"msg":"too many failures, aborting","error":"can't wipe unusable root directory: unlinkat /git/.snapshot: read-only file system","failCount":1}

@ZackovichZ (Author) commented

I figured out that the problem was actually related to a hidden file created by the NFS server. We managed to hide this file and the container started working. If it is possible to resolve the issue of ignoring files in a directory within this thread, let's continue; if not, the issue can be closed. Thank you very much for your help!

@thockin (Member) commented Nov 18, 2023

Got a moment to play. v4 can survive this situation. Open questions:

  • Do we just not wipe it at all?
  • Do we take a flag indicating which files are allowed to fail wipe?
  • Do we just put the repo into a subdir?
  • How do I e2e-test this (can't chattr as a non-root user, and chmod 000 is overridden by the remove function)? One possible reproduction is sketched below.
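
One possible way to approximate the unremovable file in an e2e test, without chattr or root: mount a second, read-only volume at a path inside the sync root. This is only a sketch; attempts to wipe /git then fail on /git/.snapshot, roughly like the NFS case:

  apiVersion: v1
  kind: Pod
  metadata:
    name: wipe-test               # hypothetical test pod
  spec:
    containers:
    - name: git-sync
      image: git-sync:v4.1.0
      env:
      - name: GITSYNC_REPO
        value: <any repo>
      - name: GITSYNC_ROOT
        value: /git
      volumeMounts:
      - name: root
        mountPath: /git
      - name: snapshot            # read-only mount point inside the root;
        mountPath: /git/.snapshot # git-sync cannot remove it when wiping /git
        readOnly: true
    volumes:
    - name: root
      emptyDir: {}
    - name: snapshot
      emptyDir: {}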

@k8s-triage-robot commented

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Feb 16, 2024
@thockin thockin removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Feb 16, 2024
@k8s-triage-robot commented

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label May 16, 2024
@thockin thockin removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label May 17, 2024