[stable/nextcloud] Image stuck at Initializing NextCloud... when PVC is attached #22920
I have also been trying to get this install to work with a PV and PVC, with no luck. It works if I do it without a PV and PVC, but as soon as I enable the PV it says the nextcloud directory isn't found, so I created the directory. Then it says `Error: failed to create subPath directory for volumeMount "nextcloud-data" of container "nextcloud"`. Does anyone have any ideas about this?
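For readers reconstructing this setup: "enabling the PV" here presumably means pre-provisioning a PersistentVolume for the chart's claim to bind to. A hypothetical NFS-backed example (the name, size, server, and path are all illustrative, not taken from the comment):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nextcloud-pv
spec:
  capacity:
    storage: 8Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: 192.168.1.10     # hypothetical NFS server
    path: /export/nextcloud  # hypothetical export path
```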
I am having the same issue. Have you figured out how to make this work?
Not sure if we are having the same issue, but I will detail my investigation so far. From what I could see, the container creation process errors out with the same subPath error quoted above.

I've looked on the node during the time of the directory creation. The only lead I've found so far as to why this might be happening is kubernetes/kubernetes#61545 (comment), and the following comment links kubernetes/kubernetes#61563 (comment). My guess is that this is related to the second issue in the last comment (i.e. kubernetes/kubernetes#61545), given that the config mounts are nested inside the directory mount and the error is on the subPath. I'm currently poking at it by manually changing the specifications to see if any configuration works (i.e. trying different variations of the mount-path nesting to see if I can get it to start up manually before figuring out how to correct the chart). In the meantime, if anyone else finds a solution, or if it seems I'm going down the wrong trail, please let me know!

Update: it is not the configmap causing this in my case; it's the nested mounts: https://github.com/helm/charts/blob/master/stable/nextcloud/templates/deployment.yaml#L289

Additionally, the problem only appears after the first restart: the first time, the mounting succeeds, but once things get written to the volumes and the container restarts, the bind mounts fail for the new container with the above error. This might be specific to our storage class (we're using an RClone CSI driver that FUSE-mounts an S3 bucket) and different from yours; I haven't tried it with an NFS layer on top yet to confirm. This does seem to be different from what you're seeing, though (sorry for hijacking your issue).

In case this comes up for anyone else: the current workaround is keeping only the root directory mount (which is enough to back up everything else, since it is all nested inside), and that seems to fix the problem.
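For context, the nesting the update refers to looks roughly like this (paraphrased from the chart's deployment template linked above, not a verbatim excerpt; the subPath names are illustrative):

```yaml
volumeMounts:
  - name: nextcloud-data
    mountPath: /var/www/html        # root mount
    subPath: root
  - name: nextcloud-data
    mountPath: /var/www/html/data   # nested inside the root mount
    subPath: data
  - name: nextcloud-data
    mountPath: /var/www/html/config # also nested
    subPath: config
```

The workaround above amounts to keeping only the first of these mounts.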
Okay, I got it working! I am using an OpenMediaVault NFS share for all of my persistent volumes. I set them up with the following settings, and it now works without any issues with a regular helm install, no extra steps required.

Settings for the NFS share:
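The original share settings did not come through in this copy. As a rough illustration only (the client subnet, path, and options are assumptions, not the commenter's confirmed settings), an export that typically lets the container chown files during initialization looks like:

```
# /etc/exports (hypothetical): no_root_squash lets the container's root user
# chown files to www-data during the initial copy
/export/nextcloud 192.168.1.0/24(rw,no_subtree_check,no_root_squash)
```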
It also works with

Specific values we're using:

and

I'll open a separate issue for the
Tried changing the line in my
Using the following snippets:

nfs-client-provisioner.values.yaml:

```yaml
nfs:
  mountOptions:
    - nfsvers=4
  server: 172.16.0.1
  path: /mnt/external
```

I updated my nextcloud values with the new value; it also didn't work. I have the following directories in my volume:
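The "new value" above was lost in this copy, but it most plausibly pointed the chart's persistence at the provisioner's storage class. A minimal sketch, assuming the nfs-client-provisioner's default class name nfs-client (an assumption, not confirmed by the comment):

```yaml
# nextcloud values (sketch): bind the chart's PVC to the NFS-backed class
persistence:
  enabled: true
  storageClass: nfs-client
  size: 8Gi
```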
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Any further update will cause the issue/pull request to no longer be considered stale. Thank you for your contributions.
Got the same problem. Tested with versions 17.0.0-apache and 19.0.1-apache. Also seeing that the directories are owned by root:root.
Using nfs-client-provisioner works, but the main problem is that the initial rsync takes around 5 minutes to complete (at least in my tests using GCP Filestore). You can look at the entrypoint.sh file:

```sh
rsync -rlDog --chown www-data:root --delete --exclude-from=/upgrade.exclude /usr/src/nextcloud/ /var/www/html/
```

If you disable the readiness and the liveness probes in the values, it works.

❯ k logs nextcloud-5756597dbc-nhg5m

I've tried some alternatives to that rsync, but since there are a lot of small files to copy, I haven't found any improvement. Any ideas?
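For reference, disabling the probes through the chart values looks roughly like this (a sketch: the stable/nextcloud chart exposes enabled flags under livenessProbe and readinessProbe, but verify against the values.yaml of your chart version):

```yaml
livenessProbe:
  enabled: false   # skip liveness checks while the initial rsync runs
readinessProbe:
  enabled: false
```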
The log looks stuck at `Initializing NextCloud...`. As a workaround, like jesussancheztellomm, I disable the liveness probe on the first installation and enable it after the installation finishes. Maybe we can refer to nextcloud/docker#968.
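Re-enabling the probe after the initial sync can be done with a plain helm upgrade; a sketch assuming a release named nextcloud and the probe values shown above:

```sh
# flip the probes back on once the initial copy has finished
helm upgrade nextcloud stable/nextcloud -f values.yaml \
  --set livenessProbe.enabled=true \
  --set readinessProbe.enabled=true
```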
@timtorChen I can confirm: with the liveness probe disabled, it took 11 minutes to sync. I also tried it with an S3 storage backend, and it took just seconds to sync. So I looked deeper into my NFS setup, and we are using sync instead of async because we do not want to lose any data. I didn't test it with an async connection.
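The sync/async choice is an NFS export option rather than a chart setting. A hypothetical /etc/exports entry (reusing the server subnet and path from the snippet further up purely for illustration) showing where the trade-off lives:

```
# "sync" makes the server commit every write before replying: safe, but slow
# for the thousands of small files the initial rsync copies; "async" replies
# before data reaches disk and is much faster, at some risk of loss.
/mnt/external 172.16.0.0/24(rw,sync,no_subtree_check)
```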
The nextcloud chart has migrated to a new repo. Can you please raise the issue over there? https://github.com/nextcloud/helm
Opened an issue over on the new repo. Tried to summarize some of the info from this discussion.
This issue is being automatically closed due to inactivity.
Describe the bug
When the helm chart is bringing up Nextcloud, the application does not get past the log message `Initializing NextCloud...`.
Version of Helm and Kubernetes:
helm: v3.2.1
kubernetes: v1.18.4+k3s1
Which chart:
stable/nextcloud
What happened:
- The namespace is created.
- Helm creates the PersistentVolumeClaim.
- Helm instantiates MariaDB using the bitnami/mariadb chart.
- Helm instantiates the Nextcloud container.
- The Nextcloud container starts.
- The Nextcloud container does not get past `Initializing NextCloud...`.
What you expected to happen:
- Nextcloud was supposed to finish initialization.
- Nextcloud files were supposed to be copied, with correct permissions, to the PVC.
How to reproduce it (as minimally and precisely as possible):
Initialize helm with the following values.yaml:
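The reporter's actual values.yaml did not survive this copy. A hypothetical minimal reconstruction of the setup described above (every value here is an assumption):

```yaml
# sketch only; reconstructs the described setup (PVC created, bundled
# MariaDB used), not the reporter's real file
persistence:
  enabled: true      # creates the PVC from the issue title
internalDatabase:
  enabled: false
mariadb:
  enabled: true      # bitnami/mariadb subchart, as described above
```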