AKS - Stateful set can not use pre-existing storage resources #1906
Hi PaulCharlton, AKS bot here 👋 I might be just a bot, but I'm told my suggestions are normally quite good, as such:
Triage required from @Azure/aks-pm
Action required from @Azure/aks-pm
Noting that this issue is being worked on in MSFT ticket 120101821000253.
@andyzhangx your thoughts and insights would be much appreciated. The situation persists even after forcing the mount option `- nosharesock`.
It should work; here is an example I used:
```yaml
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-azurefile
spec:
  accessModes:
    - ReadWriteMany
  azureFile:
    secretName: azure-storage-account-accountname-secret
    secretNamespace: default
    shareName: test
  capacity:
    storage: 10Gi
  persistentVolumeReclaimPolicy: Retain
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: persistent-storage-statefulset-azurefile-0
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  volumeName: pv-azurefile
  storageClassName: ""
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: statefulset-azurefile
  labels:
    app: nginx
spec:
  serviceName: statefulset-azurefile
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      nodeSelector:
        "kubernetes.io/os": linux
      containers:
        - name: statefulset-azurefile
          image: mcr.microsoft.com/oss/nginx/nginx:1.17.3-alpine
          command:
            - "/bin/sh"
            - "-c"
            - while true; do echo $(date) >> /mnt/azurefile/outfile; sleep 1; done
          volumeMounts:
            - name: persistent-storage
              mountPath: /mnt/azurefile
  updateStrategy:
    type: RollingUpdate
  selector:
    matchLabels:
      app: nginx
  volumeClaimTemplates:
    - metadata:
        name: persistent-storage
        annotations:
          volume.beta.kubernetes.io/storage-class: azurefile
      spec:
        accessModes: ["ReadWriteMany"]
        resources:
          requests:
            storage: 10Gi
```
```
# kubectl exec -it statefulset-azurefile-0 sh
/ # df -h
Filesystem                Size      Used Available Use% Mounted on
overlay                 123.9G     13.8G    110.0G  11% /
tmpfs                    64.0M         0     64.0M   0% /dev
tmpfs                     3.4G         0      3.4G   0% /sys/fs/cgroup
//accountname.file.core.windows.net/test
                         10.0G     64.0K     10.0G   0% /mnt/azurefile
```
Your case could be due to an incorrect account key in the secret; you could follow my example and try again, thanks.
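To rule out a bad key, it may help to decode what is actually stored in the Secret and compare it to the storage account key. A sketch: the secret name and key field below are the ones from the example manifest above and may differ in your cluster, and the `kubectl` line needs cluster access, so it is shown commented out.

```shell
# Assumed names from the example above; substitute your own secret name.
# Requires cluster access, so shown commented out:
#
#   kubectl get secret azure-storage-account-accountname-secret \
#       --namespace default \
#       -o jsonpath='{.data.azurestorageaccountkey}' | base64 -d
#
# Secret values are base64-encoded; the decode step by itself looks like this:
printf 'bXlrZXk=' | base64 -d   # decodes to "mykey"
```

If the decoded value does not match the key shown under the storage account in the Azure portal, recreate the secret before retrying the mount.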
@andyzhangx the stored secret is correct and allows Azure CIFS access when I extract it manually from the K8s Secret resource. I cannot accept closing this issue until I see a solution working with the GitLab Helm chart or the PostgreSQL Helm chart.
@andyzhangx please reopen this issue, or I will need to open a duplicate.
@andyzhangx and, as I said, "The same PVC works fine on pods launched with a 'deployment' / 'replica set' controller," which means that the secret is correct.
Do you have a repro / steps we can use? You mentioned you're using the Postgres Helm chart? Could you share the arguments as well?
This issue has been automatically marked as stale because it has not had any activity for 60 days. It will be closed if no further activity occurs within 15 days of this comment.
This issue will now be closed because it hasn't had any activity for 15 days after being marked stale. PaulCharlton, feel free to comment again within the next 7 days to reopen it, or open a new issue after that time if you still have a question, issue, or suggestion.
Tried the same configuration with the Redis, PostgreSQL, MinIO, and other Helm charts that use a StatefulSet controller.
The auto-provisioning of volumes works with the "default" StorageClass, but does not have the desired lifecycle.
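If the objection to the default class is mainly its reclaim policy, one option is a custom StorageClass that retains the underlying share when the PVC is deleted. This is a sketch, assuming the in-tree `kubernetes.io/azure-file` provisioner available on this cluster version; the class name and `skuName` are illustrative:

```yaml
# Sketch: dynamically provisioned Azure Files shares survive PVC deletion
# because reclaimPolicy is Retain instead of the default Delete.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: azurefile-retain
provisioner: kubernetes.io/azure-file
reclaimPolicy: Retain
mountOptions:
  - mfsymlinks
  - nosharesock
parameters:
  skuName: Standard_LRS
```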
Manual provisioning of Azure Files and/or Azure Disks, followed by manually creating the PV, Secret, and PVC, shows that the PVC binds to the PV correctly, but the mount stage fails when the containers (including init containers) in the pod attempt to start.
The same PVC works fine on pods launched with a "deployment" / "replica set" controller.
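For comparison, a minimal Deployment consuming the same pre-created PVC would look like the following sketch (PVC and image names reused from the maintainer's example above; adjust to your own resources):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-azurefile
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-deploy
  template:
    metadata:
      labels:
        app: nginx-deploy
    spec:
      containers:
        - name: nginx
          image: mcr.microsoft.com/oss/nginx/nginx:1.17.3-alpine
          volumeMounts:
            - name: data
              mountPath: /mnt/azurefile
      volumes:
        - name: data
          persistentVolumeClaim:
            # The same pre-created PVC that fails under the StatefulSet
            claimName: persistent-storage-statefulset-azurefile-0
```

If this Deployment mounts the share while the StatefulSet does not, that narrows the problem to how the StatefulSet's volumeClaimTemplates resolve to the pre-created PVC.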
```
Warning  FailedMount  10m  kubelet  MountVolume.SetUp failed for volume "gl-az-files-gitlab-postgresql-state-gitlab-pv" : mount failed: exit status 32
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/09be2f32-b1fc-4516-a596-b919b9579778/volumes/kubernetes.io~azure-file/gl-az-files-gitlab-postgresql-state-gitlab-pv --scope -- mount -t cifs -o mfsymlinks,file_mode=0777,dir_mode=0777,vers=3.0, //abl02gitlab001sa.file.core.windows.net/gitlab-postgresql-state /var/lib/kubelet/pods/09be2f32-b1fc-4516-a596-b919b9579778/volumes/kubernetes.io~azure-file/gl-az-files-gitlab-postgresql-state-gitlab-pv
Output: Running scope as unit: run-r9ffee8fcb8bd4cbe8ee959c4d5dce692.scope
mount error(2): No such file or directory
Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)
Warning  FailedMount  9m40s  kubelet  Unable to attach or mount volumes: unmounted volumes=[data], unattached volumes=[postgresql-password data dshm default-token-dc76j custom-init-scripts]: timed out waiting for the condition
Warning  FailedMount  33s (x9 over 9m30s)  kubelet  (combined from similar events): Unable to attach or mount volumes: unmounted volumes=[data], unattached volumes=[dshm default-token-dc76j custom-init-scripts postgresql-password data]: timed out waiting for the condition
```
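`mount error(2): No such file or directory` from mount.cifs generally points at the share path or credentials rather than at Kubernetes itself. One way to take kubelet out of the loop is to attempt the same mount by hand from a cluster node. This is a sketch with placeholders (`STORAGE_KEY` is not a real value); it requires root on the node and the share name from the failing event above:

```shell
# Placeholder: export STORAGE_KEY with the storage account key first.
# Run on a cluster node as root; share path copied from the kubelet event.
sudo mkdir -p /mnt/cifs-test
sudo mount -t cifs //abl02gitlab001sa.file.core.windows.net/gitlab-postgresql-state /mnt/cifs-test \
    -o vers=3.0,username=abl02gitlab001sa,password="$STORAGE_KEY",dir_mode=0777,file_mode=0777
# Same error(2) here => the share or credentials are the problem;
# a successful mount => suspect the kubelet-generated mount options.
```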
```
$ kubectl cluster-info
Kubernetes master is running at https://gitlab-001-k8s-gitlab001-5649ad-3b181c77.hcp.eastus.azmk8s.io:443
CoreDNS is running at https://gitlab-001-k8s-gitlab001-5649ad-3b181c77.hcp.eastus.azmk8s.io:443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Metrics-server is running at https://gitlab-001-k8s-gitlab001-5649ad-3b181c77.hcp.eastus.azmk8s.io:443/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

$ kubectl version
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.8", GitCommit:"73ec19bdfc6008cd3ce6de96c663f70a69e2b8fc", GitTreeState:"clean", BuildDate:"2020-09-17T04:17:08Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}
```