Brick goes offline when a new PVC is created #24
Comments
@ksandha can you paste the
RCA: as we are not persisting the
As per my understanding, bricks are mounted in the run directory, and a glusterd2 restart will remount them on start. Are there any failures while mounting the bricks? We don't need the run dir to be persistent; am I missing something here?
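A quick way to check this is a sketch like the one below: exec into the glusterd2 pod and confirm whether the brick mount points under the run directory came back, and whether glusterd2 reports the bricks as online. The pod, namespace, and endpoint names (kube1-0, gcs, kube2-0.glusterd2.gcs) are taken from the transcript further down and are assumptions for your own cluster.

# List brick mount points inside the glusterd2 pod (names assumed from the transcript below)
kubectl -n gcs exec kube1-0 -- sh -c 'mount | grep /var/run/glusterd2/bricks'

# Ask glusterd2 whether the bricks are back online
kubectl -n gcs exec kube1-0 -- glustercli volume status --endpoints="http://kube2-0.glusterd2.gcs:24007"

# Look for brick/mount related errors in the glusterd2 pod logs after the restart
kubectl -n gcs logs kube1-0 | grep -i -E 'brick|mount'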
I need to analyze the bug again; this may take time. Moving out of GCS/0.2.
Not able to reproduce with the latest build. @ksandha, please verify this bug with the latest build.
@ksandha PTAL
Couldn't hit the issue with the latest build. @Madhu-1, please take appropriate action.
Closing as per @ksandha's comment.
Delete the app pods and the PVC:
[vagrant@kube1 ~]$ kubectl delete pod redis1
pod "redis1" deleted
[vagrant@kube1 ~]$ kubectl -n gcs -it exec kube1-0 -- /bin/bash
[root@kube1-0 /]# glustercli volume status --endpoints="http://kube2-0.glusterd2.gcs:24007"
Volume : pvc-350277cfcd3111e8
+--------------------------------------+-----------------------+---------------------------------------------------------------------+--------+-------+-----+
|               BRICK ID               |         HOST          |                                PATH                                 | ONLINE | PORT  | PID |
+--------------------------------------+-----------------------+---------------------------------------------------------------------+--------+-------+-----+
| 1e9711eb-6ab9-4381-8f37-4fe929ae8e36 | kube3-0.glusterd2.gcs | /var/run/glusterd2/bricks/pvc-350277cfcd3111e8/subvol1/brick1/brick | true   | 49152 | 53  |
| 044ae8e2-4dcb-45f1-9e17-7fa0b1b8084b | kube1-0.glusterd2.gcs | /var/run/glusterd2/bricks/pvc-350277cfcd3111e8/subvol1/brick2/brick | false  | 0     | 0   |
| ddc9610e-4216-4d96-a4e1-5558703d2f1a | kube2-0.glusterd2.gcs | /var/run/glusterd2/bricks/pvc-350277cfcd3111e8/subvol1/brick3/brick | true   | 49152 | 53  |
+--------------------------------------+-----------------------+---------------------------------------------------------------------+--------+-------+-----+
[root@kube1-0 /]# exit
[vagrant@kube1 ~]$ kubectl get pods
No resources found.
[vagrant@kube1 ~]$ kubectl get pvc
NAME       STATUS   VOLUME                 CAPACITY   ACCESS MODES   STORAGECLASS    AGE
gcs-pvc1   Bound    pvc-350277cfcd3111e8   2Gi        RWX            glusterfs-csi   19m
[vagrant@kube1 ~]$ kubectl delete pvc gcs-pvc1
persistentvolumeclaim "gcs-pvc1" deleted
[vagrant@kube1 ~]$ kubectl get pvc
No resources found.
[vagrant@kube1 ~]$ kubectl -n gcs -it exec kube1-0 -- /bin/bash
[root@kube1-0 /]# glustercli volume status --endpoints="http://kube2-0.glusterd2.gcs:24007"
No volumes found
[root@kube1-0 /]#
Delete the gd2 pod and wait for a new pod to spin up.
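A minimal sketch of that step, assuming the glusterd2 pods are managed by a StatefulSet in the gcs namespace and reusing the pod and endpoint names from the transcript above (adjust for your own setup):

# Delete one glusterd2 pod; the StatefulSet (assumed) recreates it with the same name
kubectl -n gcs delete pod kube1-0

# Watch until the replacement pod is Running again
kubectl -n gcs get pods -w

# Once the pod is back, re-check the brick status for the volume
kubectl -n gcs exec kube1-0 -- glustercli volume status --endpoints="http://kube2-0.glusterd2.gcs:24007"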