failed to create volumes more than 600 #1364
@aravindavk ^^
Due to high memory utilization, the gd2 pods are not responding to pings properly.
@aravindavk / @rishubhjain - can one of you please check this?
Create a GCS setup with 16 vCPUs and 32GB RAM for each kube node, then try to create 1000 PVCs using a script, each PVC 1GB in size. Observations:
-> Of the 1000 PVCs I tried to create, 615 PVCs are bound; the remaining PVCs are in Pending state.
-> After 12 hours of idle time, the gd2 pods had rebooted more than 150 times.
-> After 12 hours of idle time, one of the etcd pods is in NotReady state.
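For anyone else reproducing this, the load above can be driven by generating PVC manifests in a loop and piping them to `kubectl apply -f -`. A minimal sketch in Go; the storage class name `glusterfs-csi` and the `pvc-N` naming are assumptions, so substitute whatever the GCS setup actually configures:

```go
package main

import (
	"fmt"
	"io"
	"os"
)

// pvcManifest returns a PersistentVolumeClaim manifest for a 1GiB claim.
// The storage class "glusterfs-csi" is an assumption; use the StorageClass
// actually present in the cluster.
func pvcManifest(i int) string {
	return fmt.Sprintf(`---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-%d
spec:
  accessModes: ["ReadWriteMany"]
  storageClassName: glusterfs-csi
  resources:
    requests:
      storage: 1Gi
`, i)
}

// writeManifests emits n manifests back to back, suitable for piping
// into `kubectl apply -f -`.
func writeManifests(w io.Writer, n int) {
	for i := 1; i <= n; i++ {
		fmt.Fprint(w, pvcManifest(i))
	}
}

func main() {
	writeManifests(os.Stdout, 1000)
}
```

Run as `go run gen_pvcs.go | kubectl apply -f -`, then watch the bound count with `kubectl get pvc`.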
glusterd2.log:
Attached the CSI provisioner log above.
Hitting this issue at 758 PVCs.
@aravindavk @rishubhjain PTAL
@ksandha please attach the gd2 logs
During volume create, before starting the transaction, add a check to validate whether the volume already exists. Partial Fixes: gluster#1364 Signed-off-by: Madhu Rajanna <mrajanna@redhat.com>
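The idea in that partial fix, failing fast when the volume name is already present instead of launching the full transaction, can be sketched roughly as below. This is an illustrative in-memory stand-in, not glusterd2's actual etcd-backed store API; all names here are hypothetical:

```go
package main

import (
	"errors"
	"fmt"
	"sync"
)

// ErrVolExists is returned when a create request names an existing volume.
var ErrVolExists = errors.New("volume already exists")

// volumeStore is a hypothetical in-memory stand-in for the etcd-backed
// volume store; the type and method names are illustrative only.
type volumeStore struct {
	mu      sync.Mutex
	volumes map[string]bool
}

// createVolume checks whether the volume already exists before starting
// the expensive create transaction, so duplicate or retried requests
// fail fast instead of piling up transactions under load.
func (s *volumeStore) createVolume(name string) error {
	s.mu.Lock()
	defer s.mu.Unlock()
	if s.volumes[name] {
		return ErrVolExists
	}
	// ... the real create transaction would run here ...
	s.volumes[name] = true
	return nil
}

func main() {
	s := &volumeStore{volumes: map[string]bool{}}
	fmt.Println(s.createVolume("vol1"))                 // first create succeeds
	fmt.Println(s.createVolume("vol1") == ErrVolExists) // duplicate is rejected
}
```

Note this only rejects duplicates early; it doesn't by itself address the memory pressure that made the pods miss pings.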
It seems like I am hitting the issue below, and very consistently:
During volume create, before starting the transaction, add a check to validate whether the volume already exists. Partial Fixes: #1364 Signed-off-by: Madhu Rajanna <mrajanna@redhat.com>
@aravindavk I have seen a different error message; I don't think this has been fixed.
Per a chat with @ksandha, it seems he is also hitting the same issue, so reopening the issue.
We haven't seen this in our multiple iterations of recent scale testing.
@atinmu I will try this scenario again tonight and update you by tomorrow's scrum. Last time when we tried to scale, we were able to scale up to 716 PVCs.
Ran parallel PVC creation on a non-brick-mux environment. Below is the state of the cluster once the PVC count reached 548:
Can you provide the glusterd2 logs as well as the etcd pod logs?
@rishubhjain can you please log in to rhsqa-virt05.lab.eng.blr.redhat.com (root/GCS-karan/deploy)? The glusterd2 logs are 78MB in size, which I won't be able to attach here.
I believe this is already fixed in the latest master (or GCS 0.6) of GCS.
Observed behavior
Failed to create more than 600 volumes on my local setup.
Expected/desired behavior
Volume creation should succeed.
Details on how to reproduce (minimal and precise)
Create a large number of PVCs (around 1000).
logs: