
Volume size mismatch between pvc and gluster when a decimal point pvc is created #130

Open
PrasadDesala opened this issue Dec 28, 2018 · 6 comments


@PrasadDesala

Describe the bug
I created a 2.5G PVC. The Gluster volume created in the backend has a capacity of 2.5GiB, but the PVC capacity is shown as 3Gi.

Steps to reproduce
Steps to reproduce the behavior:

  1. Create a 3 node setup using vagrant.
  2. create a pvc of size 2.5G
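For reference, a claim like the one in step 2 could be sketched as below (the name, namespace, and storageClassName are taken from the outputs later in this issue; treat this as an illustrative manifest, not the exact one used):

```yaml
# Hypothetical PVC manifest matching the reproduction steps.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-1
  namespace: gcs
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: glusterfs-csi
  resources:
    requests:
      storage: 2.5G   # decimal-point size that triggers the mismatch
```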

Actual results
The volume size reported by the PVC does not match the Gluster volume size when a PVC with a decimal-point size is created.

Expected behavior
The volume size should be the same in both the `kubectl get pvc` output and the Gluster backend.


Madhu-1 commented Jan 2, 2019

@PrasadDesala please provide the PVC describe and gluster volume size output.

@PrasadDesala (Author)

> @PrasadDesala please provide the PVC describe and gluster volume size output.

[vagrant@kube1 ~]$ kubectl describe pvc -n gcs
Name: pvc-1
Namespace: gcs
StorageClass: glusterfs-csi
Status: Bound
Volume: pvc-90a55545-0e5f-11e9-af0b-525400f94cb8
Labels:
Annotations: pv.kubernetes.io/bind-completed: yes
pv.kubernetes.io/bound-by-controller: yes
volume.beta.kubernetes.io/storage-provisioner: org.gluster.glusterfs
Finalizers: [kubernetes.io/pvc-protection]
Capacity: 2Gi
Access Modes: RWX
Events:
Type Reason Age From Message


Normal ExternalProvisioning 62s persistentvolume-controller waiting for a volume to be created, either by external provisioner "org.gluster.glusterfs" or manually created by system administrator
Normal Provisioning 62s org.gluster.glusterfs_csi-provisioner-glusterfsplugin-0_b7c644f5-0e5c-11e9-a52d-0a580ae94207 External provisioner is provisioning volume for claim "gcs/pvc-1"
Normal ProvisioningSucceeded 60s org.gluster.glusterfs_csi-provisioner-glusterfsplugin-0_b7c644f5-0e5c-11e9-a52d-0a580ae94207 Successfully provisioned volume pvc-90a55545-0e5f-11e9-af0b-525400f94cb8
Mounted By:

[root@gluster-kube1-0 ~]# glustercli volume info

Volume Name: pvc-90a55545-0e5f-11e9-af0b-525400f94cb8
Type: Replicate
Volume ID: c569d4d8-296c-4f61-b81d-da88b40eb68c
State: Started
Capacity: 1.5 GiB
Transport-type: tcp
Options:
debug/io-stats.count-fop-hits: on
performance/io-cache.io-cache: off
performance/open-behind.open-behind: off
performance/quick-read.quick-read: off
performance/read-ahead.read-ahead: off
performance/readdir-ahead.readdir-ahead: off
performance/write-behind.write-behind: off
cluster/replicate.self-heal-daemon: on
performance/md-cache.md-cache: off
debug/io-stats.latency-measurement: on
Number of Bricks: 3
Brick1: gluster-kube1-0.glusterd2.gcs:/var/run/glusterd2/bricks/pvc-90a55545-0e5f-11e9-af0b-525400f94cb8/subvol1/brick1/brick
Brick2: gluster-kube2-0.glusterd2.gcs:/var/run/glusterd2/bricks/pvc-90a55545-0e5f-11e9-af0b-525400f94cb8/subvol1/brick2/brick
Brick3: gluster-kube3-0.glusterd2.gcs:/var/run/glusterd2/bricks/pvc-90a55545-0e5f-11e9-af0b-525400f94cb8/subvol1/brick3/brick
[root@gluster-kube1-0 ~]# glustercli volume status
Volume : pvc-90a55545-0e5f-11e9-af0b-525400f94cb8
+--------------------------------------+-------------------------------+-----------------------------------------------------------------------------------------+--------+-------+-----+
| BRICK ID | HOST | PATH | ONLINE | PORT | PID |
+--------------------------------------+-------------------------------+-----------------------------------------------------------------------------------------+--------+-------+-----+
| 17a046f5-a454-4a6b-8d66-a7f6df7553f3 | gluster-kube1-0.glusterd2.gcs | /var/run/glusterd2/bricks/pvc-90a55545-0e5f-11e9-af0b-525400f94cb8/subvol1/brick1/brick | true | 41916 | 680 |
| 72fa6042-f39c-434a-9457-ab8e556ee378 | gluster-kube2-0.glusterd2.gcs | /var/run/glusterd2/bricks/pvc-90a55545-0e5f-11e9-af0b-525400f94cb8/subvol1/brick2/brick | true | 41892 | 515 |
| ebdd9b68-ae56-4770-bbc2-620c719e826c | gluster-kube3-0.glusterd2.gcs | /var/run/glusterd2/bricks/pvc-90a55545-0e5f-11e9-af0b-525400f94cb8/subvol1/brick3/brick | true | 34213 | 511 |
+--------------------------------------+-------------------------------+-----------------------------------------------------------------------------------------+--------+-------+-----+


Madhu-1 commented Jan 2, 2019

@humblec @JohnStrunk is this the expected behavior on the kube side? I see that in CSI we are creating the volume with the size in bytes.

@JohnStrunk

During vol create, we pull the desired capacity (in bytes) straight from the incoming CSI request here:

volSizeBytes = capRange.GetRequiredBytes()

And pass it unchanged to gd2 here:

Size: uint64(volSizeBytes),

It would be a good check if we had the output of the create request/response:

glog.V(2).Infof("volume create request: %+v", volumeReq)

glog.V(3).Infof("volume create response : %+v", volumeCreateResp)

My thought is that we may be losing some capacity during brick creation (LVM & mkfs), leading to a volume that is smaller than expected.

@aravindavk Is there a way to validate what's happening on the gd2 side?


Madhu-1 commented Jan 3, 2019

@JohnStrunk CSI received a request to create a PVC of size 1610612736 bytes, which equals 1.5 GiB, and the same request was passed to gd2; gd2 created a volume of exactly that size. Is kubernetes doing any rounding operation for floating-point values once the PVC creation succeeds?

@JohnStrunk

Ah... I misread the original description... I thought the resulting volume was smaller than requested. Re-reading, it appears the volume is created with the requested size, but when querying the PVC, it shows a capacity in excess of the requested amount.

I looked through the kubernetes CSI code a bit and found this:
https://github.com/kubernetes-csi/external-provisioner/blob/d0e48803f3280973b2e3fe4588498088dc0d2c5d/pkg/controller/controller.go#L964

It appears they take the byte count and, if it's > 1 GiB, round it up to a whole number of GiB. This function is used in the CreateVolume call when initializing the PV object from the capacity returned by the provisioning request.

Assuming I'm reading the code correctly, I'd consider this a bug in the kube CSI sidecar for not preserving the precision of the response.
