
glusterd: add stripe_count in volume info (#3070) #3133

Merged
1 commit merged into gluster:release-10 on Jan 17, 2022

Conversation

Sheetalpamecha
Member

glusterd: add stripe_count in volume info
Add stripe-count only if version is less than 10

Change-Id: Ibb589df2bb4c00c71850d787c998aff746321a17
Signed-off-by: Sheetal Pamecha <spamecha@redhat.com>

* glusterd: add stripe_count in volume info

Change-Id: Ib04434b41b7c299cee0ed8d81deb5a68a17b0c0a
Fixes: gluster#3066
Signed-off-by: Sheetal Pamecha <spamecha@redhat.com>

* Add stripe-count only if version is less than 10

Change-Id: Ibb589df2bb4c00c71850d787c998aff746321a17
Signed-off-by: Sheetal Pamecha <spamecha@redhat.com>
@Sheetalpamecha
Member Author

/run regression

@gluster-ant
Collaborator

CLANG-FORMAT FAILURE:
Before merging the patch, this diff needs to be considered for passing clang-format

index c4d75b6aa..52cbc248f 100644
--- a/xlators/mgmt/glusterd/src/glusterd-store.c
+++ b/xlators/mgmt/glusterd/src/glusterd-store.c
@@ -2809,7 +2809,8 @@ glusterd_store_retrieve_bricks(glusterd_volinfo_t *volinfo)
             cds_list_add_tail(&ta_brickinfo->brick_list, &volinfo->ta_bricks);
             ta_brick_count++;
             if (gf_store_iter_destroy(&iter)) {
-                gf_msg(this->name, GF_LOG_ERROR, 0, GD_MSG_STORE_ITER_DESTROY_FAIL,
+                gf_msg(this->name, GF_LOG_ERROR, 0,
+                       GD_MSG_STORE_ITER_DESTROY_FAIL,
                        "Failed to destroy store iter");
                 ret = -1;
                 goto out;

@gluster-ant
Collaborator

1 test(s) failed
./tests/bugs/glusterd/brick-order-check-add-brick.t

0 test(s) generated core

6 test(s) needed retry
./tests/000-flaky/basic_mount-nfs-auth.t
./tests/000-flaky/bugs_nfs_bug-1116503.t
./tests/000-flaky/glusterd-restart-shd-mux.t
./tests/basic/afr/gfid-mismatch-resolution-with-cli.t
./tests/basic/nl-cache.t
./tests/bugs/glusterd/brick-order-check-add-brick.t

3 flaky test(s) marked as success even though they failed
./tests/000-flaky/basic_mount-nfs-auth.t
./tests/000-flaky/bugs_nfs_bug-1116503.t
./tests/000-flaky/glusterd-restart-shd-mux.t
https://build.gluster.org/job/gh_centos7-regression/2035/

@Sheetalpamecha
Member Author

/run regression

@gluster-ant
Collaborator

1 test(s) failed
./tests/bugs/glusterfs/bug-866459.t

0 test(s) generated core

4 test(s) needed retry
./tests/000-flaky/basic_afr_split-brain-favorite-child-policy.t
./tests/000-flaky/glusterd-restart-shd-mux.t
./tests/basic/glusterd/heald.t
./tests/bugs/glusterfs/bug-866459.t
https://build.gluster.org/job/gh_centos7-regression/2037/

@Shwetha-Acharya
Contributor

/run regression

@Sheetalpamecha
Member Author

/run regression

Shwetha-Acharya merged commit 047c95f into gluster:release-10 on Jan 17, 2022
Shwetha-Acharya added a commit to Shwetha-Acharya/glusterfs that referenced this pull request Jan 17, 2022
Sheetalpamecha pushed a commit that referenced this pull request Jan 18, 2022
* Add GlusterFS 10.1 release notes

Updates: #3099
Signed-off-by: Shwetha K Acharya <sacharya@redhat.com>

* Add #3133 to GlusterFS 10.1 release notes

* Update 10.1.md

* Update 10.1.md

* Add #2962 to GlusterFS 10.1 release notes
mohit84 pushed a commit to mohit84/glusterfs that referenced this pull request Feb 3, 2022
@nirs

nirs commented Apr 28, 2022

This broke oVirt, which assumes that stripeCount is always available.
https://github.com/oVirt/vdsm/blob/8fcb01807518dcf445dafa19f0cd31d97e695485/lib/vdsm/gluster/cli.py#L429
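
As a minimal illustration of the failure mode (this is a sketch, not the actual vdsm code behind the link above; the element name and XML shape are assumptions based on the pre-10 volume-info output), a lookup like the following breaks as soon as the element goes away:

import xml.etree.ElementTree as ET

# GlusterFS 10 no longer emits <stripeCount> in the volume-info XML.
vol = ET.fromstring("<volume><name>vol0</name><replicaCount>3</replicaCount></volume>")

# Pre-10 style lookup: find() returns None for a missing element, so the
# .text access raises AttributeError ('NoneType' object has no attribute 'text').
stripe_count = int(vol.find("stripeCount").text)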

@mykaul
Contributor

mykaul commented May 1, 2022

This broke oVirt, which assumes that stripeCount is always available. https://github.com/oVirt/vdsm/blob/8fcb01807518dcf445dafa19f0cd31d97e695485/lib/vdsm/gluster/cli.py#L429

Probably the easiest would be to revert, or provide a fake value.
There is still an open question of why oVirt even needs this value in the first place.

@nirs

nirs commented May 2, 2022

The code parsing stripeCount was added in 8b0ac63819c1123ff6d7f725d624d2689e9a81df.

stripeCount is used in 106 lines in current ovirt-engine:
https://github.com/oVirt/ovirt-engine/search?q=stripeCount

So it seems that vdsm needs to parse this value and report it to the engine, and Gluster should continue to report the value for backward compatibility with older clients.

@mykaul
Contributor

mykaul commented May 3, 2022

The code parsing stripeCount was added in 8b0ac63819c1123ff6d7f725d624d2689e9a81df.

stripeCount is used in 106 lines in current ovirt-engine: https://github.com/oVirt/ovirt-engine/search?q=stripeCount

So it seems that vdsm needs to parse this value and report it to the engine, and Gluster should continue to report the value for backward compatibility with older clients.

I think at this point the easiest option, if possible, is to patch VDSM to just return 1 as the stripe_count; I don't see (but perhaps I'm missing context) why oVirt would need the stripe count value at all.

Of course, that doesn't solve the issue with previous oVirt versions and 10.x
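
A minimal sketch of that approach (not the actual vdsm code or whatever fix oVirt eventually shipped; the element name and XML shape are assumptions) is to fall back to 1 whenever the field is absent:

import xml.etree.ElementTree as ET

def stripe_count(volume_elem):
    # GlusterFS releases before 10 always emit <stripeCount>; GlusterFS 10 may
    # omit it. findtext() returns the supplied default instead of raising, and
    # 1 is the only value a non-striped volume could report anyway.
    return int(volume_elem.findtext("stripeCount", default="1"))

# Old output keeps working, new output degrades to 1:
print(stripe_count(ET.fromstring("<volume><stripeCount>1</stripeCount></volume>")))  # 1
print(stripe_count(ET.fromstring("<volume><name>vol0</name></volume>")))             # 1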

@Shwetha-Acharya
Contributor

Shwetha-Acharya commented May 13, 2022

@rchikatw^^

@rchikatw
Contributor

Fixed in oVirt 4.5.1, see:
oVirt/vdsm#172

Actually, oVirt does not use the stripe count, but I see the code handling the stripe count still exists in oVirt. For now I have just handled this in VDSM and will clean up that code as soon as I can.
