
Online upgrade - 9.x to 10.0 #3066

Closed
jirireischig opened this issue Dec 22, 2021 · 5 comments · Fixed by #3070

Comments

@jirireischig

Description of problem:

After upgrading one node to 10.0, an error appears and its peers are in the "Peer Rejected" state.

Error in glusterd.log:
[2021-12-22 11:24:14.395378 +0000] E [MSGID: 106010] [glusterd-utils.c:3851:glusterd_compare_friend_volume] 0-management: Version of Cksums NAME differ. local cksum = 3140243645, remote cksum = 467185320 on peer node-1

Only the info file in /var/lib/glusterd/vols/NAME/ on the upgraded node differs from the nodes still running 9.4: on version 10.0 the file is missing the line "stripe_count=1" (see the sketch below).
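
A quick way to see the mismatch from a shell (a sketch, not part of the original report; the volume name NAME comes from the log line above, and the cksum file name assumes the standard /var/lib/glusterd layout):

```sh
# Run the same commands on the upgraded 10.0 node and on a 9.4 peer, then compare.

# The line reported missing after the upgrade (present on 9.4, absent on 10.0):
grep -n 'stripe_count' /var/lib/glusterd/vols/NAME/info

# The stored checksum that glusterd exchanges with peers
# (the "local cksum" / "remote cksum" values in the log line above):
cat /var/lib/glusterd/vols/NAME/cksum

# Any difference in the info file contents changes that checksum, so diffing a
# copy of the peer's file against the local one pinpoints the offending line:
diff /path/to/peer-copy-of-info /var/lib/glusterd/vols/NAME/info
```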

The exact command to reproduce the issue:

The full output of the command that failed:

Expected results:

Mandatory info:
- The output of the gluster volume info command:

- The output of the gluster volume status command:

- The output of the gluster volume heal command:

- Provide logs present on the following locations of client and server nodes:
/var/log/glusterfs/

- Is there any crash? Provide the backtrace and coredump

Additional info:

- The operating system / glusterfs version:

Note: Please hide any confidential data which you don't want to share in public like IP address, file name, hostname or any other configuration

@pranithk (Member)

cc @amarts @xhernandez
Thanks for reporting this, we'll work on it.

@pranithk (Member)

Most probably this happened because of #1812; investigation is in progress.

@mohit84 (Contributor) commented Dec 23, 2021

@Sheetalpamecha Can you check this? IIRC we already discussed the stripe count handling during upgrade in gchat.

@Sheetalpamecha (Member)

Hi @jirireischig, can you please attach the full log from /var/log/glusterfs/glusterd.log?

Sheetalpamecha added a commit to Sheetalpamecha/glusterfs that referenced this issue Dec 23, 2021
Change-Id: Ib04434b41b7c299cee0ed8d81deb5a68a17b0c0a
Fixes: gluster#3066
Signed-off-by: Sheetal Pamecha <spamecha@redhat.com>
@mykaul (Contributor) commented Dec 23, 2021

Are we using stripe count anywhere? If not, do we need to keep adding it forever just for upgrades? How can we eventually remove unused, deprecated settings?

Shwetha-Acharya pushed a commit that referenced this issue Jan 12, 2022
* glusterd: add stripe_count in volume info

Change-Id: Ib04434b41b7c299cee0ed8d81deb5a68a17b0c0a
Fixes: #3066
Signed-off-by: Sheetal Pamecha <spamecha@redhat.com>

* Add stripe-count only if version is less than 10

Change-Id: Ibb589df2bb4c00c71850d787c998aff746321a17
Signed-off-by: Sheetal Pamecha <spamecha@redhat.com>
Sheetalpamecha added a commit to Sheetalpamecha/glusterfs that referenced this issue Jan 12, 2022
* glusterd: add stripe_count in volume info

Change-Id: Ib04434b41b7c299cee0ed8d81deb5a68a17b0c0a
Fixes: gluster#3066
Signed-off-by: Sheetal Pamecha <spamecha@redhat.com>

* Add stripe-count only if version is less than 10

Change-Id: Ibb589df2bb4c00c71850d787c998aff746321a17
Signed-off-by: Sheetal Pamecha <spamecha@redhat.com>
Shwetha-Acharya pushed a commit that referenced this issue Jan 17, 2022
* glusterd: add stripe_count in volume info

Change-Id: Ib04434b41b7c299cee0ed8d81deb5a68a17b0c0a
Fixes: #3066
Signed-off-by: Sheetal Pamecha <spamecha@redhat.com>

* Add stripe-count only if version is less than 10

Change-Id: Ibb589df2bb4c00c71850d787c998aff746321a17
Signed-off-by: Sheetal Pamecha <spamecha@redhat.com>
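
Until a build containing the fix above is installed, one possible workaround (an assumption, not confirmed anywhere in this thread) is to restore the missing line on the upgraded node so its info file matches the 9.x peers again. The key ordering and the expectation that glusterd recomputes the checksum after a restart are assumptions, so try this on a non-production node first and keep a backup:

```sh
# Workaround sketch for the upgraded 10.0 node; NAME is the volume from the report.
systemctl stop glusterd
cp -a /var/lib/glusterd/vols/NAME /root/NAME.vols.backup

# Re-add the line that 9.x nodes still write. The peer comparison is a checksum of
# this file, so the line is inserted where 9.x places it (before replica_count);
# that position is an assumption and should be checked against a 9.4 node's file.
grep -q '^stripe_count=' /var/lib/glusterd/vols/NAME/info || \
    sed -i '/^replica_count=/i stripe_count=1' /var/lib/glusterd/vols/NAME/info

systemctl start glusterd
gluster peer status   # peers should leave "Peer Rejected" once the checksums match
```

If the peers still reject each other after the restart, fall back to the standard Peer Rejected recovery procedure from the Gluster documentation rather than editing further files by hand.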