
gstatus fails when unable to get self-heal status #54

Closed
ChristineTChen opened this issue Dec 23, 2020 · 3 comments

ChristineTChen commented Dec 23, 2020

I'm running gstatus in the latest Gluster CentOS container image.

It seems that the gluster volume heal <VOL> info command is only available for replicate/disperse volumes:

# gluster vol heal gv0 info
Volume gv0 is not of type replicate/disperse
Volume heal failed.

Is it possible for gstatus to skip displaying self-heal info when it isn't available? For example, I added a replicate volume (gv1), but gstatus still throws an error because gv0 is a Distribute volume. This currently blocks us from using gstatus at all, since the -a, -b and -v flags all raise the same error from /gstatus/glusterlib/display_status.py. A rough sketch of what I'm asking for is below.
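
Something like the following is what I have in mind (just an illustration on my side, not gstatus code; the helper name and the list of heal-capable volume types are my own assumptions, based only on the error message above):

import subprocess

# Volume types for which `gluster volume heal <vol> info` works
# (assumption, inferred from the "not of type replicate/disperse" error).
HEAL_CAPABLE_TYPES = ("Replicate", "Disperse",
                      "Distributed-Replicate", "Distributed-Disperse")

def heal_info_or_none(volume):
    """Hypothetical helper: return heal-info output, or None when the
    volume type does not support self-heal."""
    info = subprocess.check_output(["gluster", "volume", "info", volume],
                                   universal_newlines=True)
    vol_type = ""
    for line in info.splitlines():
        if line.startswith("Type:"):
            vol_type = line.split(":", 1)[1].strip()
            break
    if vol_type not in HEAL_CAPABLE_TYPES:
        return None  # caller can show "N/A" instead of failing
    return subprocess.check_output(["gluster", "volume", "heal", volume, "info"],
                                   universal_newlines=True)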

When attempting to view the gstatus -a output for my cluster, I get the following traceback:

Note: Unable to get self-heal status for one or more volumes
Traceback (most recent call last):
  File "/usr/lib64/python3.6/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/usr/lib64/python3.6/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/usr/local/bin/gstatus/__main__.py", line 74, in <module>
  File "/usr/local/bin/gstatus/__main__.py", line 71, in main
  File "/usr/local/bin/gstatus/glusterlib/display_status.py", line 11, in display_status
  File "/usr/local/bin/gstatus/glusterlib/display_status.py", line 58, in _build_status
KeyError: 'healinfo'
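
As a stopgap we could also tolerate the missing key at the display layer. Again, this is only a sketch, not gstatus's actual code; the volume_data dict and the message wording are hypothetical:

# Hypothetical guard around the 'healinfo' lookup in _build_status:
# fall back gracefully when a volume carries no self-heal data
# (e.g. a plain Distribute volume like gv0).
heal_info = volume_data.get('healinfo')
if heal_info is None:
    print("Self-heal: not applicable for this volume type")
else:
    print("Self-heal: {}".format(heal_info))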

My volume info:

Volume Name: gv0
Type: Distribute
Volume ID: 1360cf07-5a64-4452-aedc-9d0d8aba1280
Status: Started
Snapshot Count: 0
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: gluster-node-1:/export
Options Reconfigured:
nfs.disable: on
storage.fips-mode-rchecksum: on
transport.address-family: inet

Gstatus version:

#  gstatus --version
gstatus 1.0.4

Gluster version:

# gluster --version
glusterfs 7.9
Repository revision: git://git.gluster.org/glusterfs.git
Copyright (c) 2006-2016 Red Hat, Inc. <https://www.gluster.org/>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
It is licensed to you under your choice of the GNU Lesser
General Public License, version 3 or any later version (LGPLv3
or later), or the GNU General Public License, version 2 (GPLv2),
in all cases as published by the Free Software Foundation.

sac (Member) commented Dec 24, 2020

@Joibel fixed this issue in #52
I'll create a new release with the above patch. You can also try the latest master.

sac (Member) commented Dec 24, 2020

@ChristineTChen I have made a release with the above fixes; it is available at https://github.com/gluster/gstatus/releases/tag/v1.0.5

The binary is available at https://github.com/gluster/gstatus/releases/download/v1.0.5/gstatus

Or you can build it yourself by following the instructions below:

git clone https://github.com/gluster/gstatus.git
cd gstatus
VERSION=1.0.5 make gen-version
python3 setup.py install

ChristineTChen (Author) commented
Thanks, the release 1.0.5 fixed this issue. Much appreciated :D
