volume set: failed: ganesha.enable is already 'off'. #1778

Closed
hunter86bg opened this issue Nov 11, 2020 · 7 comments
Labels: release 8

Comments

@hunter86bg
Contributor

Description of problem:
When trying to export a volume via NFS-Ganesha, an error is produced as follows:

volume set: failed: ganesha.enable is already 'off'.

The exact command to reproduce the issue:

gluster vol set <VOL> ganesha.enable on

The full output of the command that failed:

[root@glustera nfs-ganesha]# gluster vol set custdata ganesha.enable on
volume set: failed: ganesha.enable is already 'off'.
[root@glustera nfs-ganesha]# showmount -e localhost
Export list for localhost:
[root@glustera nfs-ganesha]# 

Expected results:

The Gluster volume should be exported successfully (this applies to replica 3 volumes as well).

Mandatory info:
- The output of the gluster volume info command:

[root@glustera nfs-ganesha]# gluster volume info
 
Volume Name: custdata
Type: Replicate
Volume ID: 47c96909-0bc8-4892-8ddd-fe12da663f8c
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 5 = 5
Transport-type: tcp
Bricks:
Brick1: glustera:/bricks/brick-a1/brick
Brick2: glusterb:/bricks/brick-b1/brick
Brick3: glusterc:/bricks/brick-c1/brick
Brick4: glusterd:/bricks/brick-d1/brick
Brick5: glustere:/bricks/brick-e1/brick
Options Reconfigured:
storage.fips-mode-rchecksum: on
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
nfs-ganesha: enable
cluster.enable-shared-storage: enable
 
Volume Name: gluster_shared_storage
Type: Replicate
Volume ID: d291d189-edf8-4451-818c-e18aabe3172e
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: glusterb:/var/lib/glusterd/ss_brick
Brick2: glusterc:/var/lib/glusterd/ss_brick
Brick3: glustera.localdomain:/var/lib/glusterd/ss_brick
Options Reconfigured:
storage.fips-mode-rchecksum: on
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
nfs-ganesha: enable
cluster.enable-shared-storage: enable

- The output of the gluster volume status command:

[root@glustera nfs-ganesha]# gluster volume status
Status of volume: custdata
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick glustera:/bricks/brick-a1/brick       49152     0          Y       2310 
Brick glusterb:/bricks/brick-b1/brick       49152     0          Y       1909 
Brick glusterc:/bricks/brick-c1/brick       49152     0          Y       1922 
Brick glusterd:/bricks/brick-d1/brick       49152     0          Y       1928 
Brick glustere:/bricks/brick-e1/brick       49152     0          Y       1939 
Self-heal Daemon on localhost               N/A       N/A        Y       2333 
Self-heal Daemon on glusterd                N/A       N/A        Y       1948 
Self-heal Daemon on glustere                N/A       N/A        Y       1958 
Self-heal Daemon on glusterb                N/A       N/A        Y       1928 
Self-heal Daemon on glusterc                N/A       N/A        Y       1941 
 
Task Status of Volume custdata
------------------------------------------------------------------------------
There are no active volume tasks
 
Status of volume: gluster_shared_storage
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick glusterb:/var/lib/glusterd/ss_brick   49153     0          Y       2100 
Brick glusterc:/var/lib/glusterd/ss_brick   49153     0          Y       2111 
Brick glustera.localdomain:/var/lib/gluster
d/ss_brick                                  49153     0          Y       2515 
Self-heal Daemon on localhost               N/A       N/A        Y       2333 
Self-heal Daemon on glustere                N/A       N/A        Y       1958 
Self-heal Daemon on glusterc                N/A       N/A        Y       1941 
Self-heal Daemon on glusterb                N/A       N/A        Y       1928 
Self-heal Daemon on glusterd                N/A       N/A        Y       1948 
 
Task Status of Volume gluster_shared_storage
------------------------------------------------------------------------------
There are no active volume tasks

- The output of the gluster volume heal command:

[root@glustera nfs-ganesha]# gluster volume heal custdata
Launching heal operation to perform index self heal on volume custdata has been successful 
Use heal info commands to check status.

- The logs present on the following locations of client and server nodes (/var/log/glusterfs/):

glusterfs_a.zip
glusterfs_b.zip
glusterfs_c.zip
glusterfs_d.zip
glusterfs_e.zip

- Is there any crash? Provide the backtrace and coredump:
No

Additional info:
Exporting is still possible, but it requires a lot of manual effort:

  1. Edit the volume info file and add the ganesha.enable stanza on all nodes:
     echo ganesha.enable=on >> /var/lib/glusterd/vols/custdata/info
  2. Create the exports subdirectory:
     mkdir /var/run/gluster/shared_storage/nfs-ganesha/exports/
  3. Export the share on all nodes (the export file this hook writes is reconstructed after the trace below):
[root@glustera post]# bash -x  /var/lib/glusterd/hooks/1/start/post/S31ganesha-start.sh --volname=custdata --gd-workdir=/var/lib/glusterd
+ PROGNAME=Sganesha-start
+ OPTSPEC=volname:,gd-workdir:
+ VOL=
+ declare -i EXPORT_ID
+ ganesha_key=ganesha.enable
+ GANESHA_DIR=/var/run/gluster/shared_storage/nfs-ganesha
+ CONF1=/var/run/gluster/shared_storage/nfs-ganesha/ganesha.conf
+ GLUSTERD_WORKDIR=
+ parse_args --volname=custdata --gd-workdir=/var/lib/glusterd
++ getopt -l volname:,gd-workdir: -o o -name Sganesha-start --volname=custdata --gd-workdir=/var/lib/glusterd
+ ARGS=' --volname '\''custdata'\'' --gd-workdir '\''/var/lib/glusterd'\'' -- '\''Sganesha-start'\'''
+ eval set -- ' --volname '\''custdata'\'' --gd-workdir '\''/var/lib/glusterd'\'' -- '\''Sganesha-start'\'''
++ set -- --volname custdata --gd-workdir /var/lib/glusterd -- Sganesha-start
+ true
+ case $1 in
+ shift
+ VOL=custdata
+ shift
+ true
+ case $1 in
+ shift
+ GLUSTERD_WORKDIR=/var/lib/glusterd
+ shift
+ true
+ case $1 in
+ shift
+ break
+ ganesha_enabled custdata
+ local volume=custdata
+ local info_file=/var/lib/glusterd/vols/custdata/info
+ local enabled=off
++ cut -d= -f2
++ grep -w ganesha.enable /var/lib/glusterd/vols/custdata/info
+ enabled=on
+ '[' on == on ']'
+ return 0
+ is_exported custdata
+ local volume=custdata
+ dbus-send --type=method_call --print-reply --system --dest=org.ganesha.nfsd /org/ganesha/nfsd/ExportMgr org.ganesha.nfsd.exportmgr.ShowExports
+ grep -w -q /custdata
+ return 1
+ '[' '!' -e /var/run/gluster/shared_storage/nfs-ganesha/exports/export.custdata.conf ']'
+ sed -i /custdata.conf/d /var/run/gluster/shared_storage/nfs-ganesha/ganesha.conf
+ write_conf custdata
+ echo -e '# WARNING : Using Gluster CLI will overwrite manual
# changes made to this file. To avoid it, edit the
# file, copy it over to all the NFS-Ganesha nodes
# and run ganesha-ha.sh --refresh-config.'
+ echo 'EXPORT{'
+ echo '      Export_Id = 2;'
+ echo '      Path = "/custdata";'
+ echo '      FSAL {'
+ echo '           name = "GLUSTER";'
+ echo '           hostname="localhost";'
+ echo '           volume="custdata";'
+ echo '           }'
+ echo '      Access_type = RW;'
+ echo '      Disable_ACL = true;'
+ echo '      Squash="No_root_squash";'
+ echo '      Pseudo="/custdata";'
+ echo '      Protocols = "3", "4" ;'
+ echo '      Transports = "UDP","TCP";'
+ echo '      SecType = "sys";'
+ echo '}'
++ cat /var/run/gluster/shared_storage/nfs-ganesha/.export_added
+ EXPORT_ID=1
+ EXPORT_ID=EXPORT_ID+1
+ echo 2
+ sed -i 's/Export_Id.*/Export_Id=2;/' /var/run/gluster/shared_storage/nfs-ganesha/exports/export.custdata.conf
+ echo '%include "/var/run/gluster/shared_storage/nfs-ganesha/exports/export.custdata.conf"'
+ export_add custdata
+ dbus-send --print-reply --system --dest=org.ganesha.nfsd /org/ganesha/nfsd/ExportMgr org.ganesha.nfsd.exportmgr.AddExport string:/var/run/gluster/shared_storage/nfs-ganesha/exports/export.custdata.conf 'string:EXPORT(Export_Id=2)'
method return time=1605126622.658773 sender=:1.30 -> destination=:1.47 serial=1412 reply_serial=2
   string "1 exports added"
+ exit 0
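
For reference, the export file that this hook run writes to /var/run/gluster/shared_storage/nfs-ganesha/exports/export.custdata.conf, reconstructed from the echo/sed calls in the trace above (Export_Id and paths are the values from this run), should end up looking like this:

# WARNING : Using Gluster CLI will overwrite manual
# changes made to this file. To avoid it, edit the
# file, copy it over to all the NFS-Ganesha nodes
# and run ganesha-ha.sh --refresh-config.
EXPORT{
      Export_Id=2;
      Path = "/custdata";
      FSAL {
           name = "GLUSTER";
           hostname="localhost";
           volume="custdata";
           }
      Access_type = RW;
      Disable_ACL = true;
      Squash="No_root_squash";
      Pseudo="/custdata";
      Protocols = "3", "4" ;
      Transports = "UDP","TCP";
      SecType = "sys";
}

and ganesha.conf gains the matching include line:

%include "/var/run/gluster/shared_storage/nfs-ganesha/exports/export.custdata.conf"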

[root@glustera ~]# showmount -e localhost
Export list for localhost:
/custdata (everyone)
[root@glustera ~]# mount -t nfs localhost:/custdata /mnt
[root@glustera ~]# findmnt /mnt
TARGET SOURCE              FSTYPE OPTIONS
/mnt   localhost:/custdata nfs4   rw,relatime,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp6,timeo=600,retrans=2,sec=sys,clientaddr=::1,local_lock=none,addr=::1
[root@glustera ~]# 

- The operating system / glusterfs version:
OS: CentOS 8.2.2004
Repos:

[root@glustera ~]# yum repolist
repo id                                                                       repo name
AppStream                                                                     CentOS-8 - AppStream
BaseOS                                                                        CentOS-8 - Base
HighAvailability                                                              CentOS-8 - HA
PowerTools                                                                    CentOS-8 - PowerTools
centos-gluster8                                                               CentOS-8 - Gluster 8
centos-nfs-ganesha3                                                           CentOS-8 - NFS Ganesha 3
centos-samba412                                                               CentOS-8 - Samba 4.12
epel                                                                          Extra Packages for Enterprise Linux 8 - x86_64
epel-modular                                                                  Extra Packages for Enterprise Linux Modular 8 - x86_64
extras                                                                        CentOS-8 - Extras

RPMs:

[root@glustera ~]# rpm -qa | grep -E "ganesha|gluster" | sort
centos-release-gluster8-1.0-1.el8.noarch
centos-release-nfs-ganesha30-1.0-2.el8.noarch
glusterfs-8.2-4.el8.x86_64
glusterfs-cli-8.2-4.el8.x86_64
glusterfs-client-xlators-8.2-4.el8.x86_64
glusterfs-fuse-8.2-4.el8.x86_64
glusterfs-ganesha-8.2-4.el8.x86_64
glusterfs-server-8.2-4.el8.x86_64
libglusterd0-8.2-4.el8.x86_64
libglusterfs0-8.2-4.el8.x86_64
nfs-ganesha-3.3-2.el8.x86_64
nfs-ganesha-gluster-3.3-2.el8.x86_64
nfs-ganesha-selinux-3.3-2.el8.noarch

SELinux status: Enforcing

@hunter86bg
Contributor Author

Unexporting also doesn't work (with SELinux permissive):

[root@glustera ~]# gluster volume set custdata ganesha.enable off
volume set: success
[root@glustera ~]# showmount -e localhost
Export list for localhost:
/custdata (everyone)
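
Until the fix is in, the stale export can be removed by hand over D-Bus, mirroring what the stop hook would do. A rough sketch, assuming NFS-Ganesha's export manager is reachable on the system bus and the export id is the 2 written above (adjust per the ShowExports output), run on every node:

# List the current exports and their ids (same call the start hook uses):
dbus-send --type=method_call --print-reply --system \
  --dest=org.ganesha.nfsd /org/ganesha/nfsd/ExportMgr \
  org.ganesha.nfsd.exportmgr.ShowExports
# Remove the export with id 2, then verify it is gone:
dbus-send --print-reply --system \
  --dest=org.ganesha.nfsd /org/ganesha/nfsd/ExportMgr \
  org.ganesha.nfsd.exportmgr.RemoveExport uint16:2
showmount -e localhost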

@hunter86bg
Contributor Author

I can confirm that the patch proposed on the mailing list works:

[root@glustera ~]# gluster volume set custdata ganesha.enable on
volume set: success
[root@glustera ~]# showmount -e localhost
Export list for localhost:
/custdata (everyone)
diff --git a/xlators/mgmt/glusterd/src/glusterd-op-sm.c b/xlators/mgmt/glusterd/src/glusterd-op-sm.c
index 558f04fb2..d7bf96adf 100644
--- a/xlators/mgmt/glusterd/src/glusterd-op-sm.c
+++ b/xlators/mgmt/glusterd/src/glusterd-op-sm.c
@@ -1177,7 +1177,7 @@ glusterd_op_stage_set_volume(dict_t *dict, char **op_errstr)
             }
         } else if (len_strcmp(key, keylen, "ganesha.enable")) {
             key_matched = _gf_true;
-            if (!strcmp(value, "off") == 0) {
+            if (strcmp(value, "off") == 0) {
                 ret = ganesha_manage_export(dict, "off", _gf_true,
                                             op_errstr);
                 if (ret)
                     goto out;

The pull request should be #1813
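
For anyone wondering why the one-character change matters: in C, the unary `!` binds tighter than `==`, so `!strcmp(value, "off") == 0` parses as `(!strcmp(value, "off")) == 0`, which is true exactly when value is NOT "off". That is why setting the option to "on" took the "off"/unexport path and glusterd replied "ganesha.enable is already 'off'" (modern compilers even warn about this pattern). A minimal standalone illustration of the two conditions, not glusterd code:

#include <stdio.h>
#include <string.h>

static void check(const char *value)
{
    /* Buggy condition from before the patch:
     * (!strcmp(value, "off")) == 0 is true whenever value != "off". */
    int buggy_off_branch = !strcmp(value, "off") == 0;
    /* Fixed condition: true only when value really is "off". */
    int fixed_off_branch = strcmp(value, "off") == 0;
    printf("value=%-3s  buggy takes 'off' branch: %d  fixed takes 'off' branch: %d\n",
           value, buggy_off_branch, fixed_off_branch);
}

int main(void)
{
    check("on");  /* buggy: 1 (wrongly tries to unexport), fixed: 0 */
    check("off"); /* buggy: 0 (wrongly skips unexport),     fixed: 1 */
    return 0;
}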

itisravi added a commit to itisravi/glusterfs that referenced this issue Nov 18, 2020
As detailed in the github issue, `gluster volume set $volname ganesha.enable on`
is currently broken due to a minor typo in commit e081ac6.

Fixing it now.

Updates: gluster#1778
Change-Id: I99276fedc43f40e8a439e545bd2b8d1698aa03ee
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
Tested-by: Strahil Nikolov <hunter86_bg@yahoo.com>
thotz pushed a commit that referenced this issue Nov 19, 2020
As detailed in the github issue, `gluster volume set $volname ganesha.enable on`
is currently broken due to a minor typo in commit e081ac6.

Fixing it now.

Updates: #1778
Change-Id: I99276fedc43f40e8a439e545bd2b8d1698aa03ee
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
Tested-by: Strahil Nikolov <hunter86_bg@yahoo.com>
itisravi added a commit to itisravi/glusterfs that referenced this issue Nov 19, 2020
As detailed in the github issue, `gluster volume set $volname ganesha.enable on`
is currently broken due to a minor typo in commit e081ac6.

Fixing it now.

Fixes: gluster#1778
Change-Id: I99276fedc43f40e8a439e545bd2b8d1698aa03ee
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
Tested-by: Strahil Nikolov <hunter86_bg@yahoo.com>
rkothiya pushed a commit that referenced this issue Nov 23, 2020
As detailed in the github issue, `gluster volume set $volname ganesha.enable on`
is currently broken due to a minor typo in commit e081ac6.

Fixing it now.

Fixes: #1778
Change-Id: I99276fedc43f40e8a439e545bd2b8d1698aa03ee
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
Tested-by: Strahil Nikolov <hunter86_bg@yahoo.com>
@rkothiya added the release 8 label Nov 29, 2020
@hunter86bg
Contributor Author

If it's merged, then we can close this one?

pranithk pushed a commit to pranithk/glusterfs that referenced this issue May 25, 2021
As detailed in the github issue, `gluster volume set $volname ganesha.enable on`
is currently broken due to a minor typo in commit e081ac6.

Fixing it now.

Fixes: gluster#1778
Change-Id: I99276fedc43f40e8a439e545bd2b8d1698aa03ee
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
Tested-by: Strahil Nikolov <hunter86_bg@yahoo.com>
@stale

stale bot commented Jul 24, 2021

Thank you for your contributions.
We noticed that this issue has not had any activity in the last ~6 months, so we are marking it as stale.
It will be closed in 2 weeks if no one responds with a comment here.

stale bot added the wontfix (Managed by stale[bot]) label Jul 24, 2021
@hunter86bg
Contributor Author

@pranithk, I thought this one was already fixed.

stale bot removed the wontfix (Managed by stale[bot]) label Jul 24, 2021
@hunter86bg
Contributor Author

Is this one fixed?
I think it should have been resolved a long time ago.

@xhernandez
Contributor

Yes, it's fixed. I'm closing this issue. Thanks!
