On 06/01/2015 11:07 AM, Bipin Kunal wrote:
Hi All,
Is there a way to find the total number of gluster mounts? If not, what would be the complexity of this RFE?
As far as I understand, finding the number of FUSE mounts should be possible, but it seems unfeasible for NFS and Samba mounts.
Please let me know your thoughts on this.
On Wednesday 03 June 2015 09:13 AM, Pranith Kumar Karampuri wrote:
True. Bricks have connections from each of the clients. Each of the fuse/nfs/glustershd/quotad/gfapi-based clients (samba/glfsheal) would have a separate client context set on the bricks, so we can get this information. But, like you said, I am not sure how it can be done for the NFS server/Samba. Adding more people.
Pranith
This issue was created automatically with bugzilla2github
Bugzilla Bug 1231202
Date: 2015-06-12 07:40:00 -0400
From: Bipin Kunal <>
To: bugs
CC: bkunal, bugs, ndevos, sasundar
Blocker for: #1231171
See also: https://bugzilla.redhat.com/show_bug.cgi?id=1231207
See also: https://bugzilla.redhat.com/show_bug.cgi?id=1231175
See also: https://bugzilla.redhat.com/show_bug.cgi?id=1231171
Last updated: 2016-01-11 04:04:26 -0500
Comment 8330570
Date: 2015-06-12 07:40:07 -0400
From: Bipin Kunal <>
+++ This bug was initially created as a clone of Bug #1231175 +++
+++ This bug was initially created as a clone of Bug #1231171 +++
Description of problem:
We should be able to find the total number of glusterfs NFS client mounts from the server nodes. It should list all the NFS clients.
http://www.gluster.org/pipermail/gluster-devel/2015-June/045462.html
On Wednesday 03 June 2015 09:13 AM, Pranith Kumar Karampuri wrote:
It depends on why you would want to know about the clients:
For most use cases, the admin might just need to know how many Samba/NFS servers are currently using the given volume (say, just to perform an unmount everywhere). In this case, each Samba/NFS server is just like a FUSE mount, and we can use the same technique for it that Pranith mentioned above.
If the requirement is to identify all the machines which are accessing a volume (a probable use case: you may want an end user to close a file), the above method won't be sufficient. To get the details of SMB clients, you would have to run the 'smbstatus' command on all SMB server nodes; it outputs the details of connected SMB clients in this format:
PID Username Group Machine Protocol Version Service pid machine Connected at
Thanks,
Raghavendra Talur
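The 'smbstatus' aggregation described above could be scripted. A minimal sketch, assuming a simplified share-connection table: the sample text, service name, and hostnames below are invented, and real smbstatus output varies by Samba version.

```python
# Sketch: collect the set of client machines from the share-connection table
# that 'smbstatus' prints. The sample below is an invented, simplified
# excerpt; the real column layout differs between Samba versions.
sample = """\
Service      pid     machine      Connected at
----------------------------------------------------------
gvol0        12345   client-01    Wed Jun  3 09:13:00 2015
gvol0        12346   client-02    Wed Jun  3 09:14:10 2015
"""

def smb_client_machines(text):
    machines = set()
    for line in text.splitlines()[2:]:   # skip the header and separator lines
        fields = line.split()
        if len(fields) >= 3:
            machines.add(fields[2])      # third column is the client machine
    return sorted(machines)

print(smb_client_machines(sample))
```

Running this over the output collected from every SMB server node (and merging the results) would give the machine list Talur describes.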
Comment 8334648
Date: 2015-06-15 04:02:19 -0400
From: Bipin Kunal <>
Niels' reply, from the NFS point of view:
http://www.gluster.org/pipermail/gluster-devel/2015-June/045640.html
Gluster/NFS supports the 'showmount' command (over the MOUNT RPC protocol). It can be used to list all the NFS clients that have a volume/subdir mounted.
This list should not be trusted 100%, though. NFSv3 uses the MOUNT RPC protocol to get the file handle for the mountpoint; after that, the NFSv3 protocol can use the export for as long as it wants. When the NFS client unmounts the export, it sends the UMNT procedure to the NFS server, which causes the NFS-client/export combination to be removed from the client list (the showmount output).
A client that does not send a UMNT will not be removed from the list of active clients. This can happen when a client unmounts during network issues, or when a client spontaneously reboots (kernel panic, etc.). Very similar are clients that mount the exact same export/subdir on multiple mountpoints: the NFS server cannot differentiate between a client that did not send a UMNT and a client that mounts the same export/subdir more than once. These clients will only be listed once.
HTH,
HTH,
Niels
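For scripting, the client list described above can be grouped per export. A minimal sketch: the 'host:export' line format mirrors what 'showmount --all' prints, but the banner text, hostnames, and volume names below are illustrative.

```python
# Sketch: group NFS clients by export from 'showmount --all' style output.
# Per Niels' caveat above, entries only disappear when the client sends UMNT,
# so this list can contain stale clients.
sample = """\
All mount points on gluster-server:
client-01:/gvol0
client-02:/gvol0
client-02:/gvol1
"""

def clients_by_export(text):
    mounts = {}
    for line in text.splitlines()[1:]:   # skip the banner line
        host, _, export = line.partition(":")
        mounts.setdefault(export, set()).add(host)
    return {export: sorted(hosts) for export, hosts in mounts.items()}

print(clients_by_export(sample))
```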
Comment 8340402
Date: 2015-06-16 08:14:42 -0400
From: Niels de Vos <>
Does comment #1 not provide a solution for you?
Comment 8341771
Date: 2015-06-16 10:15:04 -0400
From: Bipin Kunal <>
Niels,
Yes, that does provide me a solution. But showmount will list all NFS mounts, irrespective of whether they are Gluster volume mounts. We can take the "showmount" output and display only the Gluster mount information.
I have opened this bug as a child bug of https://bugzilla.redhat.com/show_bug.cgi?id=1231171.
BZ: 1231171 addresses a tool which will list all the gluster mounts.
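The filtering Bipin proposes could look like this minimal sketch. In practice the volume names would come from 'gluster volume list'; here they are hard-coded, and the showmount lines are invented placeholders.

```python
# Sketch: keep only showmount entries whose export matches a known Gluster
# volume, discarding plain kernel-NFS exports. All names are placeholders.
gluster_volumes = {"gvol0", "gvol1"}  # in practice: 'gluster volume list' output

showmount_lines = [
    "client-01:/gvol0",
    "client-03:/exports/home",        # a non-Gluster NFS export, filtered out
    "client-02:/gvol1",
]

def gluster_mounts(lines, volumes):
    keep = []
    for line in lines:
        host, _, export = line.partition(":")
        if export.lstrip("/") in volumes:
            keep.append((host, export))
    return keep

print(gluster_mounts(showmount_lines, gluster_volumes))
```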
Comment 8430336
Date: 2015-07-12 18:22:36 -0400
From: Niels de Vos <>
(In reply to Bipin Kunal from comment #3)
It is not really clear to me what the expected result of this bug is. You can use 'showmount' to display which NFS clients mounted a volume/subdir. Is there anything else needed?
Note that you can set nfs.rmtab to a file on a Gluster/FUSE mountpoint. In that case, you can run 'showmount' against one Gluster server, and the clients from all servers will be listed. This comes with a performance cost while mounting, though: mount storms can get delayed quite a bit due to that option (try to boot a whole HPC cluster with hundreds of clients at once; mounting will get serialized).
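Set up as Niels describes, that would look roughly like the fragment below. The volume name, mountpoint, and rmtab path are placeholders, and 'nfs.rmtab' is the option name used in the comment above; check 'gluster volume set help' on your version before relying on it.

```shell
# Sketch with placeholder names: share one rmtab across all Gluster/NFS
# servers by pointing nfs.rmtab at a file on a FUSE mount of the volume.
mount -t glusterfs server1:/gvol0 /mnt/gvol0
gluster volume set gvol0 nfs.rmtab /mnt/gvol0/.rmtab

# Any one server should then report the NFS clients of all servers:
showmount --all server1
```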