This repository has been archived by the owner on Mar 26, 2020. It is now read-only.

Add support for glusterfs client authorization #971

Closed
prashanthpai opened this issue Jul 9, 2018 · 0 comments · Fixed by #967
Labels
FW: Security FW: Volume Management GCS/1.0 Issue is blocker for Gluster for Container Storage

Comments

prashanthpai commented Jul 9, 2018

GlusterFS supports several methods to allow or disallow certain clients from accessing a volume. Users can restrict access by specifying host/IP address patterns via options such as auth.allow and auth.reject.
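As a rough illustration of how such pattern-based allow/reject checks work (this is a hypothetical sketch, not glusterfs's actual matcher, which has its own fnmatch-style implementation in C), the semantics can be modeled with Go's path.Match:

```go
package main

import (
	"fmt"
	"path"
)

// clientAllowed sketches auth.allow/auth.reject semantics: a client
// address is rejected if it matches any reject pattern, and otherwise
// allowed if it matches any allow pattern. This is an illustration only;
// the real glusterfs matcher differs in detail.
func clientAllowed(addr string, allow, reject []string) bool {
	matches := func(patterns []string) bool {
		for _, p := range patterns {
			// path.Match treats '*' as a wildcard over non-'/' characters,
			// which is close to the fnmatch-style patterns auth.allow uses.
			if ok, _ := path.Match(p, addr); ok {
				return true
			}
		}
		return false
	}
	if matches(reject) {
		return false
	}
	return matches(allow)
}

func main() {
	allow := []string{"192.168.1.*"}
	reject := []string{"192.168.1.13"}
	fmt.Println(clientAllowed("192.168.1.7", allow, reject))  // true
	fmt.Println(clientAllowed("192.168.1.13", allow, reject)) // false
}
```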

@prashanthpai prashanthpai added FW: Volume Management FW: Security GCS/1.0 Issue is blocker for Gluster for Container Storage labels Jul 9, 2018
@prashanthpai prashanthpai added this to the GCS-Sprint1 milestone Jul 9, 2018
gluster-ant pushed a commit to gluster/glusterfs that referenced this issue Jul 11, 2018
This change explicitly adds the 'ssl-allow' option to the server xlator's
options table so that glusterd2 can see it as a settable option. This
change also marks the 'auth.allow' and 'auth.reject' options as settable.

Glusterd2 doesn't maintain a separate volume options table. Glusterd2
dynamically loads shared objects of xlators to read their option table
and other information. Glusterd2 reads 'xlator_api_t' if available. If
that's not available, it falls back to reading just the options table
directly.
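The lookup order described above can be sketched as follows. This is a hypothetical simplification: the real glusterd2 loads the xlator's shared object via dlopen and reads the C structs through cgo, and the type and function names here are invented for illustration:

```go
package main

import "fmt"

// Option mirrors a single entry of an xlator's options table.
type Option struct {
	Key  string
	Type string
}

// XlatorAPI stands in for the C struct xlator_api_t; only the fields
// relevant to this sketch are modeled.
type XlatorAPI struct {
	Identifier string
	Options    []Option
}

// loadOptions illustrates the fallback: prefer the options table reached
// through xlator_api_t when the shared object exports it, otherwise fall
// back to reading the bare options table directly.
func loadOptions(api *XlatorAPI, bareTable []Option) []Option {
	if api != nil {
		return api.Options
	}
	return bareTable
}

func main() {
	api := &XlatorAPI{
		Identifier: "server",
		Options:    []Option{{"ssl-allow", "str"}},
	}
	fmt.Println(loadOptions(api, nil)[0].Key) // ssl-allow
	// Older xlators without xlator_api_t: only the bare table is available.
	fmt.Println(loadOptions(nil, []Option{{"auth.allow", "str"}})[0].Key) // auth.allow
}
```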

In glusterd2, volume set operations are performed by users on keys of
the format <xlator>.<option-name>. Glusterd2 uses xlator name set in
'xlator_api_t.identifier'. If that's not present it will use the shared
object's file name as xlator name. Hence, it is important for
'xlator_api_t.identifier' to be set properly, and in this case, the
proper value is "server". This name shall be used by users as prefix
while setting volume options implemented in server xlator. The name will
also be used in volfile.
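The naming rule above can be sketched in Go (glusterd2's language). This is an illustrative sketch under stated assumptions, not glusterd2's actual code; the function names are hypothetical:

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// xlatorName applies the rule described above: use xlator_api_t.identifier
// when it is set, otherwise fall back to the shared object's file name
// with the ".so" suffix stripped.
func xlatorName(identifier, soPath string) string {
	if identifier != "" {
		return identifier
	}
	return strings.TrimSuffix(filepath.Base(soPath), ".so")
}

// optionKey builds the <xlator>.<option-name> key that users pass to
// 'glustercli volume set'.
func optionKey(xlator, option string) string {
	return xlator + "." + option
}

func main() {
	// identifier set: the key users type is "server.ssl-allow"
	fmt.Println(optionKey(xlatorName("server", ""), "ssl-allow")) // server.ssl-allow
	// identifier missing: the file name decides the prefix
	fmt.Println(optionKey(xlatorName("", "/usr/lib/glusterfs/server.so"), "ssl-allow"))
}
```

This shows why a misnamed or unset identifier leaks straight into the user-visible option keys and the volfile.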

A user in glusterd2 can authorize a client over TLS as follows:

$ glustercli volume set <volname> server.ssl-allow <client1-CN>[,<clientN-CN>]

gd2 References:
gluster/glusterd2#971
gluster/glusterd2#214
gluster/glusterd2#967

Updates: bz#1193929
Change-Id: I59ef58acb8d51917e6365a83be03e79ae7c5ad17
Signed-off-by: Prashanth Pai <ppai@redhat.com>
amarts pushed a commit to amarts/glusterfs_fork that referenced this issue Sep 11, 2018
(Same commit message as above.)