Support for DHSS=true mode in Manila #1087
We have considered multi-tenancy issues but have not yet committed to any specific effort. The best out-of-the-box option, I think, would be running multiple Ganesha instances in containers, but I understand the desire for the scalability of a single Ganesha instance.

I think you were asking about each tenant utilizing a unique server IP address? If so, we COULD enhance the EXPORT security to add server IP address constraints in the same way client addresses are restricted.

Note that exports are hidden from clients that don't have access, which is helpful (though the pseudo path to the export may still be visible; this CAN be worked around by creating an FSAL_PSEUDO export for each tenant's sub-tree and restricting that to clients from that tenant). A useful enhancement would be to automate that and hide pseudofs directories from clients that don't have access to any exports under the directory, but that could be complex if there are several exports with different client lists (we have to build a superset that includes all the clients).

We have also considered enhancements to prevent a client from spoofing a handle into a different export, to help tenant isolation.
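To make the EXPORT/CLIENT restriction and the FSAL_PSEUDO workaround concrete, here is a minimal, illustrative ganesha.conf sketch. The paths, export IDs, and client CIDRs are made up for the example, not taken from any real deployment:

```
# Illustrative ganesha.conf fragment (paths, export IDs, and client
# CIDRs are invented for this example).

# Data export restricted to one tenant's clients:
EXPORT {
    Export_Id = 10;
    Path = /export/customer1/export1;
    Pseudo = /export/customer1/export1;
    Access_Type = RW;
    FSAL { Name = VFS; }
    CLIENT {
        Clients = 192.0.2.0/24;   # customer1's network
        Access_Type = RW;
    }
}

# FSAL_PSEUDO export covering the tenant's sub-tree, restricted to the
# same clients, so other tenants cannot browse /export/customer1:
EXPORT {
    Export_Id = 11;
    Path = /export/customer1;
    Pseudo = /export/customer1;
    Access_Type = None;
    FSAL { Name = PSEUDO; }
    CLIENT {
        Clients = 192.0.2.0/24;
        Access_Type = MDONLY_RO;
    }
}
```

The CLIENT block is what restrains access by client address today; the hypothetical server-IP constraint discussed above would be an additional restriction alongside it.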
But that limits it to the IPs assigned to the Ganesha units during the deployment, doesn't it?
So you mean that, when making a request to Manila, depending on the tenant in which the share is created, the resource will get a different uuid path? But anyone else with a proper IP in the allow list, and the path, can still mount it, right?
I don't think I grasp the concept.
What enhancements? :-)
And by the way, there are already specs for that. I know that from this point it would require a proper design, and the best option would be to find a sponsor for it.
So the pseudo-path stuff I was talking about earlier. Let's say you have these exports:

/export/customer1/export1
/export/customer1/export2
/export/customer2/export1
/export/customer2/export2

Without doing anything beyond configuring the 4 exports, all users will be able to see the 2 sub-directories of /export:

/export/customer1
/export/customer2

One assumes that customer1 would prefer customer2 not know that customer1 uses the service. So we COULD analyze the tree while building the pseudofs and only expose /export/customer1 to the union of clients that have access to /export/customer1/export1 and /export/customer1/export2. Then no customer2 clients would be able to see the existence of /export/customer1. Rinse and repeat for the customer2 exports.

The other enhancement would mostly benefit FSAL_VFS users, but might benefit other FSALs depending on how they work. Let's say we have two exports:

/export/vfs/fs1
/export/vfs/fs2

Now to understand the following, consider that a FSAL_VFS filehandle is more or less:

export_id:file_system_uuid:inode_number

Each export has a different set of allowed clients. Currently there is nothing stopping a client that has access to fs2 from constructing the handle:

export_id2:file_system_uuid_1:inode_number_0

FSAL_VFS will currently blindly accept this handle and give a client that has access to /export/vfs/fs2 access to files from /export/vfs/fs1. It would not be complex at all to verify that fs1 is exported by export 2, which it isn't, and return an access error (and further, log a message that an audit could find).

Definitely, support for DHSS=true sounds doable, but it would need someone to do the work.
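The handle check described above can be sketched in a few lines. This is a hedged illustration, not Ganesha's actual code: the names (`Export`, `Handle`, `handle_is_valid`) and the uuid strings are hypothetical, but the logic is the one proposed, rejecting any handle whose filesystem is not actually exported by the export it claims to belong to.

```python
# Hypothetical sketch of the FSAL_VFS handle validation discussed above.
# All names are illustrative; this is not Ganesha's real API.

from dataclasses import dataclass, field

@dataclass
class Export:
    export_id: int
    # Filesystem UUIDs actually reachable under this export's path.
    exported_fs_uuids: set = field(default_factory=set)

@dataclass
class Handle:
    export_id: int
    fs_uuid: str
    inode: int

def handle_is_valid(handle: Handle, exports: dict) -> bool:
    """Reject a handle whose filesystem is not exported by the export
    it claims to belong to (the spoofing case described above)."""
    export = exports.get(handle.export_id)
    if export is None:
        return False
    if handle.fs_uuid not in export.exported_fs_uuids:
        # A client spliced in a handle naming export 2 but filesystem 1:
        # deny, and (in real code) log a message an audit could find.
        return False
    return True

exports = {
    1: Export(1, {"fs-uuid-1"}),   # /export/vfs/fs1
    2: Export(2, {"fs-uuid-2"}),   # /export/vfs/fs2
}

# Legitimate handle: export 2 really does export fs2.
print(handle_is_valid(Handle(2, "fs-uuid-2", 100), exports))  # True
# Spoofed handle: export 2 does not export fs1, so it is rejected.
print(handle_is_valid(Handle(2, "fs-uuid-1", 100), exports))  # False
```

The same lookup table could also drive the pseudofs-hiding enhancement: the union of client lists across an export's sub-tree decides which clients may even see its pseudofs directory.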
What would it take to support that?
Do you think it would be possible to have tenant-isolated shares for Manila? For example, opening Neutron ports and redirecting traffic to the Ganesha server, and/or optionally spawning an instance the way Octavia does with the Amphora image. Thoughts?
I opened a discussion here: https://bugs.launchpad.net/charm-manila/+bug/2054432 about the possibilities of an open driver that would support this multi-tenancy and network isolation.

Thank you in advance for your input.