NSFS on Kubernetes
NSFS (short for Namespace-Filesystem) is a capability to use a shared filesystem (mounted in the endpoints) for the storage of S3 buckets, while keeping a 1-1 mapping between Object and File.
This feature is currently (2021-Apr-22) under development, and it is recommended to use the latest weekly master releases, starting from the build tagged with master-20210419.
Refer to this guide: https://github.com/noobaa/noobaa-core/wiki/Weekly-Master-Builds
For NSFS to work, it requires a PVC for the filesystem with a ReadWriteMany accessMode, so that the endpoints can scale to any node in the cluster and still share the volume.
Ideally this PVC will be allocated by a provisioner, such as the rook-ceph CSI provisioner (rook-ceph.cephfs.csi.ceph.com).
If you don't have a CSI provisioner, you can set up a local volume manually using this guide: https://github.com/noobaa/noobaa-core/wiki/NSFS-using-a-Local-PV-on-k8s
S3 access is determined by the mapping of each S3 account to a UID/GID (see Step 6 - Create Account) and by the access that UID/GID has to the directories and files in the filesystem. The filesystem admin should set up the ACLs/Unix permissions on the mounted FS path for the UIDs/GIDs that will be used to access it.
For dev/test the simplest way to set this up is to give full access to all:
mkdir -p /nsfs/fs1
chmod -R 777 /nsfs/fs1
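For anything beyond dev/test, a more restrictive setup scopes each directory to the UID/GID of the account that will use it. This is a sketch only; the jenia directory and the 1001 UID/GID are hypothetical and should match the values used later in Step 6 - Create Account.

```shell
# Hypothetical example: give a single UID/GID (1001:1001) exclusive access
# to its own directory instead of opening the whole filesystem to everyone
mkdir -p /nsfs/fs1/jenia
chown 1001:1001 /nsfs/fs1/jenia
chmod 770 /nsfs/fs1/jenia
```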
The endpoint pods should mount the PVC in order to access the filesystem.
This step should be automated by the operator, but for now we manually patch the endpoints deployment like this:
kubectl patch deployment noobaa-endpoint --patch '{
"spec": { "template": { "spec": {
"volumes": [{
"name": "nsfs",
"persistentVolumeClaim": {"claimName": "nsfs-vol"}
}],
"containers": [{
"name": "endpoint",
"volumeMounts": [{ "name": "nsfs", "mountPath": "/nsfs" }]
}]
}}}
}'
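After patching, it is worth verifying that the rollout completed and that the mount landed on the endpoint container. A sketch using standard kubectl commands:

```shell
# Wait for the patched deployment to roll out new endpoint pods
kubectl rollout status deployment noobaa-endpoint

# Confirm the nsfs volume mount is present on the endpoint container
kubectl get deployment noobaa-endpoint \
  -o jsonpath='{.spec.template.spec.containers[0].volumeMounts}'
```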
A namespace resource is a configuration entity that represents the mounted filesystem in the noobaa system.
You need to provide it with some information:
- name - choose how to name it, perhaps follow the same name as the PVC or the Filesystem. You will use this name later when creating buckets that use this filesystem.
- fs_root_path - The mount point of the filesystem in the endpoints (see Step 4 - Mount PVC).
- fs_backend (optional) - When empty, a basic POSIX filesystem is assumed. Supported backend types: NFSv4, CEPH_FS, GPFS. Setting a more specific backend allows optimizations based on the capabilities of the underlying filesystem.
Here is an example of calling this API:
noobaa api pool_api create_namespace_resource '{
"name": "fs1",
"nsfs_config": {
"fs_root_path": "/nsfs/fs1",
"fs_backend": "GPFS"
}
}'
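Since fs_backend can be omitted for a plain POSIX filesystem, a minimal variant of the same call looks as follows (the fs2 name and path here are hypothetical):

```shell
# Hypothetical example: omitting fs_backend assumes a basic POSIX filesystem
noobaa api pool_api create_namespace_resource '{
"name": "fs2",
"nsfs_config": {
"fs_root_path": "/nsfs/fs2"
}
}'
```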
Create accounts with NSFS configuration:
- Map the account to a UID/GID
- Set up the directory for new buckets created from S3 for this account (TBD)
- Note that allowed_buckets should be set to full_permission because the filesystem permissions of the UID will be used to resolve the allowed buckets for this account.
- default_resource is needed for bucket creation using S3, and nsfs_account_config.new_buckets_path is the path under which new buckets created via S3 will be placed.
noobaa api account_api create_account '{
"email": "jenia@noobaa.io",
"name": "jenia",
"has_login": false,
"s3_access": true,
"allowed_buckets": { "full_permission": true },
"default_resource": "fs1",
"nsfs_account_config": {
"uid": *INSERT_UID*,
"gid": *INSERT_GID*,
"new_buckets_path": "/"
}
}'
Create account returns a response with S3 credentials:
INFO[0001] ✅ RPC: account.create_account() Response OK: took 205.7ms
access_keys:
- access_key: *NOOBAA_ACCOUNT_ACCESS_KEY*
secret_key: *NOOBAA_ACCOUNT_SECRET_KEY*
You can also run a list accounts command to see the configured NSFS accounts (alongside all the other accounts in the system):
noobaa api account_api list_accounts
If you are interested in a particular account, you can read its information directly by email:
noobaa api account_api read_account '{"email":"jenia@noobaa.io"}'
Creating an NSFS bucket is like creating an "export" of a filesystem directory in the S3 service.
The following API call will create a bucket with the specified name and map it to the specified path within the NSFS resource created in Step 5 - Create NSFS Resource.
noobaa api bucket_api create_bucket '{
"name": "fs1-jenia-bucket",
"namespace":{
"write_resource": { "resource": "fs1", "path": "jenia/" },
"read_resources": [ { "resource": "fs1", "path": "jenia/" }]
}
}'
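Since S3 access is resolved through the filesystem permissions of the account's UID/GID (see Step 2), the exported path should exist and be owned by that UID/GID before the bucket is used. A sketch, reusing the UID/GID placeholders from Step 6:

```shell
# The exported directory must exist and be accessible to the account's UID/GID
mkdir -p /nsfs/fs1/jenia
chown *INSERT_UID*:*INSERT_GID* /nsfs/fs1/jenia
```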
Configure the S3 client application and access the FS via S3 from the endpoint
Application S3 config:
AWS_ACCESS_KEY_ID=*NOOBAA_ACCOUNT_ACCESS_KEY*
AWS_SECRET_ACCESS_KEY=*NOOBAA_ACCOUNT_SECRET_KEY*
S3_ENDPOINT=s3.noobaa.svc (or nodePort address from noobaa status)
BUCKET_NAME=fs1-jenia-bucket
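As a sketch, this is how the config above maps onto the standard AWS CLI; the endpoint URL depends on how the S3 service is exposed in your cluster:

```shell
export AWS_ACCESS_KEY_ID=*NOOBAA_ACCOUNT_ACCESS_KEY*
export AWS_SECRET_ACCESS_KEY=*NOOBAA_ACCOUNT_SECRET_KEY*

# List and upload objects through the NooBaa S3 endpoint
aws --endpoint-url http://s3.noobaa.svc s3 ls s3://fs1-jenia-bucket
aws --endpoint-url http://s3.noobaa.svc s3 cp ./hello.txt s3://fs1-jenia-bucket/hello.txt
```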
Use the S3 client configured in step 8 to create new buckets under the new_buckets_path in the default_resource configured for the requesting account.
For instance, using the s3 CLI tool:
# s3 mb s3://test
A new filesystem directory called test will be created by noobaa.
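Because of the 1-1 mapping between objects and files, the result is visible directly on the shared filesystem; with the account config above (default_resource fs1, new_buckets_path "/"), the new bucket should appear as a directory under the resource root:

```shell
# The bucket created via S3 shows up as a plain directory on the filesystem
ls -ld /nsfs/fs1/test
```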