NSFS on Kubernetes

Ashish Pandey edited this page Apr 21, 2023 · 32 revisions

NSFS (short for Namespace-Filesystem) is a capability to use a shared filesystem (mounted in the endpoints) as the storage for S3 buckets, while keeping a 1-1 mapping between object and file.

Step 1 - Deploy in Kubernetes

This feature is currently under development; it is recommended to use the latest weekly master builds, starting from the build tagged master-20210419.

Refer to this guide: https://github.com/noobaa/noobaa-core/wiki/Weekly-Master-Builds

Step 2 - Create PVC

NSFS requires a PVC for the filesystem with a ReadWriteMany accessMode, so that the endpoints can be scaled to any node in the cluster and still share the volume.

Ideally this PVC will be allocated by a provisioner, such as the rook-ceph CSI provisioner for CephFS (rook-ceph.cephfs.csi.ceph.com).

If you don't have a CSI provisioner you can just set up a local volume manually using this guide: https://github.com/noobaa/noobaa-core/wiki/NSFS-using-a-Local-PV-on-k8s
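As a sketch, a ReadWriteMany PVC for NSFS could look like the following. The name nsfs-vol matches the namespacestore example later in this guide; the namespace, storage class name (rook-cephfs) and the size are assumptions — adjust them to your cluster:

```shell
# Hypothetical PVC manifest for NSFS.
# Assumptions: the "noobaa" namespace, the "rook-cephfs" storage class, 100Gi size.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nsfs-vol
  namespace: noobaa
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Gi
  storageClassName: rook-cephfs
EOF
```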

Step 3 - Access Permissions

S3 access is determined by the mapping of each S3 account to a UID/GID (see Step 7 - Create Account(s)) and by the access that UID/GID has to the directories and files in the filesystem. The filesystem admin should set up the ACLs/unix permissions on the mounted FS path for the UIDs/GIDs that will be used to access it.

For dev/test the simplest way to set this up is to give full access to all:

mkdir -p /nsfs/bucket-path
chmod -R 777 /nsfs/bucket-path

NOTE: on minikube, run minikube ssh and then run the above commands with sudo
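For anything beyond dev/test, you would typically scope the permissions to the specific UID/GID used by the S3 accounts rather than 777. A minimal local sketch (the /tmp path and mode 770 are illustrative; the chown to the account's UID/GID additionally requires root on the real mount):

```shell
# Illustrative only: restrict the bucket path to owner/group instead of 777.
# /tmp/nsfs-demo stands in for the mounted filesystem path.
mkdir -p /tmp/nsfs-demo/bucket-path
chmod -R 770 /tmp/nsfs-demo/bucket-path
# chown -R <uid>:<gid> /tmp/nsfs-demo/bucket-path   # run as root on the real mount
stat -c '%a' /tmp/nsfs-demo/bucket-path             # prints 770
```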

Step 4 - Create NSFS Resource

A namespace resource is a configuration entity that represents the mounted filesystem in the noobaa system.

You need to provide it with some information:

  • name - choose how to name it, perhaps follow the same name as the PVC or the Filesystem. You will use this name later when creating buckets that use this filesystem.
  • pvc-name - the name of the pvc in which the filesystem resides.
  • fs_backend (optional) - when empty, a plain POSIX filesystem is assumed. Supported backend types: NFSv4, CEPH_FS, GPFS. Setting a more specific backend allows optimizations based on the capabilities of the underlying filesystem.

Here is an example of calling this API:

noobaa namespacestore create nsfs fs1 --pvc-name='nsfs-vol' --fs-backend='GPFS'

NOTE: on minikube do not use the fs-backend flag; leave it empty.

Step 5 - Create Bucket(s) - via noobaa API

NSFS buckets are like an "export" of a filesystem directory through the S3 service.

The following API call creates a bucket with the specified name and maps it to the specified path inside the NSFS resource created in Step 4 - Create NSFS Resource.

noobaa api bucket_api create_bucket '{
  "name": "fs1-jenia-bucket",
  "namespace":{
    "write_resource": { "resource": "fs1", "path": "bucket-path/" },
    "read_resources": [ { "resource": "fs1", "path": "bucket-path/" }]
  }
}'

Step 6 - Add bucket policy

NOTE: the bucket policy should be updated using the admin account. Use the admin credentials from noobaa status.

aws s3api put-bucket-policy --bucket fs1-jenia-bucket --policy file://policy.json

policy.json:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "id-1",
      "Effect": "Allow",
      "Principal": "*",
      "Action": ["s3:*"],
      "Resource": ["arn:aws:s3:::*"]
    }
  ]
}
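The same step can be scripted in one go, writing the policy file with a heredoc and then applying it (the aws call is left commented out since it needs a live endpoint and the admin credentials from noobaa status):

```shell
# Write the bucket policy to a file, then apply it with the admin credentials.
# The /tmp path is an arbitrary choice for this sketch.
cat > /tmp/policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "id-1",
      "Effect": "Allow",
      "Principal": "*",
      "Action": ["s3:*"],
      "Resource": ["arn:aws:s3:::*"]
    }
  ]
}
EOF
# aws s3api put-bucket-policy --bucket fs1-jenia-bucket --policy file:///tmp/policy.json
```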

Step 7 - Create Account(s)

Create accounts with NSFS configuration:

  • Map the account to a UID/GID
  • Set up the directory for new buckets created from S3 for this account (TBD)
  • Note that allowed_buckets should be set to full_permission because the filesystem permissions of the UID will be used to resolve the allowed buckets for this account.
noobaa api account_api create_account '{
  "email": "jenia@noobaa.io",
  "name": "jenia",
  "has_login": false,
  "s3_access": true,
  "default_resource": "fs1",
  "nsfs_account_config": {
    "uid": *INSERT_UID*,
    "gid": *INSERT_GID*,
    "new_buckets_path": "/",
    "nsfs_only": *INSERT_HERE*
  },
  "bucket_claim_owner": "fs1-jenia-bucket"
}'

Field notes (kept out of the JSON above, since comments are not valid JSON):

  • default_resource - required for bucket creation via S3.
  • new_buckets_path - the path under which new buckets created via S3 will be placed.
  • nsfs_only - a boolean that defines the account's access to non-NSFS buckets (when true, the account can access NSFS buckets only).
  • bucket_claim_owner - needed because allowed_buckets was removed; avoid using this on a real system and use a bucket policy instead.

Create account returns a response with S3 credentials:

INFO[0001] ✅ RPC: account.create_account() Response OK: took 205.7ms 
access_keys:
- access_key: *NOOBAA_ACCOUNT_ACCESS_KEY*
  secret_key: *NOOBAA_ACCOUNT_SECRET_KEY*

You can also run the list_accounts command to see the configured NSFS accounts (alongside all other accounts of the system):

noobaa api account_api list_accounts {}

If you are interested in a particular account you can read its information directly by email:

noobaa api account_api read_account '{"email":"jenia@noobaa.io"}'

Step 8 - Connect and configure S3 Client

Configure the S3 client application to access the FS via S3 through the endpoint. Use the S3 credentials (access_key and secret_key) returned in Step 7 - Create Account(s).

Application S3 config:

AWS_ACCESS_KEY_ID=*NOOBAA_ACCOUNT_ACCESS_KEY*
AWS_SECRET_ACCESS_KEY=*NOOBAA_ACCOUNT_SECRET_KEY*
S3_ENDPOINT=s3.noobaa.svc (or nodePort address from noobaa status)
BUCKET_NAME=fs1-jenia-bucket

As we can create multiple accounts, it is helpful to configure these keys and endpoints as aliases that can be used in Step 9. For example:

alias s3-user-1='AWS_ACCESS_KEY_ID=NsFsisNamEFlSytm AWS_SECRET_ACCESS_KEY=HiN00baa0nK8SDfsyV+VLoGK6ZMyCEDvklQCqW0 aws --endpoint-url "NodePort address" --no-verify-ssl s3'

Step 9 - Create Bucket(s) - via S3

Use the S3 client configured in step 8 to create new buckets under the new_buckets_path in the default_resource configured by the requesting account.

The aws s3 CLI is already part of the alias created in Step 8.

# s3-user-1 mb s3://test-bucket

A new filesystem directory called "test-bucket" will be created by noobaa. Based on the input we provided in this guide, the "test-bucket" directory can be seen under /nsfs/fs1 in the "endpoint" pod.
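The 1-1 object/file mapping means every S3 key corresponds to a plain file path under the bucket's directory. A purely local sketch of that layout, using a scratch directory in place of the mounted PVC (paths and names here are illustrative, not produced by noobaa):

```shell
# Local illustration of the object<->file mapping; /tmp/nsfs-mapping-demo
# stands in for the mounted filesystem, "test-bucket" for the S3 bucket.
MOUNT=/tmp/nsfs-mapping-demo
mkdir -p "$MOUNT/test-bucket/docs"
printf 'hello\n' > "$MOUNT/test-bucket/docs/hello.txt"
# An S3 GET of s3://test-bucket/docs/hello.txt would read exactly this file:
cat "$MOUNT/test-bucket/docs/hello.txt"   # prints hello
```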