
Helm chart: Allow existing S3 config secret for the filer statefulset and the s3 deployment #5039

Merged
merged 5 commits into from Nov 21, 2023

Conversation

@jessebot commented Nov 21, 2023 (Contributor)

What problem are we solving?

This allows users to specify their own existing S3 configuration Kubernetes Secret via the values.yaml. Before this change, you could enable auth and use the default secret (one admin user and one read-only user), disable auth entirely, or enable auth and skip the secret creation, but there was no way to specify an existing secret. There was also an issue where the Kubernetes secret and volumes/volumeMounts were created even if you didn't enable auth.

How are we solving the problem?

I've added filer.s3.existingConfigSecret and s3.existingConfigSecret to values.yaml for the helm chart, so a user can specify their own s3 config secret with whatever name they'd like, as long as it has a key called seaweedfs_s3_config.

Example parameters for your values.yaml to use authentication and provide your own secret:

filer:
  s3:
    enabled: true
    enableAuth: true
    # you no longer need to set skipAuthSecretCreation
    # this can be any Kubernetes secret with a key called seaweedfs_s3_config containing your s3 config
    existingConfigSecret: my-s3-secret
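
The same pattern works for the standalone S3 deployment. A minimal sketch of the equivalent values.yaml keys (the secret name is illustrative):

s3:
  enabled: true
  enableAuth: true
  # any Kubernetes secret with a key called seaweedfs_s3_config
  existingConfigSecret: my-s3-secret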

Example of an existing secret containing your s3 config, which creates an admin user and a read-only user, each with credentials:

---
# Source: seaweedfs/templates/seaweedfs-s3-secret.yaml
apiVersion: v1
kind: Secret
type: Opaque
metadata:
  name: my-s3-secret
  namespace: seaweedfs
  labels:
    app.kubernetes.io/name: seaweedfs
    app.kubernetes.io/component: s3
stringData:
  # this key must be an inline json config file
  seaweedfs_s3_config: '{"identities":[{"name":"anvAdmin","credentials":[{"accessKey":"snu8yoP6QAlY0ne4","secretKey":"PNzBcmeLNEdR0oviwm04NQAicOrDH1Km"}],"actions":["Admin","Read","Write"]},{"name":"anvReadOnly","credentials":[{"accessKey":"SCigFee6c5lbi04A","secretKey":"kgFhbT38R8WUYVtiFQ1OiSVOrYr3NKku"}],"actions":["Read"]}]}'
Human-readable s3 config JSON from the seaweedfs_s3_config key in the above example Kubernetes secret

This is the contents of the seaweedfs_s3_config key in the secret above. It creates an admin user and a read-only user.

{
  "identities": [
    {
      "name": "anvAdmin",
      "credentials": [
        {
          "accessKey": "snu8yoP6QAlY0ne4",
          "secretKey": "PNzBcmeLNEdR0oviwm04NQAicOrDH1Km"
        }
      ],
      "actions": [
        "Admin",
        "Read",
        "Write"
      ]
    },
    {
      "name": "anvReadOnly",
      "credentials": [
        {
          "accessKey": "SCigFee6c5lbi04A",
          "secretKey": "kgFhbT38R8WUYVtiFQ1OiSVOrYr3NKku"
        }
      ],
      "actions": [
        "Read"
      ]
    }
  ]
}
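
If you keep this JSON in a local file, one way to create such a secret is with kubectl (a sketch; the file name s3_config.json is illustrative):

kubectl create secret generic my-s3-secret \
  --namespace seaweedfs \
  --from-file=seaweedfs_s3_config=s3_config.json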

Full list of changes made to facilitate this feature:

  • Now we only create the s3 secret volumes and volumeMounts in the S3 Deployment and filer StatefulSet if you set s3.enableAuth or filer.s3.enableAuth, respectively.

  • We use filer.s3.existingConfigSecret and s3.existingConfigSecret for the name of the S3 secret volume if they are set. The default value of "" falls back to the name of the secret the chart creates (see the template sketch after this list).

  • Added both existingConfigSecret values to the list of exceptions for the default s3 secret creation, meaning that if either is set, the default s3 secret will not be created.

  • Changed the name of the default secret template file to match the other s3-specific files by prefixing it with s3-, for easier maintainability.

  • Bumped the helm chart version to 3.59.2

  • Added an S3 section to the README.md in the helm chart directory
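
For reference, the secret-name fallback described above can be expressed with template logic along these lines (a hedged sketch, not the chart's exact code; the literal seaweedfs-s3-secret stands in for the chart's name helper):

        - name: config-users
          secret:
            defaultMode: 420
            # use the user-supplied secret name if set, else the chart default
            secretName: {{ .Values.filer.s3.existingConfigSecret | default "seaweedfs-s3-secret" }}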

How is the PR tested?

Clone the seaweedfs repo and cd into the k8s/charts/seaweedfs directory.
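
For example:

git clone https://github.com/seaweedfs/seaweedfs.git
cd seaweedfs/k8s/charts/seaweedfs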

First, test that the default configuration still works

We start by rendering the templates with s3 and s3 auth enabled:

$ helm template . --set filer.s3.enabled=true,filer.s3.enableAuth=true --output-dir auto-secret
wrote auto-secret/seaweedfs/templates/service-account.yaml
wrote auto-secret/seaweedfs/templates/master-configmap.yaml
wrote auto-secret/seaweedfs/templates/service-account.yaml
wrote auto-secret/seaweedfs/templates/service-account.yaml
wrote auto-secret/seaweedfs/templates/filer-service-client.yaml
wrote auto-secret/seaweedfs/templates/filer-service.yaml
wrote auto-secret/seaweedfs/templates/master-service.yaml
wrote auto-secret/seaweedfs/templates/s3-service.yaml
wrote auto-secret/seaweedfs/templates/volume-service.yaml
wrote auto-secret/seaweedfs/templates/filer-statefulset.yaml
wrote auto-secret/seaweedfs/templates/master-statefulset.yaml
wrote auto-secret/seaweedfs/templates/volume-statefulset.yaml
wrote auto-secret/seaweedfs/templates/service-account.yaml
wrote auto-secret/seaweedfs/templates/s3-secret.yaml
wrote auto-secret/seaweedfs/templates/secret-seaweedfs-db.yaml

We can see it wrote out the expected s3-secret.yaml above. Next, we need to check the auto-secret dir to make sure the correct secret name is rendered in the filer's statefulset:

$ grep -A 3 config-users auto-secret/seaweedfs/templates/filer-statefulset.yaml
            - name: config-users
              mountPath: /etc/sw
              readOnly: true
            - name: data-filer
--
        - name: config-users
          secret:
            defaultMode: 420
            secretName: seaweedfs-s3-secret

The secretName is the default secret name 🎉

full filer-statefulset.yaml
cat auto-secret/seaweedfs/templates/filer-statefulset.yaml

output:

---
# Source: seaweedfs/templates/filer-statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: seaweedfs-filer
  namespace: seaweedfs
  labels:
    app.kubernetes.io/name: seaweedfs
    helm.sh/chart: seaweedfs-3.59.2
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/component: filer
spec:
  serviceName: seaweedfs-filer
  podManagementPolicy: Parallel
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: seaweedfs
      helm.sh/chart: seaweedfs-3.59.2
      app.kubernetes.io/instance: release-name
      app.kubernetes.io/component: filer
  template:
    metadata:
      labels:
        app.kubernetes.io/name: seaweedfs
        helm.sh/chart: seaweedfs-3.59.2
        app.kubernetes.io/instance: release-name
        app.kubernetes.io/component: filer

      annotations:

    spec:
      restartPolicy: Always
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  app.kubernetes.io/name: seaweedfs
                  app.kubernetes.io/instance: release-name
                  app.kubernetes.io/component: filer
              topologyKey: kubernetes.io/hostname

      serviceAccountName: seaweedfs-rw-sa #hack for delete pod master after migration
      terminationGracePeriodSeconds: 60
      enableServiceLinks: false
      containers:
        - name: seaweedfs
          image: chrislusf/seaweedfs:3.59
          imagePullPolicy: IfNotPresent
          env:
            - name: POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: WEED_MYSQL_USERNAME
              valueFrom:
                secretKeyRef:
                  name: secret-seaweedfs-db
                  key: user
            - name: WEED_MYSQL_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: secret-seaweedfs-db
                  key: password
            - name: SEAWEEDFS_FULLNAME
              value: "seaweedfs"
            - name: WEED_FILER_BUCKETS_FOLDER
              value: "/buckets"
            - name: WEED_FILER_OPTIONS_RECURSIVE_DELETE
              value: "false"
            - name: WEED_LEVELDB2_ENABLED
              value: "true"
            - name: WEED_MYSQL_CONNECTION_MAX_IDLE
              value: "5"
            - name: WEED_MYSQL_CONNECTION_MAX_LIFETIME_SECONDS
              value: "600"
            - name: WEED_MYSQL_CONNECTION_MAX_OPEN
              value: "75"
            - name: WEED_MYSQL_DATABASE
              value: "sw_database"
            - name: WEED_MYSQL_ENABLED
              value: "false"
            - name: WEED_MYSQL_HOSTNAME
              value: "mysql-db-host"
            - name: WEED_MYSQL_INTERPOLATEPARAMS
              value: "true"
            - name: WEED_MYSQL_PORT
              value: "3306"
            - name: WEED_CLUSTER_DEFAULT
              value: "sw"
            - name: WEED_CLUSTER_SW_FILER
              value: "seaweedfs-filer-client.seaweedfs:8888"
            - name: WEED_CLUSTER_SW_MASTER
              value: "seaweedfs-master.seaweedfs:9333"
          command:
            - "/bin/sh"
            - "-ec"
            - |
              exec /usr/bin/weed \
              -logdir=/logs \
              -v=1 \
              filer \
              -port=8888 \
              -metricsPort=9327 \
              -dirListLimit=100000 \
              -defaultReplicaPlacement=000 \
              -ip=${POD_IP} \
              -s3 \
              -s3.port=8333 \
              -s3.config=/etc/sw/seaweedfs_s3_config \
              -master=${SEAWEEDFS_FULLNAME}-master-0.${SEAWEEDFS_FULLNAME}-master.seaweedfs:9333
          volumeMounts:
            - name: seaweedfs-filer-log-volume
              mountPath: "/logs/"
            - name: config-users
              mountPath: /etc/sw
              readOnly: true
            - name: data-filer
              mountPath: /data

          ports:
            - containerPort: 8888
              name: swfs-filer
            - containerPort: 9327
              name: metrics
            - containerPort: 18888
              #name: swfs-filer-grpc
          readinessProbe:
            httpGet:
              path: /
              port: 8888
              scheme:
            initialDelaySeconds: 10
            periodSeconds: 15
            successThreshold: 1
            failureThreshold: 100
            timeoutSeconds: 10
          livenessProbe:
            httpGet:
              path: /
              port: 8888
              scheme:
            initialDelaySeconds: 20
            periodSeconds: 30
            successThreshold: 1
            failureThreshold: 5
            timeoutSeconds: 10
      volumes:
        - name: seaweedfs-filer-log-volume
          hostPath:
            path: /storage/logs/seaweedfs/filer
            type: DirectoryOrCreate
        - name: data-filer
          hostPath:
            path: /storage/filer_store
            type: DirectoryOrCreate
        - name: db-schema-config-volume
          configMap:
            name: seaweedfs-db-init-config
        - name: config-users
          secret:
            defaultMode: 420
            secretName: seaweedfs-s3-secret

      nodeSelector:
        beta.kubernetes.io/arch: amd64

Second, let's test the new use case: using an existing secret

Render the helm templates with s3 enabled, s3 auth enabled, and an existing secret:

$ helm template . --set filer.s3.enabled=true,filer.s3.enableAuth=true,filer.s3.existingConfigSecret=mytestsecret --output-dir existing-secret
wrote existing-secret/seaweedfs/templates/service-account.yaml
wrote existing-secret/seaweedfs/templates/master-configmap.yaml
wrote existing-secret/seaweedfs/templates/service-account.yaml
wrote existing-secret/seaweedfs/templates/service-account.yaml
wrote existing-secret/seaweedfs/templates/filer-service-client.yaml
wrote existing-secret/seaweedfs/templates/filer-service.yaml
wrote existing-secret/seaweedfs/templates/master-service.yaml
wrote existing-secret/seaweedfs/templates/s3-service.yaml
wrote existing-secret/seaweedfs/templates/volume-service.yaml
wrote existing-secret/seaweedfs/templates/filer-statefulset.yaml
wrote existing-secret/seaweedfs/templates/master-statefulset.yaml
wrote existing-secret/seaweedfs/templates/volume-statefulset.yaml
wrote existing-secret/seaweedfs/templates/service-account.yaml
wrote existing-secret/seaweedfs/templates/secret-seaweedfs-db.yaml

Notice that no s3-secret.yaml is rendered, which is expected since we provided that secret ourselves.
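
You can double-check this from the shell (a sketch):

ls existing-secret/seaweedfs/templates | grep s3-secret
# no output here means the default secret template was not rendered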

Finally, let's verify that our existing secret name was rendered out correctly:

$ grep -A 3 config-users existing-secret/seaweedfs/templates/filer-statefulset.yaml
            - name: config-users
              mountPath: /etc/sw
              readOnly: true
            - name: data-filer
--
        - name: config-users
          secret:
            defaultMode: 420
            secretName: mytestsecret

That looks correct, as the secretName is set to mytestsecret 🎉

full filer statefulset for verification
cat existing-secret/seaweedfs/templates/filer-statefulset.yaml

output:

---
# Source: seaweedfs/templates/filer-statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: seaweedfs-filer
  namespace: seaweedfs
  labels:
    app.kubernetes.io/name: seaweedfs
    helm.sh/chart: seaweedfs-3.59.2
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/component: filer
spec:
  serviceName: seaweedfs-filer
  podManagementPolicy: Parallel
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: seaweedfs
      helm.sh/chart: seaweedfs-3.59.2
      app.kubernetes.io/instance: release-name
      app.kubernetes.io/component: filer
  template:
    metadata:
      labels:
        app.kubernetes.io/name: seaweedfs
        helm.sh/chart: seaweedfs-3.59.2
        app.kubernetes.io/instance: release-name
        app.kubernetes.io/component: filer

      annotations:

    spec:
      restartPolicy: Always
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  app.kubernetes.io/name: seaweedfs
                  app.kubernetes.io/instance: release-name
                  app.kubernetes.io/component: filer
              topologyKey: kubernetes.io/hostname

      serviceAccountName: seaweedfs-rw-sa #hack for delete pod master after migration
      terminationGracePeriodSeconds: 60
      enableServiceLinks: false
      containers:
        - name: seaweedfs
          image: chrislusf/seaweedfs:3.59
          imagePullPolicy: IfNotPresent
          env:
            - name: POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: WEED_MYSQL_USERNAME
              valueFrom:
                secretKeyRef:
                  name: secret-seaweedfs-db
                  key: user
            - name: WEED_MYSQL_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: secret-seaweedfs-db
                  key: password
            - name: SEAWEEDFS_FULLNAME
              value: "seaweedfs"
            - name: WEED_FILER_BUCKETS_FOLDER
              value: "/buckets"
            - name: WEED_FILER_OPTIONS_RECURSIVE_DELETE
              value: "false"
            - name: WEED_LEVELDB2_ENABLED
              value: "true"
            - name: WEED_MYSQL_CONNECTION_MAX_IDLE
              value: "5"
            - name: WEED_MYSQL_CONNECTION_MAX_LIFETIME_SECONDS
              value: "600"
            - name: WEED_MYSQL_CONNECTION_MAX_OPEN
              value: "75"
            - name: WEED_MYSQL_DATABASE
              value: "sw_database"
            - name: WEED_MYSQL_ENABLED
              value: "false"
            - name: WEED_MYSQL_HOSTNAME
              value: "mysql-db-host"
            - name: WEED_MYSQL_INTERPOLATEPARAMS
              value: "true"
            - name: WEED_MYSQL_PORT
              value: "3306"
            - name: WEED_CLUSTER_DEFAULT
              value: "sw"
            - name: WEED_CLUSTER_SW_FILER
              value: "seaweedfs-filer-client.seaweedfs:8888"
            - name: WEED_CLUSTER_SW_MASTER
              value: "seaweedfs-master.seaweedfs:9333"
          command:
            - "/bin/sh"
            - "-ec"
            - |
              exec /usr/bin/weed \
              -logdir=/logs \
              -v=1 \
              filer \
              -port=8888 \
              -metricsPort=9327 \
              -dirListLimit=100000 \
              -defaultReplicaPlacement=000 \
              -ip=${POD_IP} \
              -s3 \
              -s3.port=8333 \
              -s3.config=/etc/sw/seaweedfs_s3_config \
              -master=${SEAWEEDFS_FULLNAME}-master-0.${SEAWEEDFS_FULLNAME}-master.seaweedfs:9333
          volumeMounts:
            - name: seaweedfs-filer-log-volume
              mountPath: "/logs/"
            - name: config-users
              mountPath: /etc/sw
              readOnly: true
            - name: data-filer
              mountPath: /data

          ports:
            - containerPort: 8888
              name: swfs-filer
            - containerPort: 9327
              name: metrics
            - containerPort: 18888
              #name: swfs-filer-grpc
          readinessProbe:
            httpGet:
              path: /
              port: 8888
              scheme:
            initialDelaySeconds: 10
            periodSeconds: 15
            successThreshold: 1
            failureThreshold: 100
            timeoutSeconds: 10
          livenessProbe:
            httpGet:
              path: /
              port: 8888
              scheme:
            initialDelaySeconds: 20
            periodSeconds: 30
            successThreshold: 1
            failureThreshold: 5
            timeoutSeconds: 10
      volumes:
        - name: seaweedfs-filer-log-volume
          hostPath:
            path: /storage/logs/seaweedfs/filer
            type: DirectoryOrCreate
        - name: data-filer
          hostPath:
            path: /storage/filer_store
            type: DirectoryOrCreate
        - name: db-schema-config-volume
          configMap:
            name: seaweedfs-db-init-config
        - name: config-users
          secret:
            defaultMode: 420
            secretName: mytestsecret

      nodeSelector:
        beta.kubernetes.io/arch: amd64

Checks

  • I have added unit tests if possible.
  • I will add related wiki document changes and link to this PR after merging.

@jessebot jessebot changed the title Allow existing s3 secret for filer statefulset and s3 deployment Helm chart: Allow existing S3 config secret for the filer statefulset and the s3 deployment Nov 21, 2023
@@ -571,6 +571,9 @@ filer:
# enable user & permission to s3 (need to inject to all services)
enableAuth: false
skipAuthSecretCreation: false
@jessebot (Contributor, Author) commented:

As a suggestion, we could reasonably deprecate this value: ideally a user either enables auth and provides their own secret, or we create one for them, both of which are covered by enableAuth and existingConfigSecret for both s3 and filer.s3. I left it in, though, as I want the new feature to stand in production for a bit before officially starting the deprecation process. Same story with s3.skipAuthSecretCreation.

@chrislusf chrislusf merged commit f4cafc1 into seaweedfs:master Nov 21, 2023
6 checks passed
@jessebot jessebot deleted the allow-existing-s3-secret branch November 21, 2023 18:52