This repository has been archived by the owner on Mar 3, 2023. It is now read-only.

[Heron-3723] Add support for Persistent Volumes for stateful storage #3725

Merged
merged 248 commits into from Nov 30, 2021

Conversation

surahman
Member

@surahman surahman commented Nov 2, 2021

Feature #3723: Add support for Persistent Volumes for stateful storage

Adds PersistentVolumeClaims and mount points, similar to the feature found in Spark.

This will overwrite any Volumes and VolumeMounts in a custom Pod Template that clash with CLI input; Heron's entries take precedence. This supports updating Volumes and VolumeMounts on topology submission.

PersistentVolumeClaims for dynamic volumes will be created in the same namespace as the Heron API Server. They will be removed once the Topology is killed.

Dynamic Volume provisioning must be enabled on the K8s cluster.

To disable the feature: -D heron.kubernetes.persistent.volume.claims.cli.disabled=true.

Added support for the following options:

  • storageClassName
  • sizeLimit
  • accessModes
  • volumeMode
  • path
  • subPath

CLI Command:

--config-property heron.kubernetes.volumes.persistentVolumeClaim.[VOLUME NAME].[OPTION]=[VALUE]

Example commands:

--config-property heron.kubernetes.volumes.persistentVolumeClaim.volumeNameOfChoice.storageClassName=storageClassNameOfChoice
--config-property heron.kubernetes.volumes.persistentVolumeClaim.volumeNameOfChoice.accessModes=comma,separated,list
--config-property heron.kubernetes.volumes.persistentVolumeClaim.volumeNameOfChoice.sizeLimit=555Gi
--config-property heron.kubernetes.volumes.persistentVolumeClaim.volumeNameOfChoice.volumeMode=volumeModeOfChoice
--config-property heron.kubernetes.volumes.persistentVolumeClaim.volumeNameOfChoice.path=path/to/mount
--config-property heron.kubernetes.volumes.persistentVolumeClaim.volumeNameOfChoice.subPath=sub/path/to/mount

Will generate the PVC:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nameOfVolumeClaim
spec:
  volumeName: volumeNameOfChoice
  accessModes:
    - comma
    - separated
    - list
  volumeMode: volumeModeOfChoice
  resources:
    requests:
      storage: 555Gi
  storageClassName: storageClassNameOfChoice

With Pod Spec entries for Volume:

volumes:
  - name: volumeNameOfChoice
    persistentVolumeClaim:
      claimName: nameOfVolumeClaim

With Executor container entries for VolumeMounts:

volumeMounts:
  - mountPath: path/to/mount
    subPath: sub/path/to/mount
    name: volumeNameOfChoice

DESIGN:

There is an enum in the KubernetesConstants class, Kubernetes.PersistentVolumeClaimOptions, which contains the accepted CLI options.

Parameters from the CLI are parsed into the data structure:
Map<String, Map<Kubernetes.PersistentVolumeClaimOptions, String>>.
This maps the volume's name to its key-value pairs for its configuration.

Adding more options requires only extending the PersistentVolumeClaimOptions enum and adding the corresponding entries to the switch statement in V1Controller.createPersistentVolumeClaims.
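The parsing step described above can be sketched as follows. This is a minimal, hypothetical stand-in, not the actual Heron code: PvcCliParser, its nested Option enum, and PREFIX are illustrative names mirroring Kubernetes.PersistentVolumeClaimOptions and the CLI key prefix.

```java
import java.util.HashMap;
import java.util.Map;

public class PvcCliParser {
  // Hypothetical stand-in for Kubernetes.PersistentVolumeClaimOptions.
  enum Option { claimName, storageClassName, sizeLimit, accessModes, volumeMode, path, subPath }

  static final String PREFIX = "heron.kubernetes.volumes.persistentVolumeClaim.";

  // Parses "<PREFIX><volume name>.<option>=<value>" entries into a map from
  // volume name to its option/value pairs.
  static Map<String, Map<Option, String>> parse(Map<String, String> cliProperties) {
    Map<String, Map<Option, String>> volumes = new HashMap<>();
    for (Map.Entry<String, String> entry : cliProperties.entrySet()) {
      if (!entry.getKey().startsWith(PREFIX)) {
        continue; // not a PVC property
      }
      String[] tail = entry.getKey().substring(PREFIX.length()).split("\\.", 2);
      if (tail.length != 2) {
        throw new IllegalArgumentException("Expected <volume name>.<option>: " + entry.getKey());
      }
      // Option.valueOf throws IllegalArgumentException on an unknown option name.
      volumes.computeIfAbsent(tail[0], k -> new HashMap<>())
          .put(Option.valueOf(tail[1]), entry.getValue());
    }
    return volumes;
  }
}
```

With this shape, a new CLI option really is just a new enum constant plus a branch in the consumer's switch.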

Added checks for null pointers: default-constructed V1 objects tend to have uninitialised fields set to null. Extracted <getConfigMaps> into a method to support mocking.
Judging from <release-11.0.0/kubernetes/src/main/java/io/kubernetes/client/openapi/apis/CoreV1Api.java>, "optional" means the field can be set to <null>.
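The null-guard pattern this implies can be sketched with a plain stand-in class (PodSpecLike and addVolume are illustrative, not the real V1PodSpec or Heron helpers):

```java
import java.util.ArrayList;
import java.util.List;

public class NullGuardDemo {
  // Stand-in for a default-constructed model such as V1PodSpec: the
  // openapi-generated classes leave collection fields null until set.
  static class PodSpecLike {
    List<String> volumes; // null by default, not an empty list
  }

  // Guard pattern: initialise the collection before appending, so callers
  // never hit a NullPointerException on a fresh object.
  static void addVolume(PodSpecLike spec, String volumeName) {
    if (spec.volumes == null) {
      spec.volumes = new ArrayList<>();
    }
    spec.volumes.add(volumeName);
  }
}
```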
@surahman
Member Author

surahman commented Nov 15, 2021

I added support for shared Volumes via a claimName but have not had the chance to complete a full battery of deployment tests.

A claim name of OnDemand will result in PVCs being created. Any other claim name will only add a Volume to the Pod Spec and a Volume Mount to the executor container; these are replicated across all Pods in the topology.
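That dispatch can be sketched with a hypothetical helper (not the actual V1Controller code). The per-pod claim name for the dynamic case follows Kubernetes' standard volumeClaimTemplates convention of appending the pod name, which matches the volumenameofchoice-acking-1 claim visible in the Static PVC dump further down.

```java
public class PvcNaming {
  static final String ON_DEMAND = "OnDemand";

  // Only the magic claim name triggers PVC creation by Heron.
  static boolean createsPvc(String claimName) {
    return ON_DEMAND.equals(claimName);
  }

  // Dynamic claims surface per pod as "<volume name>-<pod name>"; any other
  // claim name is referenced verbatim and therefore shared by every pod.
  static String effectiveClaimName(String claimName, String volumeName, String podName) {
    return createsPvc(claimName) ? volumeName + "-" + podName : claimName;
  }
}
```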

Documentation is here.

I think we may be able to whittle down the permissions for the PVC to:

rules:
  - apiGroups:
      - ""
    resources:
      - persistentvolumeclaims
    verbs:
      - create
      - deletecollection

<configurePodWithPersistentVolumeClaimVolumesAndMounts> adds both Volumes and Mounts.

// Testing loop.
for (TestTuple<Pair<List<V1Volume>, List<V1VolumeMount>>,
         Pair<List<V1Volume>, List<V1VolumeMount>>> testCase : testCases) {
Member Author

@surahman surahman Nov 16, 2021


Strange CI error: checkstyle reports this line at indentation level 12 when it appears to be at level 13.

[ERROR] /home/travis/.cache/bazel/_bazel_travis/be6dac4936703c7eedcb4f5cf38cdd65/execroot/org_apache_heron/heron/schedulers/tests/java/org/apache/heron/scheduler/kubernetes/V1ControllerTest.java:902: 'Pair' have incorrect indentation level 12, expected level should be 13. [Indentation]

Added a single space to line 902 to satisfy the check.
@surahman
Member Author

I ran some quick tests before starting to separate the executors and the manager, and everything looks good. The YAML for each case is below; I can confirm that a shared volume will not add a PVC.

Shared StatefulSet
apiVersion: apps/v1
kind: StatefulSet
metadata:
  creationTimestamp: "2021-11-18T17:23:24Z"
  generation: 1
  name: acking
  namespace: default
  resourceVersion: "2717"
  uid: f4e9b2a9-f880-4b80-bd15-7cd54cdd23fb
spec:
  podManagementPolicy: Parallel
  replicas: 3
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: heron
      topology: acking
  serviceName: acking
  template:
    metadata:
      annotations:
        prometheus.io/port: "8080"
        prometheus.io/scrape: "true"
      creationTimestamp: null
      labels:
        app: heron
        topology: acking
    spec:
      containers:
      - command:
        - sh
        - -c
        - './heron-core/bin/heron-downloader-config kubernetes && ./heron-core/bin/heron-downloader
          distributedlog://zookeeper:2181/heronbkdl/acking-saad-tag-0-160636999515090835.tar.gz
          . && SHARD_ID=${POD_NAME##*-} && echo shardId=${SHARD_ID} && ./heron-core/bin/heron-executor
          --topology-name=acking --topology-id=ackingcf66189f-ca7f-46a3-a0a9-cb41a5d658c4
          --topology-defn-file=acking.defn --state-manager-connection=zookeeper:2181
          --state-manager-root=/heron --state-manager-config-file=./heron-conf/statemgr.yaml
          --tmanager-binary=./heron-core/bin/heron-tmanager --stmgr-binary=./heron-core/bin/heron-stmgr
          --metrics-manager-classpath=./heron-core/lib/metricsmgr/* --instance-jvm-opts="LVhYOitIZWFwRHVtcE9uT3V0T2ZNZW1vcnlFcnJvcg(61)(61)"
          --classpath=heron-api-examples.jar --heron-internals-config-file=./heron-conf/heron_internals.yaml
          --override-config-file=./heron-conf/override.yaml --component-ram-map=exclaim1:1073741824,word:1073741824
          --component-jvm-opts="" --pkg-type=jar --topology-binary-file=heron-api-examples.jar
          --heron-java-home=$JAVA_HOME --heron-shell-binary=./heron-core/bin/heron-shell
          --cluster=kubernetes --role=saad --environment=default --instance-classpath=./heron-core/lib/instance/*
          --metrics-sinks-config-file=./heron-conf/metrics_sinks.yaml --scheduler-classpath=./heron-core/lib/scheduler/*:./heron-core/lib/packing/*:./heron-core/lib/statemgr/*
          --python-instance-binary=./heron-core/bin/heron-python-instance --cpp-instance-binary=./heron-core/bin/heron-cpp-instance
          --metricscache-manager-classpath=./heron-core/lib/metricscachemgr/* --metricscache-manager-mode=disabled
          --is-stateful=false --checkpoint-manager-classpath=./heron-core/lib/ckptmgr/*:./heron-core/lib/statefulstorage/*:
          --stateful-config-file=./heron-conf/stateful.yaml --checkpoint-manager-ram=1073741824
          --health-manager-mode=disabled --health-manager-classpath=./heron-core/lib/healthmgr/*
          --shard=$SHARD_ID --server-port=6001 --tmanager-controller-port=6002 --tmanager-stats-port=6003
          --shell-port=6004 --metrics-manager-port=6005 --scheduler-port=6006 --metricscache-manager-server-port=6007
          --metricscache-manager-stats-port=6008 --checkpoint-manager-port=6009'
        env:
        - name: HOST
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: status.podIP
        - name: POD_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.name
        - name: var_one
          value: variable one
        - name: var_three
          value: variable three
        - name: var_two
          value: variable two
        image: apache/heron:testbuild
        imagePullPolicy: IfNotPresent
        name: executor
        ports:
        - containerPort: 5555
          name: tcp-port-kept
          protocol: TCP
        - containerPort: 5556
          name: udp-port-kept
          protocol: UDP
        - containerPort: 6001
          name: server
          protocol: TCP
        - containerPort: 6002
          name: tmanager-ctl
          protocol: TCP
        - containerPort: 6003
          name: tmanager-stats
          protocol: TCP
        - containerPort: 6004
          name: shell-port
          protocol: TCP
        - containerPort: 6005
          name: metrics-mgr
          protocol: TCP
        - containerPort: 6006
          name: scheduler
          protocol: TCP
        - containerPort: 6007
          name: metrics-cache-m
          protocol: TCP
        - containerPort: 6008
          name: metrics-cache-s
          protocol: TCP
        - containerPort: 6009
          name: ckptmgr
          protocol: TCP
        resources:
          limits:
            cpu: "3"
            memory: 4Gi
          requests:
            cpu: "3"
            memory: 4Gi
        securityContext:
          allowPrivilegeEscalation: false
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /shared_volume
          name: shared-volume
        - mountPath: path/to/mount
          name: volumenameofchoice
          subPath: sub/path/to/mount
      - image: alpine
        imagePullPolicy: Always
        name: sidecar-container
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /shared_volume
          name: shared-volume
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 0
      tolerations:
      - effect: NoExecute
        key: node.kubernetes.io/not-ready
        operator: Exists
        tolerationSeconds: 10
      - effect: NoExecute
        key: node.kubernetes.io/unreachable
        operator: Exists
        tolerationSeconds: 10
      volumes:
      - emptyDir: {}
        name: shared-volume
      - name: volumenameofchoice
        persistentVolumeClaim:
          claimName: requested-claim-by-user
  updateStrategy:
    rollingUpdate:
      partition: 0
    type: RollingUpdate
status:
  collisionCount: 0
  currentReplicas: 3
  currentRevision: acking-76494fb95
  observedGeneration: 1
  replicas: 3
  updateRevision: acking-76494fb95
  updatedReplicas: 3
Shared Pod
apiVersion: v1
kind: Pod
metadata:
  annotations:
    prometheus.io/port: "8080"
    prometheus.io/scrape: "true"
  creationTimestamp: "2021-11-18T17:23:24Z"
  generateName: acking-
  labels:
    app: heron
    controller-revision-hash: acking-76494fb95
    statefulset.kubernetes.io/pod-name: acking-1
    topology: acking
  name: acking-1
  namespace: default
  ownerReferences:
  - apiVersion: apps/v1
    blockOwnerDeletion: true
    controller: true
    kind: StatefulSet
    name: acking
    uid: f4e9b2a9-f880-4b80-bd15-7cd54cdd23fb
  resourceVersion: "2718"
  uid: 0e28c20c-e3aa-4f36-a2dd-da063147f5c0
spec:
  containers:
  - command:
    - sh
    - -c
    - './heron-core/bin/heron-downloader-config kubernetes && ./heron-core/bin/heron-downloader
      distributedlog://zookeeper:2181/heronbkdl/acking-saad-tag-0-160636999515090835.tar.gz
      . && SHARD_ID=${POD_NAME##*-} && echo shardId=${SHARD_ID} && ./heron-core/bin/heron-executor
      --topology-name=acking --topology-id=ackingcf66189f-ca7f-46a3-a0a9-cb41a5d658c4
      --topology-defn-file=acking.defn --state-manager-connection=zookeeper:2181 --state-manager-root=/heron
      --state-manager-config-file=./heron-conf/statemgr.yaml --tmanager-binary=./heron-core/bin/heron-tmanager
      --stmgr-binary=./heron-core/bin/heron-stmgr --metrics-manager-classpath=./heron-core/lib/metricsmgr/*
      --instance-jvm-opts="LVhYOitIZWFwRHVtcE9uT3V0T2ZNZW1vcnlFcnJvcg(61)(61)" --classpath=heron-api-examples.jar
      --heron-internals-config-file=./heron-conf/heron_internals.yaml --override-config-file=./heron-conf/override.yaml
      --component-ram-map=exclaim1:1073741824,word:1073741824 --component-jvm-opts=""
      --pkg-type=jar --topology-binary-file=heron-api-examples.jar --heron-java-home=$JAVA_HOME
      --heron-shell-binary=./heron-core/bin/heron-shell --cluster=kubernetes --role=saad
      --environment=default --instance-classpath=./heron-core/lib/instance/* --metrics-sinks-config-file=./heron-conf/metrics_sinks.yaml
      --scheduler-classpath=./heron-core/lib/scheduler/*:./heron-core/lib/packing/*:./heron-core/lib/statemgr/*
      --python-instance-binary=./heron-core/bin/heron-python-instance --cpp-instance-binary=./heron-core/bin/heron-cpp-instance
      --metricscache-manager-classpath=./heron-core/lib/metricscachemgr/* --metricscache-manager-mode=disabled
      --is-stateful=false --checkpoint-manager-classpath=./heron-core/lib/ckptmgr/*:./heron-core/lib/statefulstorage/*:
      --stateful-config-file=./heron-conf/stateful.yaml --checkpoint-manager-ram=1073741824
      --health-manager-mode=disabled --health-manager-classpath=./heron-core/lib/healthmgr/*
      --shard=$SHARD_ID --server-port=6001 --tmanager-controller-port=6002 --tmanager-stats-port=6003
      --shell-port=6004 --metrics-manager-port=6005 --scheduler-port=6006 --metricscache-manager-server-port=6007
      --metricscache-manager-stats-port=6008 --checkpoint-manager-port=6009'
    env:
    - name: HOST
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: status.podIP
    - name: POD_NAME
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: metadata.name
    - name: var_one
      value: variable one
    - name: var_three
      value: variable three
    - name: var_two
      value: variable two
    image: apache/heron:testbuild
    imagePullPolicy: IfNotPresent
    name: executor
    ports:
    - containerPort: 5555
      name: tcp-port-kept
      protocol: TCP
    - containerPort: 5556
      name: udp-port-kept
      protocol: UDP
    - containerPort: 6001
      name: server
      protocol: TCP
    - containerPort: 6002
      name: tmanager-ctl
      protocol: TCP
    - containerPort: 6003
      name: tmanager-stats
      protocol: TCP
    - containerPort: 6004
      name: shell-port
      protocol: TCP
    - containerPort: 6005
      name: metrics-mgr
      protocol: TCP
    - containerPort: 6006
      name: scheduler
      protocol: TCP
    - containerPort: 6007
      name: metrics-cache-m
      protocol: TCP
    - containerPort: 6008
      name: metrics-cache-s
      protocol: TCP
    - containerPort: 6009
      name: ckptmgr
      protocol: TCP
    resources:
      limits:
        cpu: "3"
        memory: 4Gi
      requests:
        cpu: "3"
        memory: 4Gi
    securityContext:
      allowPrivilegeEscalation: false
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /shared_volume
      name: shared-volume
    - mountPath: path/to/mount
      name: volumenameofchoice
      subPath: sub/path/to/mount
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: kube-api-access-8pmpr
      readOnly: true
  - image: alpine
    imagePullPolicy: Always
    name: sidecar-container
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /shared_volume
      name: shared-volume
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: kube-api-access-8pmpr
      readOnly: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  hostname: acking-1
  preemptionPolicy: PreemptLowerPriority
  priority: 0
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: default
  serviceAccountName: default
  subdomain: acking
  terminationGracePeriodSeconds: 0
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 10
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 10
  volumes:
  - emptyDir: {}
    name: shared-volume
  - name: volumenameofchoice
    persistentVolumeClaim:
      claimName: requested-claim-by-user
  - name: kube-api-access-8pmpr
    projected:
      defaultMode: 420
      sources:
      - serviceAccountToken:
          expirationSeconds: 3607
          path: token
      - configMap:
          items:
          - key: ca.crt
            path: ca.crt
          name: kube-root-ca.crt
      - downwardAPI:
          items:
          - fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
            path: namespace
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2021-11-18T17:23:24Z"
    message: '0/1 nodes are available: 1 persistentvolumeclaim "requested-claim-by-user"
      not found.'
    reason: Unschedulable
    status: "False"
    type: PodScheduled
  phase: Pending
  qosClass: Burstable
Static StatefulSet
apiVersion: apps/v1
kind: StatefulSet
metadata:
  creationTimestamp: "2021-11-18T17:12:52Z"
  generation: 1
  name: acking
  namespace: default
  resourceVersion: "2061"
  uid: 54337851-2bc2-4cf3-bf66-d9e8a5716aa2
spec:
  podManagementPolicy: Parallel
  replicas: 3
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: heron
      topology: acking
  serviceName: acking
  template:
    metadata:
      annotations:
        prometheus.io/port: "8080"
        prometheus.io/scrape: "true"
      creationTimestamp: null
      labels:
        app: heron
        topology: acking
    spec:
      containers:
      - command:
        - sh
        - -c
        - './heron-core/bin/heron-downloader-config kubernetes && ./heron-core/bin/heron-downloader
          distributedlog://zookeeper:2181/heronbkdl/acking-saad-tag-0--1619810271492859618.tar.gz
          . && SHARD_ID=${POD_NAME##*-} && echo shardId=${SHARD_ID} && ./heron-core/bin/heron-executor
          --topology-name=acking --topology-id=acking87b0a985-49ec-4749-aa2d-d7f4aaa736a4
          --topology-defn-file=acking.defn --state-manager-connection=zookeeper:2181
          --state-manager-root=/heron --state-manager-config-file=./heron-conf/statemgr.yaml
          --tmanager-binary=./heron-core/bin/heron-tmanager --stmgr-binary=./heron-core/bin/heron-stmgr
          --metrics-manager-classpath=./heron-core/lib/metricsmgr/* --instance-jvm-opts="LVhYOitIZWFwRHVtcE9uT3V0T2ZNZW1vcnlFcnJvcg(61)(61)"
          --classpath=heron-api-examples.jar --heron-internals-config-file=./heron-conf/heron_internals.yaml
          --override-config-file=./heron-conf/override.yaml --component-ram-map=exclaim1:1073741824,word:1073741824
          --component-jvm-opts="" --pkg-type=jar --topology-binary-file=heron-api-examples.jar
          --heron-java-home=$JAVA_HOME --heron-shell-binary=./heron-core/bin/heron-shell
          --cluster=kubernetes --role=saad --environment=default --instance-classpath=./heron-core/lib/instance/*
          --metrics-sinks-config-file=./heron-conf/metrics_sinks.yaml --scheduler-classpath=./heron-core/lib/scheduler/*:./heron-core/lib/packing/*:./heron-core/lib/statemgr/*
          --python-instance-binary=./heron-core/bin/heron-python-instance --cpp-instance-binary=./heron-core/bin/heron-cpp-instance
          --metricscache-manager-classpath=./heron-core/lib/metricscachemgr/* --metricscache-manager-mode=disabled
          --is-stateful=false --checkpoint-manager-classpath=./heron-core/lib/ckptmgr/*:./heron-core/lib/statefulstorage/*:
          --stateful-config-file=./heron-conf/stateful.yaml --checkpoint-manager-ram=1073741824
          --health-manager-mode=disabled --health-manager-classpath=./heron-core/lib/healthmgr/*
          --shard=$SHARD_ID --server-port=6001 --tmanager-controller-port=6002 --tmanager-stats-port=6003
          --shell-port=6004 --metrics-manager-port=6005 --scheduler-port=6006 --metricscache-manager-server-port=6007
          --metricscache-manager-stats-port=6008 --checkpoint-manager-port=6009'
        env:
        - name: HOST
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: status.podIP
        - name: POD_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.name
        - name: var_one
          value: variable one
        - name: var_three
          value: variable three
        - name: var_two
          value: variable two
        image: apache/heron:testbuild
        imagePullPolicy: IfNotPresent
        name: executor
        ports:
        - containerPort: 5555
          name: tcp-port-kept
          protocol: TCP
        - containerPort: 5556
          name: udp-port-kept
          protocol: UDP
        - containerPort: 6001
          name: server
          protocol: TCP
        - containerPort: 6002
          name: tmanager-ctl
          protocol: TCP
        - containerPort: 6003
          name: tmanager-stats
          protocol: TCP
        - containerPort: 6004
          name: shell-port
          protocol: TCP
        - containerPort: 6005
          name: metrics-mgr
          protocol: TCP
        - containerPort: 6006
          name: scheduler
          protocol: TCP
        - containerPort: 6007
          name: metrics-cache-m
          protocol: TCP
        - containerPort: 6008
          name: metrics-cache-s
          protocol: TCP
        - containerPort: 6009
          name: ckptmgr
          protocol: TCP
        resources:
          limits:
            cpu: "3"
            memory: 4Gi
          requests:
            cpu: "3"
            memory: 4Gi
        securityContext:
          allowPrivilegeEscalation: false
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /shared_volume
          name: shared-volume
        - mountPath: path/to/mount
          name: volumenameofchoice
          subPath: sub/path/to/mount
      - image: alpine
        imagePullPolicy: Always
        name: sidecar-container
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /shared_volume
          name: shared-volume
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 0
      tolerations:
      - effect: NoExecute
        key: node.kubernetes.io/not-ready
        operator: Exists
        tolerationSeconds: 10
      - effect: NoExecute
        key: node.kubernetes.io/unreachable
        operator: Exists
        tolerationSeconds: 10
      volumes:
      - emptyDir: {}
        name: shared-volume
  updateStrategy:
    rollingUpdate:
      partition: 0
    type: RollingUpdate
  volumeClaimTemplates:
  - apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      creationTimestamp: null
      labels:
        onDemand: "true"
        topology: acking
      name: volumenameofchoice
    spec:
      accessModes:
      - ReadWriteOnce
      - ReadOnlyMany
      resources:
        requests:
          storage: 555Gi
      volumeMode: Block
    status:
      phase: Pending
status:
  collisionCount: 0
  currentReplicas: 3
  currentRevision: acking-6c4c987ddc
  observedGeneration: 1
  replicas: 3
  updateRevision: acking-6c4c987ddc
  updatedReplicas: 3
Static Pod
apiVersion: v1
kind: Pod
metadata:
  annotations:
    prometheus.io/port: "8080"
    prometheus.io/scrape: "true"
  creationTimestamp: "2021-11-18T17:12:52Z"
  generateName: acking-
  labels:
    app: heron
    controller-revision-hash: acking-6c4c987ddc
    statefulset.kubernetes.io/pod-name: acking-1
    topology: acking
  name: acking-1
  namespace: default
  ownerReferences:
  - apiVersion: apps/v1
    blockOwnerDeletion: true
    controller: true
    kind: StatefulSet
    name: acking
    uid: 54337851-2bc2-4cf3-bf66-d9e8a5716aa2
  resourceVersion: "2051"
  uid: 027fff42-3f1e-4457-a3f8-6505b256f3af
spec:
  containers:
  - command:
    - sh
    - -c
    - './heron-core/bin/heron-downloader-config kubernetes && ./heron-core/bin/heron-downloader
      distributedlog://zookeeper:2181/heronbkdl/acking-saad-tag-0--1619810271492859618.tar.gz
      . && SHARD_ID=${POD_NAME##*-} && echo shardId=${SHARD_ID} && ./heron-core/bin/heron-executor
      --topology-name=acking --topology-id=acking87b0a985-49ec-4749-aa2d-d7f4aaa736a4
      --topology-defn-file=acking.defn --state-manager-connection=zookeeper:2181 --state-manager-root=/heron
      --state-manager-config-file=./heron-conf/statemgr.yaml --tmanager-binary=./heron-core/bin/heron-tmanager
      --stmgr-binary=./heron-core/bin/heron-stmgr --metrics-manager-classpath=./heron-core/lib/metricsmgr/*
      --instance-jvm-opts="LVhYOitIZWFwRHVtcE9uT3V0T2ZNZW1vcnlFcnJvcg(61)(61)" --classpath=heron-api-examples.jar
      --heron-internals-config-file=./heron-conf/heron_internals.yaml --override-config-file=./heron-conf/override.yaml
      --component-ram-map=exclaim1:1073741824,word:1073741824 --component-jvm-opts=""
      --pkg-type=jar --topology-binary-file=heron-api-examples.jar --heron-java-home=$JAVA_HOME
      --heron-shell-binary=./heron-core/bin/heron-shell --cluster=kubernetes --role=saad
      --environment=default --instance-classpath=./heron-core/lib/instance/* --metrics-sinks-config-file=./heron-conf/metrics_sinks.yaml
      --scheduler-classpath=./heron-core/lib/scheduler/*:./heron-core/lib/packing/*:./heron-core/lib/statemgr/*
      --python-instance-binary=./heron-core/bin/heron-python-instance --cpp-instance-binary=./heron-core/bin/heron-cpp-instance
      --metricscache-manager-classpath=./heron-core/lib/metricscachemgr/* --metricscache-manager-mode=disabled
      --is-stateful=false --checkpoint-manager-classpath=./heron-core/lib/ckptmgr/*:./heron-core/lib/statefulstorage/*:
      --stateful-config-file=./heron-conf/stateful.yaml --checkpoint-manager-ram=1073741824
      --health-manager-mode=disabled --health-manager-classpath=./heron-core/lib/healthmgr/*
      --shard=$SHARD_ID --server-port=6001 --tmanager-controller-port=6002 --tmanager-stats-port=6003
      --shell-port=6004 --metrics-manager-port=6005 --scheduler-port=6006 --metricscache-manager-server-port=6007
      --metricscache-manager-stats-port=6008 --checkpoint-manager-port=6009'
    env:
    - name: HOST
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: status.podIP
    - name: POD_NAME
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: metadata.name
    - name: var_one
      value: variable one
    - name: var_three
      value: variable three
    - name: var_two
      value: variable two
    image: apache/heron:testbuild
    imagePullPolicy: IfNotPresent
    name: executor
    ports:
    - containerPort: 5555
      name: tcp-port-kept
      protocol: TCP
    - containerPort: 5556
      name: udp-port-kept
      protocol: UDP
    - containerPort: 6001
      name: server
      protocol: TCP
    - containerPort: 6002
      name: tmanager-ctl
      protocol: TCP
    - containerPort: 6003
      name: tmanager-stats
      protocol: TCP
    - containerPort: 6004
      name: shell-port
      protocol: TCP
    - containerPort: 6005
      name: metrics-mgr
      protocol: TCP
    - containerPort: 6006
      name: scheduler
      protocol: TCP
    - containerPort: 6007
      name: metrics-cache-m
      protocol: TCP
    - containerPort: 6008
      name: metrics-cache-s
      protocol: TCP
    - containerPort: 6009
      name: ckptmgr
      protocol: TCP
    resources:
      limits:
        cpu: "3"
        memory: 4Gi
      requests:
        cpu: "3"
        memory: 4Gi
    securityContext:
      allowPrivilegeEscalation: false
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /shared_volume
      name: shared-volume
    - mountPath: path/to/mount
      name: volumenameofchoice
      subPath: sub/path/to/mount
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: kube-api-access-txlbw
      readOnly: true
  - image: alpine
    imagePullPolicy: Always
    name: sidecar-container
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /shared_volume
      name: shared-volume
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: kube-api-access-txlbw
      readOnly: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  hostname: acking-1
  preemptionPolicy: PreemptLowerPriority
  priority: 0
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: default
  serviceAccountName: default
  subdomain: acking
  terminationGracePeriodSeconds: 0
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 10
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 10
  volumes:
  - name: volumenameofchoice
    persistentVolumeClaim:
      claimName: volumenameofchoice-acking-1
  - emptyDir: {}
    name: shared-volume
  - name: kube-api-access-txlbw
    projected:
      defaultMode: 420
      sources:
      - serviceAccountToken:
          expirationSeconds: 3607
          path: token
      - configMap:
          items:
          - key: ca.crt
            path: ca.crt
          name: kube-root-ca.crt
      - downwardAPI:
          items:
          - fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
            path: namespace
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2021-11-18T17:12:52Z"
    message: '0/1 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims.'
    reason: Unschedulable
    status: "False"
    type: PodScheduled
  phase: Pending
  qosClass: Burstable
Static PVC
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    volume.beta.kubernetes.io/storage-provisioner: k8s.io/minikube-hostpath
  creationTimestamp: "2021-11-18T17:12:52Z"
  finalizers:
  - kubernetes.io/pvc-protection
  labels:
    app: heron
    onDemand: "true"
    topology: acking
  name: volumenameofchoice-acking-1
  namespace: default
  resourceVersion: "2047"
  uid: 50a226ac-d7cb-44b9-8d55-8ecc9bf8796d
spec:
  accessModes:
  - ReadWriteOnce
  - ReadOnlyMany
  resources:
    requests:
      storage: 555Gi
  storageClassName: standard
  volumeMode: Block
status:
  phase: Pending
Dynamic StatefulSet
apiVersion: apps/v1
kind: StatefulSet
metadata:
  creationTimestamp: "2021-11-18T17:17:59Z"
  generation: 1
  name: acking
  namespace: default
  resourceVersion: "2412"
  uid: 4263ad61-4d70-4803-9664-5e1cc04c174e
spec:
  podManagementPolicy: Parallel
  replicas: 3
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: heron
      topology: acking
  serviceName: acking
  template:
    metadata:
      annotations:
        prometheus.io/port: "8080"
        prometheus.io/scrape: "true"
      creationTimestamp: null
      labels:
        app: heron
        topology: acking
    spec:
      containers:
      - command:
        - sh
        - -c
        - './heron-core/bin/heron-downloader-config kubernetes && ./heron-core/bin/heron-downloader
          distributedlog://zookeeper:2181/heronbkdl/acking-saad-tag-0--8727299585809965289.tar.gz
          . && SHARD_ID=${POD_NAME##*-} && echo shardId=${SHARD_ID} && ./heron-core/bin/heron-executor
          --topology-name=acking --topology-id=ackinge77c4afd-1609-4616-8344-a3474f0cd2ba
          --topology-defn-file=acking.defn --state-manager-connection=zookeeper:2181
          --state-manager-root=/heron --state-manager-config-file=./heron-conf/statemgr.yaml
          --tmanager-binary=./heron-core/bin/heron-tmanager --stmgr-binary=./heron-core/bin/heron-stmgr
          --metrics-manager-classpath=./heron-core/lib/metricsmgr/* --instance-jvm-opts="LVhYOitIZWFwRHVtcE9uT3V0T2ZNZW1vcnlFcnJvcg(61)(61)"
          --classpath=heron-api-examples.jar --heron-internals-config-file=./heron-conf/heron_internals.yaml
          --override-config-file=./heron-conf/override.yaml --component-ram-map=exclaim1:1073741824,word:1073741824
          --component-jvm-opts="" --pkg-type=jar --topology-binary-file=heron-api-examples.jar
          --heron-java-home=$JAVA_HOME --heron-shell-binary=./heron-core/bin/heron-shell
          --cluster=kubernetes --role=saad --environment=default --instance-classpath=./heron-core/lib/instance/*
          --metrics-sinks-config-file=./heron-conf/metrics_sinks.yaml --scheduler-classpath=./heron-core/lib/scheduler/*:./heron-core/lib/packing/*:./heron-core/lib/statemgr/*
          --python-instance-binary=./heron-core/bin/heron-python-instance --cpp-instance-binary=./heron-core/bin/heron-cpp-instance
          --metricscache-manager-classpath=./heron-core/lib/metricscachemgr/* --metricscache-manager-mode=disabled
          --is-stateful=false --checkpoint-manager-classpath=./heron-core/lib/ckptmgr/*:./heron-core/lib/statefulstorage/*:
          --stateful-config-file=./heron-conf/stateful.yaml --checkpoint-manager-ram=1073741824
          --health-manager-mode=disabled --health-manager-classpath=./heron-core/lib/healthmgr/*
          --shard=$SHARD_ID --server-port=6001 --tmanager-controller-port=6002 --tmanager-stats-port=6003
          --shell-port=6004 --metrics-manager-port=6005 --scheduler-port=6006 --metricscache-manager-server-port=6007
          --metricscache-manager-stats-port=6008 --checkpoint-manager-port=6009'
        env:
        - name: HOST
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: status.podIP
        - name: POD_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.name
        - name: var_one
          value: variable one
        - name: var_three
          value: variable three
        - name: var_two
          value: variable two
        image: apache/heron:testbuild
        imagePullPolicy: IfNotPresent
        name: executor
        ports:
        - containerPort: 5555
          name: tcp-port-kept
          protocol: TCP
        - containerPort: 5556
          name: udp-port-kept
          protocol: UDP
        - containerPort: 6001
          name: server
          protocol: TCP
        - containerPort: 6002
          name: tmanager-ctl
          protocol: TCP
        - containerPort: 6003
          name: tmanager-stats
          protocol: TCP
        - containerPort: 6004
          name: shell-port
          protocol: TCP
        - containerPort: 6005
          name: metrics-mgr
          protocol: TCP
        - containerPort: 6006
          name: scheduler
          protocol: TCP
        - containerPort: 6007
          name: metrics-cache-m
          protocol: TCP
        - containerPort: 6008
          name: metrics-cache-s
          protocol: TCP
        - containerPort: 6009
          name: ckptmgr
          protocol: TCP
        resources:
          limits:
            cpu: "3"
            memory: 4Gi
          requests:
            cpu: "3"
            memory: 4Gi
        securityContext:
          allowPrivilegeEscalation: false
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /shared_volume
          name: shared-volume
        - mountPath: path/to/mount
          name: volumenameofchoice
          subPath: sub/path/to/mount
      - image: alpine
        imagePullPolicy: Always
        name: sidecar-container
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /shared_volume
          name: shared-volume
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 0
      tolerations:
      - effect: NoExecute
        key: node.kubernetes.io/not-ready
        operator: Exists
        tolerationSeconds: 10
      - effect: NoExecute
        key: node.kubernetes.io/unreachable
        operator: Exists
        tolerationSeconds: 10
      volumes:
      - emptyDir: {}
        name: shared-volume
  updateStrategy:
    rollingUpdate:
      partition: 0
    type: RollingUpdate
  volumeClaimTemplates:
  - apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      creationTimestamp: null
      labels:
        onDemand: "true"
        topology: acking
      name: volumenameofchoice
    spec:
      accessModes:
      - ReadWriteOnce
      - ReadOnlyMany
      resources:
        requests:
          storage: 555Gi
      storageClassName: storage-class-name
      volumeMode: Block
    status:
      phase: Pending
status:
  collisionCount: 0
  currentReplicas: 3
  currentRevision: acking-7bfcb94659
  observedGeneration: 1
  replicas: 3
  updateRevision: acking-7bfcb94659
  updatedReplicas: 3
Dynamic Pod
apiVersion: v1
kind: Pod
metadata:
  annotations:
    prometheus.io/port: "8080"
    prometheus.io/scrape: "true"
  creationTimestamp: "2021-11-18T17:17:59Z"
  generateName: acking-
  labels:
    app: heron
    controller-revision-hash: acking-7bfcb94659
    statefulset.kubernetes.io/pod-name: acking-1
    topology: acking
  name: acking-1
  namespace: default
  ownerReferences:
  - apiVersion: apps/v1
    blockOwnerDeletion: true
    controller: true
    kind: StatefulSet
    name: acking
    uid: 4263ad61-4d70-4803-9664-5e1cc04c174e
  resourceVersion: "2406"
  uid: 807570fd-1a38-4a5a-bb5e-cb5fe13acecc
spec:
  containers:
  - command:
    - sh
    - -c
    - './heron-core/bin/heron-downloader-config kubernetes && ./heron-core/bin/heron-downloader
      distributedlog://zookeeper:2181/heronbkdl/acking-saad-tag-0--8727299585809965289.tar.gz
      . && SHARD_ID=${POD_NAME##*-} && echo shardId=${SHARD_ID} && ./heron-core/bin/heron-executor
      --topology-name=acking --topology-id=ackinge77c4afd-1609-4616-8344-a3474f0cd2ba
      --topology-defn-file=acking.defn --state-manager-connection=zookeeper:2181 --state-manager-root=/heron
      --state-manager-config-file=./heron-conf/statemgr.yaml --tmanager-binary=./heron-core/bin/heron-tmanager
      --stmgr-binary=./heron-core/bin/heron-stmgr --metrics-manager-classpath=./heron-core/lib/metricsmgr/*
      --instance-jvm-opts="LVhYOitIZWFwRHVtcE9uT3V0T2ZNZW1vcnlFcnJvcg(61)(61)" --classpath=heron-api-examples.jar
      --heron-internals-config-file=./heron-conf/heron_internals.yaml --override-config-file=./heron-conf/override.yaml
      --component-ram-map=exclaim1:1073741824,word:1073741824 --component-jvm-opts=""
      --pkg-type=jar --topology-binary-file=heron-api-examples.jar --heron-java-home=$JAVA_HOME
      --heron-shell-binary=./heron-core/bin/heron-shell --cluster=kubernetes --role=saad
      --environment=default --instance-classpath=./heron-core/lib/instance/* --metrics-sinks-config-file=./heron-conf/metrics_sinks.yaml
      --scheduler-classpath=./heron-core/lib/scheduler/*:./heron-core/lib/packing/*:./heron-core/lib/statemgr/*
      --python-instance-binary=./heron-core/bin/heron-python-instance --cpp-instance-binary=./heron-core/bin/heron-cpp-instance
      --metricscache-manager-classpath=./heron-core/lib/metricscachemgr/* --metricscache-manager-mode=disabled
      --is-stateful=false --checkpoint-manager-classpath=./heron-core/lib/ckptmgr/*:./heron-core/lib/statefulstorage/*:
      --stateful-config-file=./heron-conf/stateful.yaml --checkpoint-manager-ram=1073741824
      --health-manager-mode=disabled --health-manager-classpath=./heron-core/lib/healthmgr/*
      --shard=$SHARD_ID --server-port=6001 --tmanager-controller-port=6002 --tmanager-stats-port=6003
      --shell-port=6004 --metrics-manager-port=6005 --scheduler-port=6006 --metricscache-manager-server-port=6007
      --metricscache-manager-stats-port=6008 --checkpoint-manager-port=6009'
    env:
    - name: HOST
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: status.podIP
    - name: POD_NAME
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: metadata.name
    - name: var_one
      value: variable one
    - name: var_three
      value: variable three
    - name: var_two
      value: variable two
    image: apache/heron:testbuild
    imagePullPolicy: IfNotPresent
    name: executor
    ports:
    - containerPort: 5555
      name: tcp-port-kept
      protocol: TCP
    - containerPort: 5556
      name: udp-port-kept
      protocol: UDP
    - containerPort: 6001
      name: server
      protocol: TCP
    - containerPort: 6002
      name: tmanager-ctl
      protocol: TCP
    - containerPort: 6003
      name: tmanager-stats
      protocol: TCP
    - containerPort: 6004
      name: shell-port
      protocol: TCP
    - containerPort: 6005
      name: metrics-mgr
      protocol: TCP
    - containerPort: 6006
      name: scheduler
      protocol: TCP
    - containerPort: 6007
      name: metrics-cache-m
      protocol: TCP
    - containerPort: 6008
      name: metrics-cache-s
      protocol: TCP
    - containerPort: 6009
      name: ckptmgr
      protocol: TCP
    resources:
      limits:
        cpu: "3"
        memory: 4Gi
      requests:
        cpu: "3"
        memory: 4Gi
    securityContext:
      allowPrivilegeEscalation: false
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /shared_volume
      name: shared-volume
    - mountPath: path/to/mount
      name: volumenameofchoice
      subPath: sub/path/to/mount
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: kube-api-access-wl7h4
      readOnly: true
  - image: alpine
    imagePullPolicy: Always
    name: sidecar-container
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /shared_volume
      name: shared-volume
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: kube-api-access-wl7h4
      readOnly: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  hostname: acking-1
  preemptionPolicy: PreemptLowerPriority
  priority: 0
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: default
  serviceAccountName: default
  subdomain: acking
  terminationGracePeriodSeconds: 0
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 10
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 10
  volumes:
  - name: volumenameofchoice
    persistentVolumeClaim:
      claimName: volumenameofchoice-acking-1
  - emptyDir: {}
    name: shared-volume
  - name: kube-api-access-wl7h4
    projected:
      defaultMode: 420
      sources:
      - serviceAccountToken:
          expirationSeconds: 3607
          path: token
      - configMap:
          items:
          - key: ca.crt
            path: ca.crt
          name: kube-root-ca.crt
      - downwardAPI:
          items:
          - fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
            path: namespace
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2021-11-18T17:17:59Z"
    message: '0/1 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims.'
    reason: Unschedulable
    status: "False"
    type: PodScheduled
  phase: Pending
  qosClass: Burstable
Dynamic PVC
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  creationTimestamp: "2021-11-18T17:17:59Z"
  finalizers:
  - kubernetes.io/pvc-protection
  labels:
    app: heron
    onDemand: "true"
    topology: acking
  name: volumenameofchoice-acking-1
  namespace: default
  resourceVersion: "2395"
  uid: 65a9291e-ea83-41bb-a1ce-56d76574f0f3
spec:
  accessModes:
  - ReadWriteOnce
  - ReadOnlyMany
  resources:
    requests:
      storage: 555Gi
  storageClassName: storage-class-name
  volumeMode: Block
status:
  phase: Pending

@surahman (Member Author):

Updated documentation.

@surahman (Member Author):

I have completed my final review of this PR and it is now completed on my end (functionality, documentation, and testing). It is now awaiting review, testing, and merging.

@nicknezis (Contributor) left a comment:


What is the difference between "dynamic" and "static"? Is it merely the presence of the storage class name? Is there any difference with relating to the deletion of the PVCs when the job is killed? Perhaps they both are the same use case and should just be the "dynamic" (i.e. OnDemand) mode?

(Four review comments on website2/docs/schedulers-k8s-persistent-volume-claims.md were resolved.)
surahman and others added 4 commits November 28, 2021 11:18
Co-authored-by: Nicholas Nezis <nicholas.nezis@gmail.com>
Co-authored-by: Nicholas Nezis <nicholas.nezis@gmail.com>
Co-authored-by: Nicholas Nezis <nicholas.nezis@gmail.com>
Co-authored-by: Nicholas Nezis <nicholas.nezis@gmail.com>
@surahman
Copy link
Member Author

What is the difference between "dynamic" and "static"? Is it merely the presence of the storage class name?

Correct, they differ only in their Storage Class name; please see Provisioning for Persistent Volumes.

Is there any difference with relating to the deletion of the PVCs when the job is killed?

Heron will remove all PVCs it adds for a topology, matching them via selector labels, irrespective of whether they were provisioned dynamically or statically.
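
A minimal sketch (assuming nothing beyond the labels visible in the PVC manifests above — Heron's actual implementation is in Java) of how such a label selector could be built:

```python
# Hypothetical sketch, not Heron's actual implementation: build the label
# selector matching the PVC metadata.labels shown in the dumps above
# (app: heron, topology: <name>, onDemand: "true"), which is how topology
# PVCs can be matched for bulk deletion when the topology is killed.
def pvc_label_selector(topology_name: str) -> str:
    labels = {"app": "heron", "topology": topology_name, "onDemand": "true"}
    return ",".join(f"{key}={value}" for key, value in sorted(labels.items()))

# The resulting selector could then be used with, for example:
#   kubectl delete pvc -l "<selector>" -n <namespace>
print(pvc_label_selector("acking"))  # app=heron,onDemand=true,topology=acking
```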

Perhaps they both are the same use case and should just be the "dynamic" (i.e. OnDemand) mode?

Correct, you caught a number of typos and stale command examples in the documentation. Claims for both dynamic and static Persistent Volumes are triggered using the Claim Name OnDemand.
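
For illustration only (option names follow the pattern in the PR description; OnDemand is the claim name cited in this thread, and the class names are placeholders), the two modes would differ only in which StorageClass the CLI supplies:

```
# Dynamic: storageClassName names a provisioner-backed StorageClass
--config-property heron.kubernetes.volumes.persistentVolumeClaim.volumeNameOfChoice.claimName=OnDemand
--config-property heron.kubernetes.volumes.persistentVolumeClaim.volumeNameOfChoice.storageClassName=storage-class-name

# Static: the same claim name, with the StorageClass of a pre-provisioned PV
--config-property heron.kubernetes.volumes.persistentVolumeClaim.volumeNameOfChoice.claimName=OnDemand
--config-property heron.kubernetes.volumes.persistentVolumeClaim.volumeNameOfChoice.storageClassName=standard
```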

Kubernetes Scheduler Improvements automation moved this from In progress to Reviewer approved Nov 29, 2021
@nicknezis nicknezis merged commit 5a1b981 into apache:master Nov 30, 2021
Kubernetes Scheduler Improvements automation moved this from Reviewer approved to Done Nov 30, 2021
@surahman (Member Author):

Thank you, Nick.

Merging this pull request closes: Add support for Persistent Volumes for stateful storage (#3723).