[Feature][Helm] Enable sidecar configuration in Helm chart #604

Merged 2 commits into ray-project:master on Sep 30, 2022

Conversation

kevin85421 (Member)

Why are these changes needed?

This PR exposes an interface in the KubeRay Helm charts that lets users configure sidecar containers (see the logging documentation and #353 for more details).
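Under the hood, the mechanism is simple: the chart renders whatever is listed under head.sidecarContainers (and the worker-side equivalents) into the pod's containers list, right after the main Ray container. The snippet below is only a rough, hypothetical sketch of how a chart template can do this; the exact template changed by this PR may differ in structure and indentation.

containers:
  - name: ray-head
    image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
    # ... main Ray container fields ...
{{- if .Values.head.sidecarContainers }}
  # Render the user-supplied sidecar list verbatim after the Ray container.
  {{- toYaml .Values.head.sidecarContainers | nindent 2 }}
{{- end }}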

Example

configMap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
    name: fluentbit-config
data:
    fluent-bit.conf: |
      [INPUT]
        Name tail
        Path /tmp/ray/session_latest/logs/*
        Tag ray
        Path_Key true
        Refresh_Interval 5
      [OUTPUT]
        Name stdout
        Match *
Updated values.yaml
  • Update head.volumes and head.sidecarContainers
# Default values for ray-cluster.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

image:
  repository: rayproject/ray
  tag: 2.0.0
  pullPolicy: IfNotPresent

nameOverride: "kuberay"
fullnameOverride: ""

imagePullSecrets: []
  # - name: an-existing-secret

head:
  groupName: headgroup
  replicas: 1
  type: head
  labels:
    key: value
  initArgs:
    port: '6379'
    redis-password: 'LetMeInRay'  # Deprecated since Ray 1.11 due to GCS bootstrapping enabled
    dashboard-host: '0.0.0.0'
    num-cpus: '1'  # can be auto-completed from the limits
    node-ip-address: $MY_POD_IP  # auto-completed as the head pod IP
    block: 'true'
  containerEnv:
    - name: MY_POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP
  envFrom: []
    # - secretRef:
    #     name: my-env-secret
  resources:
    limits:
      cpu: 1
    requests:
      cpu: 1
  annotations: {}
  nodeSelector: {}
  tolerations: []
  affinity: {}
  volumes:
    - name: log-volume
      emptyDir: {}
    - name: fluentbit-config
      configMap:
        name: fluentbit-config
  volumeMounts:
    - mountPath: /tmp/ray
      name: log-volume
  sidecarContainers:
    - name: fluentbit
      image: fluent/fluent-bit:1.9.6
      # These resource requests for Fluent Bit should be sufficient in production.
      resources:
        requests:
          cpu: 100m
          memory: 128Mi
        limits:
          cpu: 100m
          memory: 128Mi
      volumeMounts:
        - mountPath: /tmp/ray
          name: log-volume
        - mountPath: /fluent-bit/etc/fluent-bit.conf
          subPath: fluent-bit.conf
          name: fluentbit-config


worker:
  # If you want to disable the default workergroup
  # uncomment the line below
  # disabled: true
  groupName: workergroup
  replicas: 1
  type: worker
  labels:
    key: value
  initArgs:
    node-ip-address: $MY_POD_IP
    redis-password: LetMeInRay
    block: 'true'
  containerEnv:
    - name: MY_POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP
    - name: RAY_DISABLE_DOCKER_CPU_WARNING
      value: "1"
    - name: CPU_REQUEST
      valueFrom:
        resourceFieldRef:
          containerName: ray-worker
          resource: requests.cpu
    - name: MY_POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
  envFrom: []
    # - secretRef:
    #     name: my-env-secret
  ports:
    - containerPort: 80
      protocol: TCP
  resources:
    limits:
      cpu: 1
    requests:
      cpu: 200m
  annotations:
    key: value
  nodeSelector: {}
  tolerations: []
  affinity: {}
  volumes:
    - name: log-volume
      emptyDir: {}
  volumeMounts:
    - mountPath: /tmp/ray
      name: log-volume
  sidecarContainers: {}

# The map's key is used as the groupName.
# For example, key:small-group in the map below
# will be used as the groupName
additionalWorkerGroups:
  small-group:
    # Disabled by default
    disabled: true
    replicas: 1
    miniReplicas: 1
    maxiReplicas: 3
    type: worker
    labels: {}
    initArgs:
      node-ip-address: $MY_POD_IP
      redis-password: LetMeInRay
      block: 'true'
    containerEnv:
      - name: MY_POD_IP
        valueFrom:
          fieldRef:
            fieldPath: status.podIP
      - name: RAY_DISABLE_DOCKER_CPU_WARNING
        value: "1"
      - name: CPU_REQUEST
        valueFrom:
          resourceFieldRef:
            containerName: ray-worker
            resource: requests.cpu
      - name: MY_POD_NAME
        valueFrom:
          fieldRef:
            fieldPath: metadata.name
    envFrom: []
      # - secretRef:
      #     name: my-env-secret
    ports:
      - containerPort: 80
        protocol: TCP
    resources:
      limits:
        cpu: 1
      requests:
        cpu: 200m
    annotations:
      key: value
    nodeSelector: {}
    tolerations: []
    affinity: {}
    volumes:
      - name: log-volume
        emptyDir: {}
    volumeMounts:
      - mountPath: /tmp/ray
        name: log-volume
    sidecarContainers: {}

headServiceSuffix: "ray-operator.svc"

service:
  type: ClusterIP
  port: 8080
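If you prefer not to edit the chart's values.yaml directly, the same sidecar setup can be supplied as a small override file passed to helm install with -f. The file below is a hypothetical example (the name sidecar-values.yaml is arbitrary); note that Helm replaces list-valued keys rather than merging them, so head.volumes must repeat the default log-volume entry.

sidecar-values.yaml
head:
  volumes:
    # Shared emptyDir where Ray writes its logs.
    - name: log-volume
      emptyDir: {}
    # ConfigMap created from configMap.yaml above.
    - name: fluentbit-config
      configMap:
        name: fluentbit-config
  volumeMounts:
    - mountPath: /tmp/ray
      name: log-volume
  sidecarContainers:
    - name: fluentbit
      image: fluent/fluent-bit:1.9.6
      volumeMounts:
        - mountPath: /tmp/ray
          name: log-volume
        - mountPath: /fluent-bit/etc/fluent-bit.conf
          subPath: fluent-bit.conf
          name: fluentbit-config

Install with: helm install ray-cluster . -f sidecar-values.yaml
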
# Step0: Install KubeRay operator

# (Path: helm-chart/ray-cluster)
# Step1: Create a configMap.yaml file, and create the ConfigMap
kubectl apply -f configMap.yaml

# Step2: Install the ray-cluster chart
helm install ray-cluster .

# Step3: Check the Fluent Bit sidecar logs
kubectl logs ray-cluster-kuberay-head-xxxxx -c fluentbit

(Screenshot, 2022-09-28: Fluent Bit sidecar log output)

Related issue number

Closes #511

Checks

See the "Example" section.

  • I've made sure the tests are passing.
  • Testing Strategy
    • Unit tests
    • Manual tests
    • This PR is not tested :(

@DmitriGekhtman (Collaborator) left a comment:

LGTM!

@DmitriGekhtman DmitriGekhtman merged commit 0adc508 into ray-project:master Sep 30, 2022
lowang-bh pushed a commit to lowang-bh/kuberay that referenced this pull request on Sep 24, 2023:

This PR exposes an interface in the ray-cluster helm chart to enable users to configure sidecar containers.