This repository has been archived by the owner on Apr 26, 2024. It is now read-only.

Support environment variables read from kubernetes secrets #83

Closed
1 task
tekumara opened this issue Jul 22, 2023 · 5 comments · Fixed by #94
Assignees
Labels
bug Something isn't working

Comments

@tekumara

Expectation / Proposal

eg:

      env:
        - name: AWS_ACCESS_KEY_ID
          valueFrom:
            secretKeyRef:
              name: minio
              key: root-user
        - name: AWS_SECRET_ACCESS_KEY
          valueFrom:
            secretKeyRef:
              name: minio
              key: root-password
@desertaxle
Member

Thanks for opening an issue @tekumara! You can allow your worker to read from Kubernetes secrets by editing the base job template for your work pool in the Prefect UI. You can extend the manifest defined in the job_configuration section of the base job template to include the values you want to pull from secrets. That new section can be hard-coded, or you can create new variables to allow values to be overridden by individual deployments.

Due to a quirk in the current implementation, you'll need to leave the env variables empty if you modify the env section of job_manifest in the base job template. Let me know if that is an issue and we can fix it.
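
To make the suggestion above concrete, here is a sketch of what that base-job-template edit might look like, reusing the minio secret from the issue description. The fields surrounding env follow the Kubernetes Job spec and are illustrative assumptions, not the exact shape of Prefect's default template:

```yaml
# Illustrative excerpt of a work pool base job template's job_configuration.
# The manifest structure around "env" is assumed from the Kubernetes Job spec.
job_configuration:
  job_manifest:
    apiVersion: batch/v1
    kind: Job
    spec:
      template:
        spec:
          containers:
            - name: prefect-job
              env:
                - name: AWS_ACCESS_KEY_ID
                  valueFrom:
                    secretKeyRef:
                      name: minio       # secret name from the issue example
                      key: root-user
                - name: AWS_SECRET_ACCESS_KEY
                  valueFrom:
                    secretKeyRef:
                      name: minio
                      key: root-password
```
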

@mousetree

@desertaxle Could you tell me what is meant by:

Due to a quirk in the current implementation, you'll need to leave the env variables empty if you modify the env section of job_manifest in the base job template. Let me know if that is an issue and we can fix it.

I'm having an issue where I'm hardcoding a bunch of env variables in the manifest, but they're not showing up in the pod. I'm not sure what or where I need to leave blank in the work pool manifest. Code samples would help.

@desertaxle
Member

@mousetree After taking a second look at the code, I see that we are overriding hardcoded env values in the manifest in all cases. We'll need to fix that behavior in order to support your use case.

@desertaxle desertaxle added the bug Something isn't working label Aug 21, 2023
@desertaxle desertaxle self-assigned this Aug 21, 2023
@mousetree

Thanks @desertaxle. For now we can just include them in the work pool. Does it handle arbitrary Kubernetes YAML there, though?

Currently we have:

definitions:
  work_pools: &staging-pool
    name: staging
    job_variables:
      image: "xxx/xxx:{{ get-commit-hash.stdout }}"
      env:
        # All other variables should be added in Doppler
        DD_ENV: staging
        DD_SERVICE: prefect-job
        DD_VERSION: "{{ get-commit-hash.stdout }}"
        PREFECT_LOGGING_HANDLERS_CONSOLE_FORMATTER: json
        PREFECT_LOGGING_HANDLERS_CONSOLE_TASK_RUNS_FORMATTER: json
        PREFECT_LOGGING_HANDLERS_CONSOLE_FLOW_RUNS_FORMATTER: json

But the above "object style" definitions are not the typical Kubernetes env var specification. Do you know how we could specify more complex things like these (from a Kubernetes YAML we used to use in an infra block):

env:
  - name: DD_ENV
    valueFrom:
      fieldRef:
        fieldPath: metadata.labels['tags.datadoghq.com/env']
  - name: DD_SERVICE
    valueFrom:
      fieldRef:
        fieldPath: metadata.labels['tags.datadoghq.com/service']
  - name: DD_VERSION
    valueFrom:
      fieldRef:
        fieldPath: metadata.labels['tags.datadoghq.com/version']
  - name: DD_AGENT_HOST
    valueFrom:
      fieldRef:
        apiVersion: v1
        fieldPath: status.hostIP
  - name: DD_ENTITY_ID
    valueFrom:
      fieldRef:
        apiVersion: v1
        fieldPath: metadata.uid

@meggers

meggers commented Sep 17, 2023

+1 for this!

I think the only workaround for this right now is to mount the secret as a volume.
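
That volume workaround might look like the sketch below, reusing the minio secret from the issue description. The container name and mount path are assumptions for illustration:

```yaml
# Illustrative pod spec fragment: expose a Kubernetes secret as files
# instead of env vars. Secret name "minio" comes from the issue example;
# the mount path is an assumed placeholder.
spec:
  containers:
    - name: prefect-job
      volumeMounts:
        - name: minio-credentials
          mountPath: /etc/minio-credentials  # assumed path
          readOnly: true
  volumes:
    - name: minio-credentials
      secret:
        secretName: minio
```

With this, each secret key (e.g. root-user, root-password) appears as a file under the mount path, and the application reads the files rather than environment variables.
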

4 participants