
Helm lookup Function Support #5202

Open
rajivml opened this issue Jan 7, 2021 · 77 comments
Labels: enhancement (New feature or request), workaround (There's a workaround, might not be great, but exists)

Comments

@rajivml

rajivml commented Jan 7, 2021

Hello,

Happy new year, guys!!

So, I have a requirement to build the image path by reading the dockerRegistryIP value from a ConfigMap, so that I don't need to ask the user explicitly where the registry is located.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ template "helm-guestbook.fullname" . }}
spec:
  template:
    spec:
      containers:
        - name: {{ .Chart.Name }}
          image: {{ printf "%s/%s:%s" $dockerRegistryIP .Values.image.repository .Values.image.tag }}

Helm 3 introduced support for this via a lookup function, through which a ConfigMap can be read at runtime, like this:

{{ (lookup "v1" "ConfigMap" "default" "my-configmap").data.registryURL }}

But the lookup function returns nil when templates are rendered using "helm install --dry-run" or "helm template", so when you dereference a field on nil you will see an error like this:

"nil pointer evaluating interface {}.registryURL Use --debug flag to render out invalid YAML"

The solution proposed on Stack Overflow is to use "helm template --validate" instead of "helm template".
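
For completeness, a nil-safe variant avoids the render crash even without --validate (a minimal sketch, assuming the same ConfigMap and key as above):

{{- /* lookup returns nil when rendering offline, so guard before dereferencing */}}
{{- $cm := lookup "v1" "ConfigMap" "default" "my-configmap" }}
{{- $registryURL := "" }}
{{- if $cm }}
{{- $registryURL = $cm.data.registryURL }}
{{- end }}
image: {{ printf "%s/%s:%s" $registryURL .Values.image.repository .Values.image.tag }}

This renders without crashing, but under Argo CD the registry prefix still comes out empty, since lookup has no cluster to query.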

Can you add support for this?

Right now I'm populating the docker-registry-ip like this, but with this kustomize-plugin approach I'm losing the ability to render the values.yaml file as a config screen through which the user can override certain values, i.e. the fix for one issue has led to another:

kubectl -n argocd get cm argocd-cm -o yaml
apiVersion: v1
data:
  configManagementPlugins: |
    - name: kustomized-helm
      generate:
        command: [sh, -c]
        args: ["DOCKER_REG_IP=$(kubectl -n registry get svc registry -o jsonpath={.spec.clusterIP}) && sed -i \"s/DOCKER_REGISTRY_IP/$DOCKER_REG_IP/g\" kustomization.yaml | helm template $ARGOCD_APP_NAME --namespace $ARGOCD_APP_NAMESPACE . > all.yaml && kustomize build"]
    - name: kustomized
      generate:
        command: [sh, -c]
        args: ["DOCKER_REG_IP=$(kubectl -n registry get svc registry -o jsonpath={.spec.clusterIP}) && sed -i \"s/DOCKER_REGISTRY_IP/$DOCKER_REG_IP/g\" kustomization.yaml | kustomize build"]
@jessesuen
Member

Note that even if we allowed configuring Argo CD to append the --validate arg when running the helm template command, the repo-server would still need to be given API server credentials (i.e. mount a service account token) in order to perform the lookup. We would never grant Kubernetes credentials to the repo-server by default (though you are welcome to modify your deployment in your own environment), so there would not be much value in adding a --validate option configuration.

Since you would need a customized repo-server anyway, you can already accomplish this today using a wrapper script around the helm binary that appends the argument (coupled with the service account given to the repo-server).

@jessesuen added the workaround (There's a workaround, might not be great, but exists) label on Jan 7, 2021
@kvaps

kvaps commented Jan 7, 2021

@jessesuen, I guess this workaround is only possible with the in-cluster configuration and won't work for external clusters.

@jessesuen
Member

Ah yes, you are right about that unfortunately.

@Gowiem

Gowiem commented Feb 16, 2021

@jessesuen just coming across this issue and I'm running into the same. There is a duplicate issue here as well: #3640

You mentioned two things needed to work around Argo not supporting this:

  1. Add a service account for Argo + mount the token -- This is straightforward and would be easy to implement.
  2. "using a wrapper script around the helm binary which appends the argument in the script" -- This I don't really get.

Can you expand more on the wrapper script? How would one inject that into a standard Argo deployment?

@dvcanton

Hi @jessesuen, @Gowiem,

any updates on this?

Thanks in advance, Dave

@Gowiem

Gowiem commented Apr 14, 2021

@dvcanton I tried the Argo plugin / wrapper-script approach that @jessesuen mentioned, after asking about it directly in the Argo Slack. You can find out more by looking at the plugins documentation.

Unfortunately, that solution seemed overly hacky and pretty esoteric to me and my team. Instead, we've moved toward not using lookup in our charts and copy/pasting certain configuration manually. It's not great, and I wish Argo would support this, but there doesn't seem to be enough momentum, unfortunately.

@randrusiak

A lot of charts use built-in objects such as Capabilities to provide backward compatibility for old APIs. Capabilities.APIVersions works properly only with the --validate flag, because without it only API group versions are returned, not the available resources.
There is an example in the grafana chart: https://github.com/grafana/helm-charts/blob/main/charts/grafana/templates/ingress.yaml#L7
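
Condensed, the pattern that ingress template relies on looks roughly like this (a sketch, not the chart's exact code):

{{- if .Capabilities.APIVersions.Has "networking.k8s.io/v1/Ingress" }}
apiVersion: networking.k8s.io/v1
{{- else if .Capabilities.APIVersions.Has "networking.k8s.io/v1beta1/Ingress" }}
apiVersion: networking.k8s.io/v1beta1
{{- else }}
apiVersion: extensions/v1beta1
{{- end }}
kind: Ingress

Without --validate (or explicit --api-versions), only the group/version strings are available, so the resource-level Has checks are false and the template falls back to the oldest API.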

@kvaps

kvaps commented Apr 27, 2021

As for Capabilities, the helm template command supports setting capabilities manually, ref #3594

@randrusiak

randrusiak commented Apr 28, 2021

@kvaps take a look at the example I posted.
{{- $newAPI := .Capabilities.APIVersions.Has "networking.k8s.io/v1/Ingress" -}} returns false because without the --validate flag only API group versions are returned.

@kvaps

kvaps commented Apr 28, 2021

@randrusiak, it works for me:

# helm template . --set ingress.enabled=true --include-crds > /tmp/1.yaml
# helm template . --api-versions networking.k8s.io/v1/Ingress --set ingress.enabled=true --include-crds > /tmp/2.yaml
# diff -u /tmp/1.yaml /tmp/2.yaml
@@ -399,7 +399,7 @@
           emptyDir: {}
 ---
 # Source: grafana/templates/ingress.yaml
-apiVersion: extensions/v1beta1
+apiVersion: networking.k8s.io/v1
 kind: Ingress
 metadata:
   name: RELEASE-NAME-grafana
@@ -417,9 +417,12 @@
         paths:
 
           - path: /
+            pathType: Prefix
             backend:
-              serviceName: RELEASE-NAME-grafana
-              servicePort: 80
+              service:
+                name: RELEASE-NAME-grafana
+                port:
+                  number: 80

My idea was that Argo CD could provide the repo-server with the list of api-versions from the destination Kubernetes API, e.g.:

kubectl api-versions

will return all available API versions for the cluster.

I'm not sure lookup function support can be implemented with the same simplicity, as it requires direct access to the cluster.

@randrusiak

@kvaps I understand how it works at the helm level, but I don't know how to pass this additional flag --api-versions networking.k8s.io/v1/Ingress via the Argo manifest. It's still unclear to me. Could you explain it to me? I'd appreciate your help.

@kvaps

kvaps commented Apr 28, 2021

Actually, my note was aimed more at contributors than at users :)
They could implement passing api-versions from the argocd-application-controller to the argocd-repo-server via an API call.

At the current stage, I don't think there is anything you can do. The only workaround is to add a serviceAccount to your repo-server and use the --validate flag for helm template (you would need to create a small shell wrapper script for the helm command, or use a custom plugin). Unfortunately, this is less secure and would only work with a single cluster (the current one).

Another option is to hardcode those parameters somewhere, e.g. save the output of the following command:

kubectl api-versions | awk '{printf " --api-versions " $1 } END{printf "\n"}' 

And pass it to helm in whatever way suits you; e.g., you can still use a wrapper script for helm, something like this:

cat /usr/local/bin/helm
#!/bin/sh
exec /usr/local/bin/helm.bin $HELM_EXTRA_ARGS "$@"

where:

  • /usr/local/bin/helm.bin - the original helm binary
  • $HELM_EXTRA_ARGS - an extra environment variable for your repo-server

or use a custom plugin

@jgoeres

jgoeres commented May 7, 2021

We also ran into the issue with the non-working lookup function. The background is that we want to make sure that, for a certain service, a secure random password is generated instead of shipping a hardcoded default. If desired, the user can explicitly set their own password, but most people don't.
Since we are using Helm's random function, the password is newly generated with each helm upgrade (or helm template), so it would not remain stable. So we use the lookup function to check whether the secret holding the password already exists, and only generate the password if it doesn't. Effectively, it is only generated initially on "helm install".
With the non-working lookup function in Argo, the password is regenerated on each sync, wreaking quite some havoc, as you might guess.

Our goal is to keep the Helm chart usage as simple as possible and to require as few parameters as possible for simple installations, so I would like to keep "generate a secure (and stable) random password" as the default for "pure" Helm usage.
Is there a way to detect within the Helm chart that we are actually running inside Argo? That would allow me to react to this and add a validation that enforces explicitly setting a password in Argo-based deployments.
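
For reference, the pattern looks roughly like this in our chart (the secret and key names here are illustrative):

{{- /* reuse the existing password if the secret is already in the cluster */}}
{{- $existing := lookup "v1" "Secret" .Release.Namespace "my-app-credentials" }}
apiVersion: v1
kind: Secret
metadata:
  name: my-app-credentials
data:
  {{- if $existing }}
  password: {{ index $existing.data "password" }}
  {{- else }}
  password: {{ .Values.password | default (randAlphaNum 32) | b64enc }}
  {{- end }}

Under pure Helm this keeps the password stable across upgrades; under Argo, lookup returns nil, so the else branch runs on every sync.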

@krutsko

krutsko commented Oct 1, 2021

Any update on this issue?

@sarahhenkens

Ran into this same problem today :( more context in https://cloud-native.slack.com/archives/C01TSERG0KZ/p1635024460105000

@mosheavni
Contributor

The same thing happens with aws-load-balancer-controller's mutating webhook, which defines a TLS key, cert, and CA:
https://github.com/aws/eks-charts/blob/f4be91b5ae4a2959e821940a77d50dd0424841c1/stable/aws-load-balancer-controller/templates/_helpers.tpl
It can't reuse the previously defined keys if it can't access the cluster, producing an Argo app that's always out of sync.

@Phil0ctetes

Ran into this today :(

Do you have any timeline when the lookup will be available?

@Phil0ctetes

> Ran into this today :(
>
> Do you have any timeline when the lookup will be available?

It looks like you may have overlooked my question, so I wanted to ask again. Is there any plan to get this in, and when?

@richie-tt

I ran into the same issue, but maybe the lack of this function is a good reason to move to Flux v2, which already supports it:

fluxcd/helm-operator#335

@13013SwagR

FYI, this project provides a decent workaround https://github.com/kuuji/helm-external-val

@yinzara

yinzara commented Apr 22, 2024

Would love to see this land downstream soon. Giving this a friendly bump...

Sadly, there is no PR to look at yet to help move it along.

@farioas

farioas commented Apr 22, 2024

> I like this solution but this only works if you use Kustomize with helm. If we just want to use the normal helm plugin, is there any way to swap out the executable for a wrapper like this?

Sure, it's possible; a small hint can be found in the repo-server:

argocd@argocd-repo-server-xxx:~$ echo $PATH
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
argocd@argocd-repo-server-xxx:~$ which helm
/usr/local/bin/helm

The helm binary is located in the /usr/local/bin folder, while /usr/local/sbin has higher priority in $PATH, so you can put the script from my message in /usr/local/sbin/helm and make it executable, plus replace HELM_BIN=${HELM_BIN:-helm} with HELM_BIN=${HELM_BIN:-/usr/local/bin/helm} or just set the global env variable HELM_BIN=/usr/local/bin/helm.

@nivg1992

nivg1992 commented Apr 28, 2024

> > I like this solution but this only works if you use Kustomize with helm. If we just want to use the normal helm plugin, is there any way to swap out the executable for a wrapper like this?
>
> Sure, it's possible; a small hint can be found in the repo-server:
>
> argocd@argocd-repo-server-xxx:~$ echo $PATH
> /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
> argocd@argocd-repo-server-xxx:~$ which helm
> /usr/local/bin/helm
>
> The helm binary is located in the /usr/local/bin folder, while /usr/local/sbin has higher priority in $PATH, so you can put the script from my message in /usr/local/sbin/helm and make it executable, plus replace HELM_BIN=${HELM_BIN:-helm} with HELM_BIN=${HELM_BIN:-/usr/local/bin/helm} or just set the global env variable HELM_BIN=/usr/local/bin/helm.

@farioas The workaround works perfectly, thanks!
Additionally, I've added the necessary values to the original Helm chart for Argo CD. It's all up and running smoothly now.

The Helm values:

values:
  repoServer:
    volumes:
      - name: helm-replace
        configMap:
          name: config-map-helm-replace
          defaultMode: 0777
    volumeMounts:
      - name: helm-replace
        mountPath: /usr/local/sbin/helm
        subPath: helm
    env:
      - name: HELM_BIN
        value: /usr/local/bin/helm

The additional resources:

apiVersion: v1
kind: ConfigMap
metadata:
  name: config-map-helm-replace
data:
  helm: |-
    #!/bin/bash
    
    HELM_BIN=${HELM_BIN:-helm}
    
    new_args=()
    template_found=false
    
    # Walk the arguments and inject --dry-run=server right after the
    # "template" subcommand (only when it is the first argument), so
    # lookups resolve against the live cluster.
    for arg in "$@"; do
      if [[ "$arg" == "template" ]]; then
        template_found=true
        new_args+=("$arg")
      elif $template_found && [[ "${#new_args[@]}" -eq 1 ]]; then
        new_args+=("--dry-run=server" "$arg")
        template_found=false
      else
        new_args+=("$arg")
      fi
    done
    
    $HELM_BIN "${new_args[@]}"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: argocd-repo-server-access
rules:
- apiGroups: [""]
  resources: ["configmaps", "secrets"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: argocd-repo-server-access-binding
subjects:
- kind: ServiceAccount
  name: argocd-repo-server
  namespace: argocd
roleRef:
  kind: ClusterRole
  name: argocd-repo-server-access
  apiGroup: rbac.authorization.k8s.io

@Ben10k

Ben10k commented Apr 29, 2024

Thank you @farioas and @nivg1992. Your workaround actually solves the long-standing problem!
Let's hope this can finally be implemented upstream.

@Stevenpc3

@nivg1992 What is the purpose of the additional ClusterRole and ClusterRoleBinding in this helm wrapper workaround?

@nivg1992

nivg1992 commented Apr 30, 2024

When using the lookup function with --dry-run=server, Helm uses a Kubernetes service account to perform the lookup operation within the cluster, so the "argocd-repo-server" service account needs access to read the referenced resources in the relevant namespaces.

To address this, I've granted the "argocd-repo-server" service account access to read ConfigMaps and Secrets in all namespaces. Feel free to adapt this setup to suit your specific requirements.

@akessner

akessner commented May 1, 2024 via email

@adrianmiron

I think the main reason this is not a feature already is related to security, not the how-to itself. Still, it's a nice trick to work around it if you need this working now.

@crenshaw-dev
Collaborator

> I think the main reason this is not a feature already is related to security

Yep, granting the repo-server full k8s access is a non-starter upstream. We need a way for the user to safely grant the repo-server temporary access to only the necessary subset of resources. IMO, project-level service account impersonation (for which there is a WIP PR) would be a huge step towards unblocking that access.

I'll offer that, in my opinion, Helm lookups are a GitOps anti-pattern. But I do understand that we shouldn't make the perfect the enemy of the good.

@adrianmiron

> I'll offer that, in my opinion, Helm lookups are a GitOps anti-pattern. But I do understand that we shouldn't make the perfect the enemy of the good.

What other options are there, other than Helm templating with lookup, to grab values from ConfigMaps/Secrets and use them in resources like Ingress/Service (other than saving them in the values file, of course :) )? Maybe there's a better option out there already...

@FernandoMiguel

> > I'll offer that, in my opinion, Helm lookups are a GitOps anti-pattern. But I do understand that we shouldn't make the perfect the enemy of the good.
>
> What other options are there, other than Helm templating with lookup, to grab values from ConfigMaps/Secrets and use them in resources like Ingress/Service (other than saving them in the values file, of course :) )? Maybe there's a better option out there already...

We went with Terraform data sources, saving the outputs to git files, which are used as extra value files in Argo CD.
A bit opaque, but it works great for us.
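
Roughly, the generated file gets wired in like this (the repo, paths, and names here are illustrative, not our actual setup):

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
spec:
  project: default
  destination:
    server: https://kubernetes.default.svc
    namespace: my-app
  source:
    repoURL: https://github.com/example/deploys.git
    path: charts/my-app
    helm:
      valueFiles:
        - values.yaml
        - generated/cluster-values.yaml   # committed by the Terraform pipeline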

@adrianmiron

> We went with Terraform data sources, saving the outputs to git files, which are used as extra value files in Argo CD. A bit opaque, but it works great for us.

But that's like having your secret values in both the Terraform state and a values file, right? At that point, why not just save them directly in an extra values file?

@FernandoMiguel

> > We went with Terraform data sources, saving the outputs to git files, which are used as extra value files in Argo CD. A bit opaque, but it works great for us.
>
> But that's like having your secret values in both the Terraform state and a values file, right? At that point, why not just save them directly in an extra values file?

None of those are sensitive values.
They are configuration parameters.
Stuff like region, subnets, ACM cert ARNs, etc.
Things each cluster knows about itself, but that a centralized Argo CD instance doesn't have access to during templating.

@mtougeron

FWIW, since some use cases are being mentioned here: a use case we'd like the lookup function for is to create resources like VPCs using the AWS ACK controllers, use their ability to write a ConfigMap with the newly created resource IDs, and feed those values into other charts that need the VPC or subnet IDs (e.g., the ElastiCache chart).

@FernandoMiguel

> FWIW, since some use cases are being mentioned here: a use case we'd like the lookup function for is to create resources like VPCs using the AWS ACK controllers, use their ability to write a ConfigMap with the newly created resource IDs, and feed those values into other charts that need the VPC or subnet IDs (e.g., the ElastiCache chart).

That particular example seems like a much better fit for using labels and annotations on the VPC and subnets and letting other resources find them.

@mtougeron

> That particular example seems like a much better fit for using labels and annotations on the VPC and subnets and letting other resources find them.

Not all CRDs support looking up live resources from the cloud, nor the nice vpcRef sort of thing that the ACK controllers do; e.g., the cluster-api cluster resources need the VPC ID as a hardcoded string in the spec. :/

@crenshaw-dev
Collaborator

> What other options are there, other than Helm templating with lookup

For charts that only accept these things via values or lookups, there is no other option. I'd suggest that those charts be changed to support things like configMapRefs and secretRefs, so they're agnostic about how you populate the data. For secrets, there is already a great set of external secrets operators.

In general I think there are three ways to separate the concerns of app manifests from relevant external state:

  1. Write/use code to push the necessary external info into the app's git manifests repo (image tag updaters are an example)
  2. Write/use a mutating webhook to inject internal state on admission (some secrets injectors work this way, I think it could also work for stuff like ARNs)
  3. Write/use controllers to pull the necessary external state, and reference that state from the app's manifests (via config map refs, secret refs, etc.); a minimal sketch follows.
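
For option 3, a minimal illustration of what chart-side support could look like: the chart references state written by a controller instead of templating it in (all names here are hypothetical):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  template:
    spec:
      containers:
        - name: app
          env:
            - name: REGISTRY_URL
              valueFrom:
                configMapKeyRef:
                  name: cluster-info        # maintained by a controller, not by the chart
                  key: registryURL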

@mtougeron

> 1. Write/use code to push the necessary external info into the app's git manifests repo (image tag updaters are an example)

That's what we currently have in our setup. It works; it's just more brittle than a simple Helm lookup.

> 2. Write/use a mutating webhook to inject internal state on admission (some secrets injectors work this way, I think it could also work for stuff like ARNs)

Interesting idea, I hadn't thought of that. Not sure it'd work with Argo CD's self-healing, but it's definitely worth looking into.

> 3. Write/use controllers to pull the necessary external state, and reference that state from the app's manifests (via config map refs, secret refs, etc.)

ah how I wish all controllers worked this way. :)

@FernandoMiguel

> > 2. Write/use a mutating webhook to inject internal state on admission (some secrets injectors work this way, I think it could also work for stuff like ARNs)
>
> Interesting idea, I hadn't thought of that. Not sure it'd work with Argo CD's self-healing, but it's definitely worth looking into.

We do this with Kyverno. It was our approach after trying shared ConfigMaps.
It's not great: policies are complex, and not every chart/resource makes it easy.
It requires Argo CD diff ignores to avoid self-heal loops.
We actively use this for dynamic resources, like an AWS LB ARN as an external-dns target.

But for most of the things we used it for early on, we have replaced it with the Terraform approach I mentioned earlier.
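
For reference, the rough shape of such a policy (the namespaces, resource names, and annotation here are illustrative, not our actual policy):

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: inject-lb-target
spec:
  rules:
    - name: add-external-dns-target
      match:
        any:
          - resources:
              kinds:
                - Ingress
      context:
        # query the live Service at admission time for the LB hostname
        - name: lbsvc
          apiCall:
            urlPath: "/api/v1/namespaces/ingress-system/services/ingress-nginx"
      mutate:
        patchStrategicMerge:
          metadata:
            annotations:
              external-dns.alpha.kubernetes.io/target: "{{ lbsvc.status.loadBalancer.ingress[0].hostname }}"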
