
Authentication error for ArgoCD kustomization helmChart with private OCI repository using Azure Container Registry #16894

Open
marcusnh opened this issue Jan 17, 2024 · 8 comments


@marcusnh

Checklist:

  • I've searched in the docs and FAQ for my answer: https://bit.ly/argocd-faq.
  • I've included steps to reproduce the bug.
  • I've pasted the output of argocd version.

Describe the bug

We are experiencing a bug when creating an ArgoCD Application with a kustomization file through an ArgoCD ApplicationSet. We want to reference an external Helm chart in our Azure Container Registry (ACR) using the helmCharts generator in Kustomize. Below is our kustomization file:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
# Referencing a public repo outside the main repository

helmCharts:
  - name: <ARO-REPO-NAME>
    repo: oci://<ACR-NAME>.azurecr.io/<ARO-REPO-NAME>
    version: 0.1.6-5
    releaseName: <ARO-REPO-NAME>
    namespace: poseidon2-dev
    valuesFile: values.yaml

The error we receive in our ArgoCD controller is the following:

level=error msg="finished unary call with code Unknown" error="Manifest generation error (cached): `kustomize build <path to cached source>/applicationsets/dev/demo-helm-2 --enable-helm` failed exit status 1: Error: Error: failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://<ACR-NAME>.azurecr.io/oauth2/token?scope=repository%!A(MISSING)<ARO-REPO-NAME>%!F(MISSING)<ARO-REPO-NAME>%!A(MISSING)pull&service=<ACR-NAME>.azurecr.io: 401 Unauthorized\n: unable to run: 'helm pull --untar --untardir <path to cached source>/applicationsets/dev/demo-helm-2/charts oci://<ACR-NAME>.azurecr.io/<ARO-REPO-NAME>/<ARO-REPO-NAME> --version 0.1.6-5' with env=[HELM_CONFIG_HOME=/tmp/kustomize-helm-3509023410/helm HELM_CACHE_HOME=/tmp/kustomize-helm-3509023410/helm/.cache HELM_DATA_HOME=/tmp/kustomize-helm-3509023410/helm/.data] (is 'helm' installed?): exit status 1" grpc.code=Unknown grpc.method=GenerateManifest grpc.service=repository.RepoServerService grpc.start_time="2024-01-17T09:44:38Z" grpc.time_ms=2.234 span.kind=server system=grpc

There seems to be a problem connecting to the ACR, even though we have created a secret with access credentials and passed it to the ArgoCD instance.
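For reference, the secret we created follows the standard Argo CD repository-credential format for Helm OCI registries (all values below are placeholders):

apiVersion: v1
kind: Secret
metadata:
  name: acr-helm-repo
  namespace: gitops-developers # argocd instance namespace
  labels:
    argocd.argoproj.io/secret-type: repository
stringData:
  name: <ARO-REPO-NAME>
  type: helm
  url: <ACR-NAME>.azurecr.io
  enableOCI: "true"
  username: <ACR-SERVICE-PRINCIPAL-ID>
  password: <ACR-SERVICE-PRINCIPAL-PASSWORD>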
When creating an ArgoCD Application with the same setup, it works fine:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: test-<ARO-REPO-NAME>
  namespace: gitops-developers # argocd instance namespace
spec:
  source:
    chart: <ARO-REPO-NAME>
    repoURL: <ACR-NAME>.azurecr.io
    targetRevision: 0.1.6-5
    helm:
      values: |
        application_name: "<ARO-REPO-NAME>-test"
        namespace: <ARO-REPO-NAME>-test

  destination:
    namespace: <ARO-REPO-NAME>-test
    server: https://kubernetes.default.svc
  project: <ARO-REPO-NAME>-test
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
      allowEmpty: true
    # syncOptions:
    #   - Replace=true

To Reproduce

To reproduce the error, create an ArgoCD ApplicationSet with the following configuration:

apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  annotations:
    argocd.argoproj.io/sync-options: SkipDryRunOnMissingResource=true
  labels:
    app.kubernetes.io/instance: <ARO-REPO-NAME>
  name: <ARO-REPO-NAME>-test
  namespace: gitops-developers

spec:
  generators:
  - git:
      directories:
      - path: applicationsets/test/*
      repoURL: MY-REPO-WHERE-KUSTOMIZATION-IS-DEFINED
      revision: HEAD
  template:
    metadata:
      name: <ARO-REPO-NAME>-apps-{{path.basename}}
      namespace: gitops-developers
    spec:
      destination:
        namespace: <ARO-REPO-NAME>-test
        server: https://kubernetes.default.svc
      project: <ARO-REPO-NAME>
      source:
        path: applicationsets/test/{{path.basename}}
        repoURL: MY-REPO-WHERE-KUSTOMIZATION-IS-DEFINED
        targetRevision: HEAD
      syncPolicy:
        automated:
          allowEmpty: true
          prune: true
          selfHeal: true

Then create a kustomization file under applicationsets/test/<directory> (we used the kustomization definition shown above), and create a secret that gives access to the ACR.
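For completeness, Helm inflation in Kustomize also has to be enabled on the repo-server, which we have already done via the kustomize build options (visible as --enable-helm in the error above). In plain Argo CD this is the kustomize.buildOptions key in argocd-cm; with the OpenShift GitOps operator the corresponding option on the ArgoCD CR should achieve the same. A minimal sketch:

apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: gitops-developers # argocd instance namespace
  labels:
    app.kubernetes.io/part-of: argocd
data:
  kustomize.buildOptions: --enable-helm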

Expected behavior

That we would be able to sync the resources defined in the Helm chart.

Version

argocd: v2.9.2+c5ea5c4
  BuildDate: 2023-12-01T19:21:49Z
  GitCommit: c5ea5c4df52943a6fff6c0be181fde5358970304
  GitTreeState: clean
  GoVersion: go1.20.10
  Compiler: gc
  Platform: linux/amd64
  ExtraBuildInfo: {Vendor Information: Red Hat OpenShift GitOps version: v1.11.0}
@marcusnh
Author

After investigating the issue, it seems this feature is not yet available: it is not possible to use a private repository with the Kustomize Helm integration. This feature would need to be added; see the referenced issue.

@fandujar

@marcusnh in your case you can upgrade ArgoCD to 2.9.3, which adds support for OCI, but you will need to do some manual steps to inject the credentials.

@marcusnh
Author

Could you tell me which manual steps need to be done?
When using an ArgoCD Application, it is enough to use a Helm repository secret. Can we not do something similar with the kustomize helmCharts generator?

@fandujar

@marcusnh you can follow the manual steps that Paul described here #16623 (comment)
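Roughly, the idea (paraphrasing that comment, not quoting its exact steps) is to make the helm binary inside the repo-server aware of the registry credentials, for example by mounting a docker-style registry config into the argocd-repo-server pod and pointing Helm at it. A minimal sketch of that shape, written as a strategic-merge patch on the existing deployment; the secret name, mount path and the use of HELM_REGISTRY_CONFIG are my assumptions, not a verified recipe:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: argocd-repo-server
spec:
  template:
    spec:
      containers:
        - name: argocd-repo-server
          env:
            - name: HELM_REGISTRY_CONFIG # helm reads OCI registry auth from this file
              value: /helm-registry/config.json
          volumeMounts:
            - name: helm-registry-config
              mountPath: /helm-registry
              readOnly: true
      volumes:
        - name: helm-registry-config
          secret:
            secretName: acr-registry-config # docker-config style "auths" entry for <ACR-NAME>.azurecr.io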

Personally, I built a proxy for my private OCI repository and exposed it to ArgoCD.

@reginapizza
Contributor

@marcusnh have you been able to follow the steps from the comment above? If so, are they working for you, or are you still facing the same issues?

@marcusnh
Author

marcusnh commented Jan 30, 2024

@fandujar @reginapizza, we could not make it work, and I don't think it is a suitable solution. The comment from Paul might solve the problem, but it is not suitable for a production setup.
To get that solution to work, one has to modify the argocd-repo-server deployment. If the deployment is restarted, for instance during an ArgoCD upgrade, we would have to apply the same configuration again.
In addition, this approach does not scale to several OCI repositories: we do not know beforehand all the repositories that might be used. The credentials need to be configured together with the kustomization and helmChart config, not through a filesystem trick on the live ArgoCD deployment.

The current approaches, including manual filesystem changes or leveraging temporary credentials, are not viable for sustainable production use. These methods introduce significant challenges:

Security and Stability Risks: Manual interventions in the filesystem of a running container go against best practices for containerized environments, potentially compromising security and stability.

Lack of Persistence: Such changes are ephemeral and do not survive pod restarts, leading to additional maintenance overhead and potential downtime.

Scalability Concerns: For organizations utilizing multiple private OCI registries, managing individual configurations and credentials for each is neither scalable nor practical.

Credential Management: The reliance on continuously refreshing credentials, especially in environments like AWS ECR where tokens expire frequently, adds unnecessary complexity and potential points of failure.

We need a solution that integrates seamlessly with ArgoCD, providing a secure, scalable, and maintainable way to manage private OCI registries. This solution should ideally:

  1. Support native handling of multiple private OCI registries within ArgoCD.
    
  2. Automate credential management, potentially integrating with cloud-native solutions like AWS IAM roles and IRSA, or equivalent in other cloud environments.
    
  3. Ensure configurations are persistent and do not require manual intervention upon pod restarts or updates.
    
  4. Be well-documented and supported, aligning with the ArgoCD project's standards for production-ready features.
    

@fandujar

@marcusnh I totally agree with you.

@ArkShocer

Same problem here; would love to see this fixed in an upcoming release.
