
Create ArgoCD cluster with declarative setup using existing secret containing kubeconfig #4651

Open
MPV opened this issue Oct 23, 2020 · 26 comments
Labels
enhancement New feature or request

Comments

@MPV
Contributor

MPV commented Oct 23, 2020

Summary

Does declarative setup for ArgoCD already support creating new clusters using an existing secret containing a kubeconfig file?

If this isn't already supported: this is what I'm after.

Motivation

I'd like to use ArgoCD + Cluster-API for:

  1. Creating new clusters using Cluster-API + ArgoCD.
  2. Letting ArgoCD become aware of (get access to) those clusters.
  3. Letting ArgoCD deploy apps into its newly created clusters.

Proposal

Allowing argocd.argoproj.io/secret-type: cluster secrets to refer to another secret containing a kubeconfig (as part of/instead of its config section) and using it for reaching that target cluster.

I'm referring to this functionality:
https://argoproj.github.io/argo-cd/operator-manual/declarative-setup/#clusters

So instead of:

apiVersion: v1
kind: Secret
metadata:
  name: mycluster-secret
  labels:
    argocd.argoproj.io/secret-type: cluster
type: Opaque
stringData:
  name: mycluster.com
  server: https://mycluster.com
  config: |
    {
      "bearerToken": "<authentication token>",
      "tlsClientConfig": {
        "insecure": false,
        "caData": "<base64 encoded certificate>"
      }
    }

You could do this:

apiVersion: v1
kind: Secret
metadata:
  name: mycluster-secret
  labels:
    argocd.argoproj.io/secret-type: cluster
type: Opaque
stringData:
  name: mycluster.com
  server: https://mycluster.com
  config: |
    {
      "existingKubeconfigSecret": "mycluster-kubeconfig"
    }

Or:

apiVersion: v1
kind: Secret
metadata:
  name: mycluster-secret
  labels:
    argocd.argoproj.io/secret-type: cluster
type: Opaque
stringData:
  name: mycluster.com
  server: https://mycluster.com
  existingKubeconfigSecret: mycluster-kubeconfig
@MPV MPV added the enhancement New feature or request label Oct 23, 2020
@alexmt
Collaborator

alexmt commented Oct 23, 2020

It seems strange to reference a secret from another secret. Can you please clarify why you don't create an Argo CD-compatible secret? Do you need to share one secret with a kubeconfig between Argo CD and some other tool?

@alexmt alexmt added the more-information-needed Further information is requested label Oct 23, 2020
@MPV
Contributor Author

MPV commented Oct 23, 2020

It seems strange to reference a secret from another secret. Can you please clarify why you don't create an Argo CD-compatible secret? Do you need to share one secret with a kubeconfig between Argo CD and some other tool?

Yes, that kubeconfig secret is created/managed by another tool:

The new cluster is created dynamically from custom resources. After its creation, the "Cluster API" controller (which created the cluster) creates a new secret which contains just the kubeconfig.

@no-response no-response bot removed the more-information-needed Further information is requested label Oct 23, 2020
@alexmt
Collaborator

alexmt commented Oct 23, 2020

Thank you for the information @MPV ! So config is created by Cluster API.

In the past, we've made changes to get smooth integration with other open-source projects. E.g. we changed the secret keys to tls.crt/tls.key in order to simplify using cert-manager.

What do you think: is it possible to do something similar in this case? E.g. Argo CD might support a new secret key like kube.config, and "Cluster API" would create a secret with that field.

@MPV
Contributor Author

MPV commented Oct 24, 2020

I'm looking for something like what is shown in this video (which I already have working), but with the addition of automatically deploying apps into the newly created cluster, just after its creation:

https://youtu.be/9pFBgxDa7Aw

...and without needing to push more than one git commit, containing all of:

  • the cluster definition (in cluster API resources) for creating the cluster.
  • the ArgoCD cluster config/secret for the new cluster.
  • multiple ArgoCD apps to be deployed into the new cluster (as soon as it has been created and ArgoCD has been configured to be aware of it).

@MPV
Contributor Author

MPV commented Oct 24, 2020

What do you think: is it possible to do something similar in this case? E.g. Argo CD might support a new secret key like kube.config, and "Cluster API" would create a secret with that field.

@alexmt Yeah, let's investigate. I opened up a mirror issue over at kubernetes-sigs/cluster-api#3866

@janwillies
Copy link

I'd love to see this too. Currently you have to jump through hoops to make this work, e.g. https://github.com/janwillies/crossargo-sync, https://github.com/infracloudio/crossplane-secrets-controller, ...

It would be very convenient if Argo CD could accept a regular kubeconfig.

@MPV
Contributor Author

MPV commented Oct 26, 2020

For reference, this is what such a secret looks like (in the Cluster-API example):

$ kubectl -n mycluster get secret mycluster-1-kubeconfig -o yaml
apiVersion: v1
data:
  value: obfuscated-base-64-encoded
kind: Secret
metadata:
  creationTimestamp: "2020-10-23T14:52:51Z"
  labels:
    cluster.x-k8s.io/cluster-name: mycluster
  name: mycluster-1-kubeconfig
  namespace: mycluster
  ownerReferences:
  - apiVersion: controlplane.cluster.x-k8s.io/v1alpha3
    blockOwnerDeletion: true
    controller: true
    kind: KubeadmControlPlane
    name: mycluster-kubeadm-control-plane-1
    uid: aa9353fd-25b6-49b4-afa7-c7924191baa1
  resourceVersion: "779511"
  selfLink: /api/v1/namespaces/mycluster/secrets/mycluster-1-kubeconfig
  uid: <obfuscated>
type: Opaque

@MPV
Contributor Author

MPV commented Oct 26, 2020

#4600 talks about making ExecProvider available in the Cluster config.

Maybe it could become a way/workaround to solve this (by running something that gets the credentials from the existing secret and outputs them to stdout, as per the default ExecConfig behaviour).
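A hedged sketch of what that workaround might look like if #4600 lands, assuming the cluster secret's config gains an execProviderConfig section. Note that kubeconfig-to-execcredential is a hypothetical helper (not an existing tool) that would read the CAPI kubeconfig on stdin and print an ExecCredential JSON:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: mycluster-secret
  labels:
    argocd.argoproj.io/secret-type: cluster
type: Opaque
stringData:
  name: mycluster.com
  server: https://mycluster.com
  # Sketch only: "kubeconfig-to-execcredential" is hypothetical.
  config: |
    {
      "execProviderConfig": {
        "apiVersion": "client.authentication.k8s.io/v1beta1",
        "command": "sh",
        "args": [
          "-c",
          "kubectl -n mycluster get secret mycluster-1-kubeconfig -o jsonpath='{.data.value}' | base64 -d | kubeconfig-to-execcredential"
        ]
      }
    }
```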

@ncdc

ncdc commented Oct 26, 2020

Hi @alexmt! I'm one of the Cluster API maintainers. I'm curious about your statement:

What do you think: is it possible to do something similar in this case? E.g. Argo CD might support a new secret key like kube.config, and "Cluster API" would create a secret with that field.

Cluster API is currently creating a secret that contains a kubeconfig. However, this secret is only created after you declaratively request a new cluster (e.g. you create a Cluster, AWSCluster, AWSMachineTemplate, KubeadmControlPlane, KubeadmConfigTemplate, and MachineDeployment - this gets you a real cluster on AWS).

I think what @MPV is asking for is the ability for Argo CD to recognize, support, and use the Cluster API secret containing the kubeconfig. It sounds like you're perhaps ok with Argo CD supporting a secret containing a kubeconfig, based on what you wrote above? Would you be open to supporting the Cluster API kubeconfig secret? Argo CD would have to accept that the secret will not exist for a period of time, but it will eventually get created as Cluster API reconciles things.

@peterrosell

The task that we would like to automate is this step (https://argoproj.github.io/argo-cd/getting_started/#5-register-a-cluster-to-deploy-apps-to-optional)

argocd cluster add cluster-created-by-cluster-api

ArgoCD should monitor for secrets that are created by cluster-api. There is a label/annotation that can be used, and the secret name ends with -kubeconfig.
When it finds a matching secret, it should extract the info just like the CLI does.
When it notices a change or removal, it should remove the cluster from ArgoCD.

I guess it's possible to create a hack with the shell operator that does this, but it would of course be much nicer if ArgoCD had built-in support for this cluster-api kubeconfig secret.
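The manual equivalent of those steps can be sketched as follows (a sketch only, assuming the CAPI naming convention from this thread; it requires kubectl and a logged-in argocd CLI):

```shell
#!/bin/sh
# Sketch only: manually register a cluster-api-created cluster with Argo CD.
# Names follow the CAPI convention (<cluster-name>-kubeconfig) shown in this thread.
ns=mycluster
secret=mycluster-1-kubeconfig

# CAPI stores the kubeconfig base64-encoded under .data.value
kubectl -n "$ns" get secret "$secret" -o jsonpath='{.data.value}' \
  | base64 -d > /tmp/capi.kubeconfig

# Register the cluster using the context embedded in the decoded kubeconfig
ctx=$(KUBECONFIG=/tmp/capi.kubeconfig kubectl config current-context)
KUBECONFIG=/tmp/capi.kubeconfig argocd cluster add "$ctx" --yes
```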

@janwillies

Crossplane's kubeconfig secret looks like this:

apiVersion: v1
kind: Secret
type: connection.crossplane.io/v1alpha1
metadata:
  name: foo
  namespace: crossplane-system
  ownerReferences:
  - apiVersion: eks.aws.crossplane.io/v1beta1
    controller: true
    kind: Cluster
    name: foo
    uid: cd04c529-8f91-4a1a-a827-2f6138ec4087
data:
  clusterCA: LS0tL...
  endpoint: aH...
  kubeconfig: YXB...

@abdennour

I can see only two Argo CD CRDs in the list:

k get crd | grep arg
applications.argoproj.io                             2020-09-24T14:52:28Z
appprojects.argoproj.io                              2020-09-24T14:52:29Z

There is nothing called clusters.argoproj.io. We'd expect to have that.

@exocode

exocode commented Jan 6, 2022

Hi there, maybe someone can help me:

Is there a way to:

  1. create a cluster via `kubectl apply -f cluster.yml`
  2. "connect" or "point" a kubeconfig to the ArgoCD-created cluster? Something like kubeConfigSecretKeyRef: "cluster-details-my-cluster-kube-config"?

Details:

I have the ArgoCD server running and want to define a cluster without the CLI. I want to practice GitOps, so I'd like to declare my ArgoCD cluster config in Git.

In the CLI I could run argocd cluster add, but how do I do that with a Kubernetes manifest?

I didn't find out how to create that cluster declaratively. I found how to create Repositories and Projects, but nothing for something like kind: Cluster.

I am creating my clusters with Crossplane. Crossplane saves the kubeconfig of its created cluster in a Secret which looks like this:

apiVersion: v1
kind: Secret
metadata:
  name: cluster-details-my-cluster
  namespace: default
  uid: 50c7acab-3214-437c-9527-e66f1d563409
  resourceVersion: '12868'
  creationTimestamp: '2022-01-06T19:03:09Z'
  managedFields:
    - manager: crossplane-civo-provider
      operation: Update
      apiVersion: v1
      time: '2022-01-06T19:03:09Z'
      fieldsType: FieldsV1
      fieldsV1:
        f:data:
          .: {}
          f:kubeconfig: {}
        f:type: {}
  selfLink: /api/v1/namespaces/default/secrets/cluster-details-my-cluster
data:
  kubeconfig: >-
    YXBpVmVyc2lvbjogdjEKY2x1c3RlcnM6Ci0gY2x1c3RlcjoKICAgIGNlcnRpZmljYXRlLWF1dGhvcml0eS1kYXRhOiBMUzB0TFMxQ1JVZEpUaUJEUlZKVVNVWkpRMEZVUlMwdExTMHRDazFKU1VKbFJFTkRRVkl5WjBGM1NVSkJaMGxDUVVSQlMwSm5aM0ZvYTJwUFVGRlJSRUZxUVdwTlUwVjNTSGRaUkZaUlVVUkVRbWh5VFROTmRHTXlWbmtLWkcxV2VVeFhUbWhSUkVVeVRrUkZNRTlVVlROT1ZFbDNTR2hqVGsxcVNYZE5WRUV5VFZScmQwMXFUWGxYYUdOT1RYcEpkMDFVUVRCTlZHdDNUV3BOZVFwWGFrRnFUVk5_SHORTENED
type: Opaque

The data.kubeconfig content is a regular base64-encoded kubeconfig, so it's easy to decode, like this:

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJlRENDQVIyZ0F3SUJBZ0lCQURBS0JnZ3Foa2pPUFFRREFqQWpNU0V3SHd_SHORTENED
    server: https://MY.IP.TO.K8S:6443
  name: my-cluster
contexts:
- context:
    cluster: my-cluster
    user: my-cluster
  name: my-cluster
current-context: my-cluster
kind: Config
preferences: {}
users:
- name: my-cluster
  user:
    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJrVENDQVRlZ0F3SUJBZ0lJQS9adEZFT1Avcnd3Q2dZSUtvWkl6ajBFQXdJd0l6RWhNQjhHQTFVRUF3d1kKYXpOekxXTnNhV1Z1ZEMxallVQXhOalF4TkRrMU56VXlNQjRYRFRJeU1ERXdOakU1TURJek1sb1hEVEl6TURFdwpOakU1TURJek1sb3dNREVYT_SHORTENED
    client-key-data: LS0tLS1CRUdJTiBFQyBQUklWQVRFIEtFWS0tLS0tCk1IY0NBUUVFSUpJNlVhTDlLem9yL1VpdzlXK1NNUTAxV1BES2ZIK_SHORTENED
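For completeness, a sketch of how that secret can be pulled and decoded on the command line (secret name and namespace taken from the example above):

```shell
# Sketch: pull and decode the Crossplane connection secret shown above
kubectl -n default get secret cluster-details-my-cluster \
  -o jsonpath='{.data.kubeconfig}' | base64 -d > my-cluster.kubeconfig

# The API server endpoint can then be read out of the decoded file:
grep 'server:' my-cluster.kubeconfig | awk '{print $2}'
```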

Thank you very much in advance

@Moglum

Moglum commented Jan 7, 2022

@exocode I don't have a GitOps ready declarative solution for this - currently using Terraform + k8s provider talking to both clusters to do it - but I can point you in the right direction, maybe you'll find a more elegant way.

argocd cluster add does 2 things

  1. it creates a ServiceAccount, ClusterRole, and ClusterRoleBinding in the target cluster (it uses your local kubeconfig context to do that) and reads the BearerToken of this newly created ServiceAccount. That is used by ArgoCD to sync stuff into this cluster.
  2. it creates a Secret in the ArgoCD cluster which contains the k8s API server endpoint and the ServiceAccount token. - more about the syntax of the "cluster" secret here. That adds the Cluster to ArgoCD and it can be referenced from the UI, CLI, or by ArgoCD Application CRDs
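For reference, a minimal sketch of the RBAC objects step 1 creates in the target cluster. The names follow the argocd-manager convention the CLI uses; treat this as an approximation, since the CLI may differ in detail:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: argocd-manager
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: argocd-manager-role
rules:
- apiGroups: ["*"]
  resources: ["*"]
  verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: argocd-manager-role-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: argocd-manager-role
subjects:
- kind: ServiceAccount
  name: argocd-manager
  namespace: kube-system
```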

You can read a bit more about how to add a new cluster to ArgoCD without argocd CLI in this article

If you do solve it, please, let us know how.

@a1tan

a1tan commented Apr 11, 2022

I have implemented an experimental operator to synchronize generated secrets to Argo CD cluster secret. It has some way to go and I am not sure if this is the best way to do this. But you can give it a try if it helps. https://github.com/a1tan/argocdsecretsynchronizer

@the-technat
Contributor

the-technat commented Sep 17, 2022

From what I see, the easiest thing would be if the argocd-controller watched cluster-api's Cluster resource and added clusters created there (maybe with matching labels) to Argo CD, by fetching the kubeconfig secret that cluster-api creates and generating its own secret out of it.

The other option would be to extend cluster-api to write secrets in the form Argo CD accepts.

Would that be an enhancement that the Argo CD codebase would accept? It would mean a 'hard' integration with the way cluster-api behaves.

Or delegate it to the Argo CD Operator to generate the Argo CD cluster secrets based on the cluster-api secrets.

Other than that, I think adding the functionality wouldn't require a lot of effort, so I'd be happy to try contributing this myself when there is a need for it.

@alexellis
Contributor

@Moglum thanks for linking to the inlets blog post Learn how to manage apps across multiple Kubernetes clusters, where we gave an example of adding a remote cluster declaratively.

This doesn't seem to work with the newest change in K8s 1.24 where service accounts and tokens are not created in the same way as before.

The LegacyServiceAccountTokenNoAutoGeneration feature gate is beta, and enabled by default. When enabled, Secret API objects containing service account tokens are no longer auto-generated for every ServiceAccount. Use the TokenRequest API to acquire service account tokens, or if a non-expiring token is required, create a Secret API object for the token controller to populate with a service account token by following this guide. (kubernetes/kubernetes#108309, @zshihang)

https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.24.md#urgent-upgrade-notes

I'm not sure what the workaround is, but don't know if this PR is related? #9546

@alexellis
Contributor

Edit: Looks like we may have an alternative now for K8s 1.25:

#!/bin/bash
export TARGET_CTX=k3s-ontario-edge-1
export ARGOCD_CTX=argo-hub

cat <<EOF | kubectl apply --context $TARGET_CTX -n kube-system  -f -
apiVersion: v1
kind: Secret
metadata:
  name: argocd-manager-token
  namespace: kube-system 
  annotations:
    kubernetes.io/service-account.name: argocd-manager
type: kubernetes.io/service-account-token
EOF

name="argocd-manager-token"
token=$(kubectl get --context $TARGET_CTX -n kube-system secret/$name -o jsonpath='{.data.token}' | base64 --decode)
namespace=$(kubectl get --context $TARGET_CTX -n kube-system secret/$name -o jsonpath='{.data.namespace}' | base64 --decode)


cat <<EOF | kubectl apply --context $ARGOCD_CTX -n argocd -f -
apiVersion: v1
kind: Secret
metadata:
  name: ontario
  labels:
    argocd.argoproj.io/secret-type: cluster
type: Opaque
stringData:
  name: ontario-tunnel-inlets-pro-data-plane
  server: https://ontario-tunnel-inlets-pro-data-plane:443
  config: |
    {
      "bearerToken": "${token}",
      "tlsClientConfig": {
        "serverName": "kubernetes.default.svc"
      }
    }
EOF

@pixiono

pixiono commented Oct 17, 2022

Hi all,

I also have the use case of "automatically" adding clusters to ArgoCD that I provisioned with cluster-api, using the execProviderConfig from #4600.

I managed to get the token from the execProviderConfig (as in the GCP example).

The problem I have now is that I need to set server and certificate-authority-data, otherwise ArgoCD will not recognize the cluster.

For further testing, I built a local kubeconfig and tried to reproduce that behavior. For testing, I just use cat on a static credentials file:

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: <base64_encoded_cert>
#    server: <server> # removed for testing
  name: cluster1
contexts:
- context:
    cluster: cluster1
    user: cluster1-admin
  name: cluster1-admin@cluster1
current-context: cluster1-admin@cluster1
kind: Config
preferences: {}
users:
- name: cluster1-admin
  user:
    exec:
      command: "cat"
      args:
      - "credentials"
      apiVersion: "client.authentication.k8s.io/v1"
      interactiveMode: Never

My credentials file looks like this, following the k8s documentation:

{
  "apiVersion": "client.authentication.k8s.io/v1",
  "kind": "ExecCredential",
  "status": {
    "clientCertificateData": "-----BEGIN CERTIFICATE-----\n...\n-----END CERTIFICATE-----",
    "clientKeyData": "-----BEGIN RSA PRIVATE KEY-----\n...\n-----END RSA PRIVATE KEY-----"
  }
}

When I run this, I get the following message:

$ kubectl --kubeconfig kubeconfig.test get pods
The connection to the server localhost:8080 was refused - did you specify the right host or port?

It seems that you can't let the Kubernetes client get the hostname and certificate of the api-server from the exec plugin.

-> Based on this, I would say it is not possible to create a generic secret using only the execProviderConfig that would allow ArgoCD to automatically use a cluster.

Please let me know if you have other ideas.

@ron1

ron1 commented Oct 17, 2022

You might consider using a Kyverno Generate policy to create the ArgoCD cluster secrets as is done here specifically for Rancher CAPI.

@cilindrox
Contributor

Tangential, but in trying to mimic this behavior using Terraform, I noticed that by default the secret name seems to be cluster- plus the cluster's API endpoint without the protocol scheme, passed via metadata.generateName. On EKS at least, this creates a name that exceeds the 63 char limit, e.g.:

cluster-f00barbazfooquuxbazfoobarbazfoobaz.gr1.us-east-1.eks.amazonaws.com-12345678 - we can always use whatever secret name, but it seems odd that the default behavior can't be carried over.
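A quick way to see the overflow (the name below is the example value quoted above):

```shell
# Length of the default generated secret name from the example above
name='cluster-f00barbazfooquuxbazfoobarbazfoobaz.gr1.us-east-1.eks.amazonaws.com-12345678'
echo "${#name}"   # 83 characters, well over the 63-char limit
```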

@giepa

giepa commented Nov 18, 2022

We managed to achieve automated vcluster registration with a kyverno policy:

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: argo-cluster-generation-from-vcluster-secret
  annotations:
    policies.kyverno.io/title: Argo Cluster Secret Generation From VCluster secrets
    policies.kyverno.io/category: Argo
    policies.kyverno.io/severity: medium
    policies.kyverno.io/subject: Secret
    kyverno.io/kyverno-version: 1.7.1
    policies.kyverno.io/minversion: 1.7.0
    kyverno.io/kubernetes-version: "1.23"
    policies.kyverno.io/description: >-
      This policy generates and synchronizes Argo CD cluster secrets from VCluster secrets.
spec:
  generateExistingOnPolicyUpdate: true
  rules:
    - name: source-vc-secret
      match:
        all:
          - resources:
              kinds:
                - v1/Secret
              names:
                - "vc-*"
      exclude:
        all:
          - resources:
              kinds:
                - v1/Secret
              names:
                - "vc-*-token-*"
      context:
        - name: clusterName
          variable:
            value: "{{request.object.metadata.namespace}}-{{request.object.metadata.name}}"
            jmesPath: 'to_string(@)'
        - name: metadataLabels
          variable:
            value:
              argocd.argoproj.io/secret-type: cluster
              clusterId: "{{ clusterName }}"
        - name: kubeconfigData
          variable:
            jmesPath: 'request.object.data.config | to_string(@)'
        - name: serverName
          variable:
            value: "{{ kubeconfigData | base64_decode(@) | parse_yaml(@).clusters[0].cluster.server }}"
            jmesPath: 'to_string(@)'
        - name: caData
          variable:
            value: "{{ kubeconfigData | base64_decode(@) | parse_yaml(@).clusters[0].cluster.\"certificate-authority-data\" }}"
            jmesPath: 'to_string(@)'
        - name: keyData
          variable:
            value: "{{ kubeconfigData | base64_decode(@) | parse_yaml(@).users[0].user.\"client-key-data\" }}"
            jmesPath: 'to_string(@)'
        - name: certData
          variable:
            value: "{{ kubeconfigData | base64_decode(@) | parse_yaml(@).users[0].user.\"client-certificate-data\" }}"
            jmesPath: 'to_string(@)'
        - name: dataConfig
          variable:
            value: |
              {
                "tlsClientConfig": {
                  "insecure": false,
                  "caData": "{{ caData }}",
                  "keyData": "{{ keyData }}",
                  "certData": "{{ certData }}"
                }
              }
            jmesPath: 'to_string(@)'
      generate:
        synchronize: true
        apiVersion: v1
        kind: Secret
        name: "{{ clusterName }}"
        namespace: argocd
        data:
          metadata:
            labels:
                "{{ metadataLabels }}"
          type: Opaque
          data:
            name: "{{ clusterName | base64_encode(@) }}"
            server: "{{ serverName | base64_encode(@) }}"
            config: "{{ dataConfig | base64_encode(@) }}"

Thanks @ron1 for the rancher example

@22RC

22RC commented Feb 17, 2023

Hi everyone,
I am using CAPI to create clusters with Argo CD, and I ran into the same problem.

I think this problem can be solved using helm lookup function as follows:

{{- $secret := (lookup "v1" "Secret" .Release.Namespace (printf "%s-kubeconfig" .Values.cluster.name)) -}}
apiVersion: v1
kind: Secret
metadata:
  labels:
    argocd.argoproj.io/secret-type: cluster
  name: argocd-cluster-{{ .Values.cluster.name }}
  namespace: argocd
stringData:
  {{- if $secret.value }}
  ....
  # parse field in secret.value and add it to "argocd secret kubeconfig"
  ....
  {{- end }}
type: Opaque

But I haven't been able to get it to work yet.
IMO the problem is that Argo executes helm template ... | kubectl apply -f - to install the Helm chart.
As reported in the Helm documentation (https://helm.sh/docs/chart_template_guide/functions_and_pipelines/#using-the-lookup-function):

Keep in mind that Helm is not supposed to contact the Kubernetes API Server during a helm template or a helm install|upgrade|delete|rollback --dry-run, so the lookup function will return an empty list (i.e. dict) in such a case.

Did anyone manage to use the lookup function to solve this issue with Argo?

@prtkdave

Is there a way to provide a kubeconfig (either in secret format or base64 format) instead of a bearer token to register a cluster?

@maaft

maaft commented Nov 16, 2023

I'm currently trying the same:

  1. deploying Karmada (multi-cluster apiserver) to host-cluster
  2. Karmada generates secret with kubeconfig
  3. I want to instruct ArgoCD in a declarative way to use that existing secret for cluster registration

The use case here is to have as few manual steps as possible when bootstrapping a new app-of-apps cluster.

@joebowbeer
Contributor

joebowbeer commented Mar 31, 2024

Close this issue in favor of newer, more targeted issues?

Since this issue was created, several solutions have arisen:

The following issues are specifically about registration of CAPI clusters:

#9033
#9007
