
Conversation

@jsiebens
Contributor

Signed-off-by: Johan Siebens johan.siebens@gmail.com

@jsiebens jsiebens force-pushed the post_argocd_inlets branch from 9137968 to 8e1bb06 Compare May 19, 2021 11:08
@jsiebens jsiebens requested a review from alexellis May 19, 2021 11:12

The biggest challenge lies in the communication between Argo CD and the Kubernetes API servers of your highly secured private clusters.

![argocd](/images/2021-05-18-argocd-private-clusters/diagram.png)
Member

DOK should be DOKS, or perhaps "Kubernetes on Digital Ocean"?

Could you centralise the image too.

@alexellis
Member

I noticed the term "CI/CD cluster", but I think there might be a less confusing term. ArgoCD just does CD, and it's likely CI would be done somewhere else like Google Cloud Build or GitHub Actions.

How about the term "Control cluster" or "Central cluster", or even better "Management cluster"? Then that terminology should be used consistently across the doc and also in the diagram(s)


Two of the most popular open-source projects for GitOps are Flux, created at Weaveworks, and Argo CD, created by Intuit, an American payroll company. Both projects were donated to the Cloud Native Computing Foundation (CNCF) to encourage broader use and contributions.

As with our [monitoring use case](https://inlets.dev/blog/2020/12/15/multi-cluster-monitoring.html), there are many reasons you may have multiple Kubernetes clusters to which you want to deploy applications the GitOps way. What's more, some of those target clusters may be running in a tightly controlled private network.
Member

I like the term "target cluster" - that works very well with "management cluster" being the opposite term.


In this post, you'll learn how to apply GitOps to multiple private Kubernetes clusters with a single Argo CD installation.

## Continuous Deployment on Kubernetes with GitOps
Member

This whole section is a lot for a reader to get through.

What if you started with a brief introduction about what the reader will get in some bullets before going into depth about GitOps?

EOF
```

> This example creates a client with a single YAML configuration. There is also a Helm [chart](https://github.com/inlets/inlets-pro/tree/master/chart/inlets-pro-client) available for more advanced usage.
Member

A helm chart is also provided for the inlets-pro client for easy configuration.
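For readers who want to try the chart route, a minimal sketch — the release name, namespace and values file below are placeholders; the actual parameters are defined in the chart's own README and values.yaml:

```bash
# Fetch the repository that ships the inlets-pro client chart
git clone https://github.com/inlets/inlets-pro

# Install the client from the local chart path, supplying your own
# values (tunnel server URL, token, upstream) in a values file
helm install inlets-client ./inlets-pro/chart/inlets-pro-client \
  --namespace inlets \
  --create-namespace \
  -f my-values.yaml
```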


## Adding the cluster to Argo CD

With the tunnel up and running, we can now register the private cluster with Argo CD.
Member

The terminology used earlier was "target cluster"


Typically, you would add a cluster using the Argo CD CLI command `argocd cluster add CONTEXTNAME`, where the context name is a context available in your current kubectl config. That command installs a ServiceAccount (`argocd-manager`) into the `kube-system` namespace of that kubectl context, and binds the service account to an admin-level ClusterRole.
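For reference, the standard flow against a directly reachable cluster looks something like this (the context name below is a placeholder):

```bash
# Registers the cluster behind the given kubeconfig context:
# Argo CD creates the argocd-manager ServiceAccount in kube-system,
# binds it to an admin-level ClusterRole, and stores the cluster
# endpoint plus credentials on the management cluster.
argocd cluster add my-private-cluster
```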

Unfortunately, adding a cluster this way will fail in our scenario. Besides creating the service account in the target cluster, the command will also try to register the cluster in Argo CD with the endpoint in your context and will validate whether Argo CD can communicate with the API server. It should be obvious the latter will not succeed.
Member

Just as a note: Unfortunately, adding a cluster this way will fail in our scenario. - I managed to get it working this way, with the inlets-client and also with your hosts file workaround.

Member

It should be obvious the latter will not succeed. - I would remove that



Luckily for us, we can configure everything in a declarative way.
Member

We can simulate the steps that the CLI takes to onboard a new cluster. (Insert brief summary of the steps that will follow)
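As a sketch of those steps — the names, namespace and tunnel address below are placeholders, and the admin-level binding stands in for the one the CLI would normally create:

```bash
# Step 1: on the target (private) cluster, create the service account
# and admin-level binding that `argocd cluster add` would otherwise set up.
kubectl apply -f - <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: argocd-manager
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: argocd-manager-role-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: argocd-manager
  namespace: kube-system
EOF

# Step 2: on the management cluster, register the target cluster with a
# Secret labelled for Argo CD. The server address points at the inlets
# tunnel instead of the unreachable private API server.
kubectl apply -f - <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: private-cluster
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: cluster
type: Opaque
stringData:
  name: private-cluster
  server: https://inlets-tunnel.example.com:6443
  config: |
    {
      "bearerToken": "<token for the argocd-manager service account>",
      "tlsClientConfig": {
        "insecure": false,
        "caData": "<base64-encoded CA certificate of the target cluster>"
      }
    }
EOF
```

The bearer token and CA data still have to be extracted from the target cluster and filled into the `config` block before applying the Secret.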

Signed-off-by: Johan Siebens <johan.siebens@gmail.com>
@jsiebens jsiebens force-pushed the post_argocd_inlets branch from 8e1bb06 to 14f3e0a Compare May 20, 2021 05:31
@jsiebens jsiebens requested a review from alexellis May 20, 2021 05:58
alexellis pushed a commit that referenced this pull request Jun 2, 2021
* Minor edits to the introduction, added links and changed
the date to June.
* Closes: #26

Signed-off-by: Johan Siebens <johan.siebens@gmail.com>
Signed-off-by: Alex Ellis (OpenFaaS Ltd) <alexellis2@gmail.com>
@alexellis alexellis closed this in #27 Jun 2, 2021