
Support GitOps for cluster provisioning and configuration #35631

Open · janeczku opened this issue Nov 22, 2021 · 5 comments

Labels: internal, kind/enhancement (Issues that improve or augment existing functionality)

Comments

janeczku (Contributor) commented Nov 22, 2021

Use case:

Automate the provisioning and configuration of downstream clusters with declarative configuration (GitOps)

Expectation:

Clusters can be provisioned by applying YAML manifests to the management cluster Kubernetes API using GitOps agents such as Fleet, Flux, Argo CD, etc.
Cluster addons, projects, project members, and policies can be managed in the same way.

Eventually, the Rancher management server should be fully manageable in a GitOps fashion.
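
For illustration, a minimal sketch of what this could look like with Fleet, assuming a Git repository of cluster manifests (the repo URL, branch, and path below are hypothetical):

```yaml
# Hypothetical sketch: a Fleet GitRepo in the fleet-local namespace that
# syncs a directory of cluster manifests into the management cluster.
apiVersion: fleet.cattle.io/v1alpha1
kind: GitRepo
metadata:
  name: rancher-clusters
  namespace: fleet-local   # targets the local (management) cluster
spec:
  repo: https://github.com/example/rancher-clusters   # placeholder repo
  branch: main
  paths:
    - clusters   # directory holding the cluster YAML manifests
```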

Related issues:

janeczku added the kind/enhancement and internal labels on Nov 22, 2021
zbialik commented Jun 6, 2022

This would be extremely useful and fairly novel as far as the market goes. I don't think I've found a tool out there that lets you provision/configure/manage Kubernetes clusters in a GitOps fashion.

We rely on Argo CD for all our k8s deploys, including Rancher itself. It'd be really cool if we could start thinking about IaC in a GitOps-style fashion rather than a classic CI/CD pipeline.

I am pretty sure Rancher now manages all of the cluster/project/app configs via the suite of CRDs under cattle.io, so everything persists in etcd as custom resource objects, which is also how it provides HA.

With that said, the docs only show how to provision and import clusters using the UI, the API, or the new Terraform provider.

If behind the scenes all Rancher is doing is creating instances of these CRDs and then operating against those objects, then it should be possible for Rancher to document which CRDs must be created to build/configure a given thing.

This would allow folks like me to manage our clusters as objects in a Git repo and leverage GitOps tools to ultimately manage our infrastructure via a GitOps-style pattern.

  • e.g. baselining a clusters.management.cattle.io object in a Git repo, having Argo CD auto-apply it, and then Rancher recognizing the object and building the cluster in the cloud :)
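
To make that concrete, a hedged sketch of the Argo CD side (the Application name, repo URL, and path are hypothetical; the destination is the management cluster itself):

```yaml
# Hypothetical sketch: an Argo CD Application that continuously applies
# a directory of Rancher cluster objects to the management cluster.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: rancher-clusters
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/rancher-clusters.git   # placeholder
    targetRevision: main
    path: clusters
  destination:
    server: https://kubernetes.default.svc   # in-cluster, i.e. the management cluster
    namespace: fleet-default
  syncPolicy:
    automated:
      selfHeal: true
      prune: false   # safer not to auto-delete cluster objects
```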

c4m4 commented Jun 22, 2022

It would be nice to have documentation and examples for the Rancher CRDs.

dormullor commented

Any updates regarding this issue? We would like to provision clusters using YAML files (same as Crossplane offers).

X4mp commented May 15, 2023

There are some basic examples of how to achieve this via Cluster and NodeConfig CRDs here. You can either go the Helm route, configuring node templates in Rancher, or, as I did, deploy the CRDs directly to the fleet-default namespace of your Rancher management cluster (RMC).
It's poorly covered in the official docs as well, and if you run into constraints not addressed by the examples, it gets very complicated.

I can also recommend looking at these examples for some insight, but they are very limited as well.
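
As a rough starting point, this is the shape those objects take in Rancher v2.6+. All names and field values below are illustrative, and the machine-config CRDs (rke-machine-config.cattle.io) are generated per node driver, so verify the exact fields against your own RMC:

```yaml
# Illustrative only: an RKE2 cluster and the machine config it references,
# both in the fleet-default namespace. All names and values are placeholders.
apiVersion: rke-machine-config.cattle.io/v1
kind: Amazonec2Config
metadata:
  name: example-nodeconfig
  namespace: fleet-default
region: us-east-1          # driver fields sit at the top level, not under spec
instanceType: t3a.xlarge
---
apiVersion: provisioning.cattle.io/v1
kind: Cluster
metadata:
  name: example-cluster
  namespace: fleet-default
spec:
  # a cloud credential secret is also required in practice, e.g.:
  # cloudCredentialSecretName: cattle-global-data:cc-xxxxx
  kubernetesVersion: v1.24.4+rke2r1
  rkeConfig:
    machinePools:
      - name: pool1
        quantity: 3
        etcdRole: true
        controlPlaneRole: true
        workerRole: true
        machineConfigRef:
          kind: Amazonec2Config
          name: example-nodeconfig
```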

yuriy-yarosh commented

@X4mp Fleet is great, but there are much more popular alternatives. It would make more sense to build an abstract CD interface for OAM KubeVela, Argo CD, and Fleet, and add the respective operators to sync state between the different CD providers and project auth/authz policies.

I'm considering implementing a Rancher/Argo CD integration myself.
