
Queries on Crossplane, Unable to find GcePersistentDiskCsiDriver addon #244

Open
tvvignesh opened this issue May 15, 2020 · 4 comments

@tvvignesh

Hi. I was trying out Crossplane, but I could not find the GcePersistentDiskCsiDriver addon in the docs: https://doc.crds.dev/github.com/crossplane/provider-gcp/config/crd/container.gcp.crossplane.io_gkeclusters.yaml

The CSI driver is documented here: https://cloud.google.com/kubernetes-engine/docs/how-to/gce-pd-csi-driver

Also, while there is shieldedInstanceConfig, I cannot find an equivalent of --enable-shielded-nodes.

Since I am just trying out Crossplane, I have a few questions I thought I would ask. Sorry if they are naive.

What is the difference between writing a gcloud command like this and writing the same with GKEClusterClass in yaml? Isn't it easier to do it via scripts like the one below?

gcloud beta container clusters create ${CLUSTER_NAME} --project ${GCP_PROJECT} --zone ${CLUSTER_ZONE} \
--release-channel rapid --addons=GcePersistentDiskCsiDriver --enable-network-policy \
--disk-type pd-ssd --disk-size ${DISK_SIZE} --enable-autorepair --enable-autoupgrade \
--enable-stackdriver-kubernetes --enable-vertical-pod-autoscaling --image-type ubuntu_containerd \
--machine-type n1-standard-4 --max-surge-upgrade 1 --max-unavailable-upgrade 0 --num-nodes 5 \
--enable-autoscaling --enable-shielded-nodes --min-nodes ${MIN_NODES} --max-nodes ${MAX_NODES} \
--maintenance-window-start=2000-01-01T22:00:00Z --maintenance-window-end=2000-01-02T04:00:00Z --maintenance-window-recurrence=FREQ=DAILY \
--scopes default,bigquery,cloud-platform,compute-rw,datastore,storage-full,taskqueue,userinfo-email,sql-admin \
--metadata disable-legacy-endpoints=true --service-account ${SERVICE_ACCOUNT}@${GCP_PROJECT}.iam.gserviceaccount.com

Is there any downside to this? Even if I want to port this cluster to AWS, I have to create a separate YAML with AWS config, where I could essentially use the CLI offered by AWS instead.

The website says Crossplane is self-healing - isn't that a Kubernetes feature by itself? Does Crossplane do something on top?

Also, Crossplane talks about portable workloads. Aren't Kubernetes workloads portable by default? Or am I missing something?

So, my question here is: what is Crossplane doing for me in addition to what Kubernetes already does?

PS: I have read this document completely: https://docs.google.com/document/d/1whncqdUeU2cATGEJhHvzXWC9xdK29Er45NJeoemxebo/edit

Thanks.

@negz
Member

negz commented May 20, 2020

Hi @tvvignesh! It looks like @muvaf is updating our GCP provider with the functionality you need in #229, so I'll focus on addressing your questions around Crossplane's purpose.

What is the difference between writing a gcloud command like this and writing the same with GKEClusterClass in yaml? Isn't it easier to do it via scripts like the one below?

I believe what is easier differs from person to person, so whether the gcloud CLI or YAML is your preferred language for describing infrastructure is somewhat subjective. Functionality-wise, one key difference between using Crossplane and using a shell script is that Kubernetes will constantly drive the actual state of your infrastructure toward your desired state. It's similar to the difference between using docker run to run a container and using a Kubernetes Pod to run that container. We provide more automation.
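To make the reconciliation point concrete, a minimal sketch of the declarative side might look like the following. The apiVersion, kind, and field names are illustrative approximations of provider-gcp's GKECluster resource, not a verified schema - check the CRD reference linked above for the exact fields:

```yaml
# Hypothetical sketch -- apiVersion, kind, and spec fields are
# approximations of provider-gcp's schema, not verified against
# a specific release.
apiVersion: container.gcp.crossplane.io/v1beta1
kind: GKECluster
metadata:
  name: example-cluster
spec:
  forProvider:
    location: us-central1-a      # illustrative field names
    initialClusterVersion: latest
  writeConnectionSecretToRef:
    name: example-cluster-conn
    namespace: crossplane-system
```

Once applied, the provider's controller keeps comparing this desired state against the actual GKE cluster and corrects drift, whereas a gcloud script only runs once.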

We also provide what we consider a "separation of concerns" for teams with different roles. You're right that in Crossplane someone still needs to write an RDSInstanceClass (or in v0.11, a Composition) to switch from one cloud to another, but we allow that to be done by an infrastructure operator - for example an SRE. The folks actually running the apps can use portable abstractions (claims or requirements).
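To illustrate that separation of concerns, here is a hedged sketch of what the application-operator side looked like in early (v0.x) Crossplane. The group/version and field names are from memory and may not match a given release:

```yaml
# Hypothetical sketch -- names and fields are illustrative only.
# An application operator writes a portable claim; an infrastructure
# operator has already published the cloud-specific class it binds to.
apiVersion: compute.crossplane.io/v1alpha1
kind: KubernetesCluster
metadata:
  name: app-cluster
spec:
  classSelector:
    matchLabels:
      provider: gcp   # an SRE could repoint this at an AWS-backed class
  writeConnectionSecretToRef:
    name: app-cluster-conn
```

The claim never mentions GKE-specific flags; those live in the cloud-specific class (e.g. a GKEClusterClass) that the infrastructure operator maintains.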

The website says crossplane is self-healing - isn't it a kubernetes feature by itself? Does crossplane do something on top?

Right - we bring the self-healing functionality of Kubernetes controllers to managing infrastructure. This is one reason folks might prefer Crossplane to shell scripts.

Isn't Kubernetes workloads portable by default? Or am I missing something?

The pods themselves usually are, but their infrastructure dependencies might not be. For example, if my application uses a managed database and I want to switch from GCP to Azure (or vice versa), I need to go figure out how to switch from Cloud SQL to Azure SQL Server. As I mentioned above, Crossplane makes it simpler for application operators to do this in a self-service fashion by moving that responsibility over to infrastructure operators.

So, my question here is what is crossplane doing for me here in addition to what Kubernetes already does for me?

Essentially, it's taking the good things about Kubernetes and bringing them to managed cloud infrastructure. Does that make sense?

@tvvignesh
Author

@negz Thanks a lot for your elaborate reply to my questions. Really appreciate it.

While it does clarify them, and I am convinced Crossplane will add a lot of value for people who are not using K8s or who use a lot of managed services, I am still not convinced it helps when I have everything (including the DB, monitoring, etc.) self-hosted in my K8s cluster and the only managed services I use are GKE, Cloud DNS, and Cloud KMS.

One challenge I feel you will have is that since every cloud provider has its own constructs and conventions, you will have to stay constantly in sync with all of them, updating the YAML (e.g. GKEClusterClass, as in the case above) whenever they add, remove, or change flags/options. Even though the providers have been moved out as plugins, this might still be hectic to maintain for every cloud provider, creating a bottleneck.

The main portability challenge I feel people like me usually face when migrating across providers is storage. While the ecosystem is working on it, I feel it's still a pain.

Anyway, let me try it out, or else I will never know. Thanks again. I will get back to this thread after I try it out (once CSI support is added).

@negz
Member

negz commented Jun 30, 2020

I believe the main thing left to do here is close out #229, so I'm going to move this issue to provider-gcp.

@negz negz transferred this issue from crossplane/crossplane Jun 30, 2020
@prasek
Contributor

prasek commented Aug 6, 2020

One challenge I feel you will have is that since every cloud provider has its own constructs and conventions, you will have to stay constantly in sync with all of them, updating the YAML (e.g. GKEClusterClass, as in the case above) whenever they add, remove, or change flags/options. Even though the providers have been moved out as plugins, this might still be hectic to maintain for every cloud provider, creating a bottleneck.

@tvvignesh note we are exploring multiple code generation options for the providers, including crossplane/crossplane#262, and are looking at different metadata sources to generate from, so that should help on this front.
