
[werft] Install gitpod in k3s ws cluster #4664

Merged — princerachit merged 2 commits into main from prs/install-gitpod-k3s on Jul 8, 2021

Conversation

@princerachit (Contributor) commented Jun 30, 2021

What?

This PR broadly does two things:

  1. Disable support for launching a ws cluster separately in the core-dev project's dev-cluster preview env. This was a complex procedure that required creating two branches and having them rely on one another.
  2. Add support for launching the ws components in a separate k3s cluster.

How?

The code has been refactored so that if you provide the werft flag k3s-ws, the deployment occurs in two different clusters:

Dev cluster

The meta component will be deployed along with the ws components in the dev cluster. However, the static config which used to add the self cluster as a workspace target is disabled, so the meta component only acts as the meta cluster.

K3s ws cluster

A workspace cluster deployment will occur in a dedicated namespace in the k3s cluster (similar to the dev cluster). In this deployment we use an external IP for ws-proxy, hence there is no ingress involved when accessing a workspace. The external IP is created by werft using a gcloud command; I have added the relevant create, get and delete permissions to the gitpod-deployer SA.

The static external IP that is created is named the same as the namespace.
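
As a rough illustration of the kind of gcloud calls involved, the sketch below reserves and looks up such a static address; a regional address is assumed here, and the namespace and region values are placeholders rather than the exact ones used by the werft job:

```bash
# Reserve a static external IP named after the preview namespace.
# NAMESPACE and REGION are illustrative placeholders.
NAMESPACE="staging-my-branch"
REGION="europe-west1"

gcloud compute addresses create "${NAMESPACE}" --region "${REGION}"

# Look up the reserved address so it can be wired into the ws-proxy service.
gcloud compute addresses describe "${NAMESPACE}" \
  --region "${REGION}" --format='value(address)'

# Clean-up uses the matching delete permission mentioned above:
# gcloud compute addresses delete "${NAMESPACE}" --region "${REGION}" --quiet
```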

Registration

Once the deployments have succeeded, we explicitly build the gpctl binary and then use it to register our workspace cluster to the meta cluster.

I have introduced the subdomain *.ws-k3s. for the k3s ws cluster. For the meta (i.e. dev) cluster the subdomain *.ws-dev. remains the same.
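
For context, a minimal sketch of what this registration step could look like, assuming gpctl is built from the repo and pointed at the ws cluster's ws-manager endpoint; the build path, flag names and URL below are illustrative assumptions, not the exact werft invocation:

```bash
# Build the gpctl binary from source (path is an assumption for illustration).
go build -o /tmp/gpctl ./dev/gpctl

# Register the k3s workspace cluster with the meta cluster.
# Cluster name and URL are hypothetical placeholders.
NAMESPACE="staging-my-branch"
/tmp/gpctl clusters register \
    --name "${NAMESPACE}" \
    --url "wss://ws-manager.${NAMESPACE}.ws-k3s.gitpod-dev.com:443"
```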

What's next

Unlike the deployments of dev cluster, the k3s cluster deployments are not cleaned up. I will raise another PR for this.

--werft k3s-ws
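
The same annotation can also be passed when starting a job manually from the werft CLI, as done later in this thread:

```bash
# Trigger the preview build with the k3s-ws flag enabled
werft run github -a k3s-ws=true
```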

@princerachit (Contributor, Author) commented Jun 30, 2021

/werft run

👍 started the job as gitpod-build-prs-install-gitpod-k3s.4

@princerachit (Contributor, Author) commented Jun 30, 2021

/werft run

👍 started the job as gitpod-build-prs-install-gitpod-k3s.5

@princerachit princerachit changed the title [wip][werft] Install gitpod in k3s ws cluster [werft] Install gitpod in k3s ws cluster Jun 30, 2021
@princerachit princerachit marked this pull request as ready for review June 30, 2021 13:17
@csweichel csweichel self-requested a review June 30, 2021 15:55
@csweichel (Contributor)

Prior to merging this PR I'd love to see it in action, i.e. have the aforementioned change in "Gitpod infra repo" merged/applied.
Also, I'm wondering how this change solves the DNS/ingress issue with the workspace cluster.

Lastly, I'm a bit surprised to see this hybrid workspace cluster approach. It strikes me that the design we had originally discussed (DNS entries for all preview-environments, just change kube context for k3s installation) is

  • easier to implement because it involves fewer exceptions: just choose the context you want to deploy in, the rest remains the same
  • solves the ingress problem** for all preview environments
  • paves the path for optional "one cluster per preview-env" deployments.

(** the ingress problem: because all our traffic goes through the core-dev ingress we're not seeing the same behaviour in core-dev as we'd see in prod, because the additional nginx imposes its own behaviour. We need this core-dev ingress nginx for some paths like authentication or payment, but the vast majority of requests doesn't).

@princerachit (Contributor, Author) commented Jul 1, 2021

> Prior to merging this PR I'd love to see it in action, i.e. have the aforementioned change in "Gitpod infra repo" merged/applied.

I am manually editing the labels to check if the pods get scheduled. Will update the repo post that.

> Also, I'm wondering how this change solves the DNS/ingress issue with the workspace cluster.
>
> Lastly, I'm a bit surprised to see this hybrid workspace cluster approach. It strikes me that the design we had originally discussed (DNS entries for all preview-environments, just change kube context for k3s installation) is
>
>   • easier to implement because it involves fewer exceptions: just choose the context you want to deploy in, the rest remains the same
>   • solves the ingress problem** for all preview environments
>   • paves the path for optional "one cluster per preview-env" deployments.
>
> (** the ingress problem: because all our traffic goes through the core-dev ingress we're not seeing the same behaviour in core-dev as we'd see in prod, because the additional nginx imposes its own behaviour. We need this core-dev ingress nginx for some paths like authentication or payment, but the vast majority of requests doesn't).

ATM we only have the workspace module for the gitpod installation. Deploying only the ws components on the k3s cluster will give us the same behaviour that we anticipate on the staging/production clusters.

The core-dev preview env makes use of an ingress to route traffic to the ws components. We want to do the same here. The k3s cluster has an ingress which will route traffic directly to the ws components.

About having one cluster per preview env: we need to test deploying the meta components on a k3s cluster. It is possible to try that out here, but it makes it more complex to figure out whether a problem is due to how we have set up the preview env or due to an inherent compatibility issue between k3s and the meta components.

@csweichel (Contributor) commented Jul 1, 2021

> > Prior to merging this PR I'd love to see it in action, i.e. have the aforementioned change in "Gitpod infra repo" merged/applied.
>
> I am manually editing the labels to check if the pods get scheduled. Will update the repo post that.

🙏

> ATM we only have the workspace module for the gitpod installation.

I'm not sure I follow. Does that affect more than the labels on the node pool?

> Deploying only the ws components on the k3s cluster will give us the same behaviour that we anticipate on the staging/production clusters.

That is a good point indeed, yet it adds considerable complexity in core-dev/during development and also increases the effort to get k3s working in core-dev.

> The core-dev preview env makes use of an ingress to route traffic to the ws components. We want to do the same here. The k3s cluster has an ingress which will route traffic directly to the ws components.

My point exactly: we do not want that ingress where it isn't strictly necessary (everywhere except for auth and payment). We don't run it in staging/prod and it has caused many a problem in the past.

> About having one cluster per preview env: we need to test deploying the meta components on a k3s cluster. It is possible to try that out here, but it makes it more complex to figure out whether a problem is due to how we have set up the preview env or due to an inherent compatibility issue between k3s and the meta components.

Meta doesn't really care what Kubernetes cluster it runs on. In fact all meta components would do just fine without Kubernetes to begin with.

@princerachit princerachit requested a review from a team as a code owner July 2, 2021 08:57
@princerachit (Contributor, Author)

@csweichel

> I'm not sure I follow. Does that affect more than the labels on the node pool?

and

> Meta doesn't really care what Kubernetes cluster it runs on. In fact all meta components would do just fine without Kubernetes to begin with.

I tried the meta installation on k3s and did encounter some issues w.r.t. volume mounts (e.g. minio). I am skeptical about building an env which is not close to what we have in prod/staging.

> My point exactly: we do not want that ingress where it isn't strictly necessary (everywhere except for auth and payment). We don't run it in staging/prod and it has caused many a problem in the past.

Ack. I will get rid of the ingress in the k3s ws cluster. I did in fact face issues while doing this.

> ...yet it adds considerable complexity in core-dev/during development and also increases the effort to get k3s working in core-dev...

I have made slow progress because of some complexity, but I believe this complexity needs to be solved only once. This gets us closer to a prod-like setup, so solving it would be a good idea IMHO.

@princerachit (Contributor, Author) commented Jul 5, 2021

/werft run

👍 started the job as gitpod-build-prs-install-gitpod-k3s.54

@princerachit princerachit marked this pull request as draft July 5, 2021 07:13
@princerachit princerachit marked this pull request as ready for review July 6, 2021 09:14
@meysholdt (Member)

I made a copy of this branch (for testing purposes; name is me/k3s) and Werft failed with the following error:

waiting for preview env namespace being re-created...
copying certificate from "certs/staging-me-k3s" to "staging-me-k3s/proxy-config-certificates"
Error: export KUBECONFIG= && kubectl get secret staging-me-k3s --namespace=certs -o yaml | yq d - 'metadata.namespace' | yq d - 'metadata.uid' | yq d - 'metadata.resourceVersion' | yq d - 'metadata.creationTimestamp' | sed 's/staging-me-k3s/proxy-config-certificates/g' | kubectl apply --namespace=staging-me-k3s -f - exit with non-zero status code

@meysholdt (Member)

When I ran werft run github -a k3s-ws=true, I got:

waiting for preview env namespace being re-created...
copying certificate from "certmanager/staging-me-k3s" to "staging-me-k3s/proxy-config-certificates"
Error: export KUBECONFIG=/workspace/k3s-external.yaml && kubectl get secret staging-me-k3s --namespace=certmanager -o yaml | yq d - 'metadata.namespace' | yq d - 'metadata.uid' | yq d - 'metadata.resourceVersion' | yq d - 'metadata.creationTimestamp' | sed 's/staging-me-k3s/proxy-config-certificates/g' | kubectl apply --namespace=staging-me-k3s -f - exit with non-zero status code

and later during helm install:

secret/proxy-config-certificates configured
Error from server (NotFound): secrets "staging-me-k3s" not found
error: error validating "STDIN": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false

@princerachit (Contributor, Author)

> When I ran werft run github -a k3s-ws=true, I got:
>
> waiting for preview env namespace being re-created...
> copying certificate from "certmanager/staging-me-k3s" to "staging-me-k3s/proxy-config-certificates"
> Error: export KUBECONFIG=/workspace/k3s-external.yaml && kubectl get secret staging-me-k3s --namespace=certmanager -o yaml | yq d - 'metadata.namespace' | yq d - 'metadata.uid' | yq d - 'metadata.resourceVersion' | yq d - 'metadata.creationTimestamp' | sed 's/staging-me-k3s/proxy-config-certificates/g' | kubectl apply --namespace=staging-me-k3s -f - exit with non-zero status code
>
> and later during helm install:
>
> secret/proxy-config-certificates configured
> Error from server (NotFound): secrets "staging-me-k3s" not found
> error: error validating "STDIN": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false

I think this is because the cert is taking time to get created. I am adding a wait to see if that is in fact the case.
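
A minimal sketch of such a wait, assuming the certificate is a cert-manager Certificate resource named after the preview namespace (resource name, namespace and timeout below are illustrative placeholders):

```bash
# Block until the cert-manager Certificate reports Ready, so the resulting
# secret exists before it is copied into the preview namespace.
# Names and timeout are placeholders, not the job's actual values.
kubectl wait --for=condition=Ready \
  certificate/staging-me-k3s \
  --namespace=certmanager \
  --timeout=300s
```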

@princerachit (Contributor, Author) commented Jul 6, 2021

/werft run

👍 started the job as gitpod-build-prs-install-gitpod-k3s.119

@princerachit (Contributor, Author)

I have fixed all the issues w.r.t. the circular dependency. I have also tested it in a separate branch, which passed in one go: https://werft.gitpod-dev.com/job/gitpod-build-prs-test-2.0/results

I was able to create a ws too.

@princerachit (Contributor, Author)

I have rebased the branch and tested both the flag-enabled and flag-disabled cases. Refer to https://werft.gitpod-dev.com/job/gitpod-build-prs-rebased-k3s.1/raw and https://werft.gitpod-dev.com/job/gitpod-build-prs-rebased-k3s.0/raw

@princerachit (Contributor, Author)

[screenshot]

@meysholdt (Member) left a review comment

Tried it and it worked for me.

NIT: I needed to run the job twice, because on the first run ws-manager-bridge wasn't ready for the gpctl clusters register. Since you (@princerachit) said you want to fix that in a follow-up PR, I'm approving this one.

@princerachit princerachit merged commit 9a0afff into main Jul 8, 2021
@princerachit princerachit deleted the prs/install-gitpod-k3s branch July 8, 2021 09:29
MatthewFagan pushed a commit to trilogy-group/gitpod that referenced this pull request Dec 5, 2021
…ate k3s cluster (gitpod-io#4664)

* Support workspace deployment in a separate k3s cluster using flag k3s-ws