
Use service of type LoadBalancer instead of Ingress #264

Closed
wants to merge 11 commits from the loadbalancer-virtual-kube-api-server branch

Conversation

@robertvolkmann (Contributor) commented Feb 22, 2024

This decision was made because Gardener denies all traffic in the garden namespace by default, so using an Ingress controller would require additional network policies.

This aligns with the gardener-operator's default behavior, as described in the official documentation:

> The virtual-garden-kube-apiserver Deployment is exposed via a Service of type LoadBalancer with the same name. In the future, we will switch to exposing it via Istio, similar to how the kube-apiservers of shoot clusters are exposed.

> For the virtual cluster, it is essential to provide a DNS domain via .spec.virtualCluster.dns.domain. The respective DNS record is not managed by gardener-operator and should be manually created and pointed to the load balancer IP of the virtual-garden-kube-apiserver Service. The DNS domain is used for the server in the kubeconfig, and for configuring the --external-hostname flag of the API server.

In order to align with Gardener, we remove the Ingress resource for the virtual garden kube-apiserver and instead expose the kube-apiserver directly through a Service of type LoadBalancer, which can be configured using the new `gardener_virtual_api_server_public_ip` role parameter. The DNS entry can be switched to the new IP address seamlessly.
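
For illustration, the rendered Service could look roughly like the sketch below. The service name and namespace follow the gardener-operator documentation quoted above, while the selector, ports, and IP address are placeholders rather than values taken from this PR:

```yaml
# Illustrative sketch only: selector labels, port numbers, and the IP
# address are placeholders, not values from this repository.
apiVersion: v1
kind: Service
metadata:
  name: virtual-garden-kube-apiserver
  namespace: garden
spec:
  type: LoadBalancer
  # Rendered from the new gardener_virtual_api_server_public_ip role
  # parameter; the existing DNS record can then be pointed at this IP.
  loadBalancerIP: 203.0.113.10
  selector:
    app: virtual-garden-kube-apiserver
  ports:
    - port: 443
      targetPort: 443
```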

@robertvolkmann requested a review from a team as a code owner on February 22, 2024 10:43
@robertvolkmann force-pushed the loadbalancer-virtual-kube-api-server branch from 42849c6 to 39ad105 on February 22, 2024 10:44
@robertvolkmann force-pushed the loadbalancer-virtual-kube-api-server branch from 69ce488 to a5dc635 on February 22, 2024 10:53
@Gerrit91 (Contributor) commented:

Needs rebase.

```yaml
{{- if .Values.kubeAPIServer.loadBalancerIP }}
loadBalancerIP: {{ .Values.kubeAPIServer.loadBalancerIP }}
{{- end }}
type: LoadBalancer
```
@Gerrit91 (Contributor):

Can we move the `type: LoadBalancer` into the if-condition? I would like to maintain the original behavior and not automatically acquire an IP address if someone forgets to set this.
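
A minimal sketch of that suggestion, assuming the surrounding `spec:` block of the Service template stays otherwise unchanged (this is not part of the actual diff):

```yaml
# Sketch only: both fields are rendered solely when loadBalancerIP is set,
# so the chart keeps its original Service type when the value is omitted.
{{- if .Values.kubeAPIServer.loadBalancerIP }}
type: LoadBalancer
loadBalancerIP: {{ .Values.kubeAPIServer.loadBalancerIP }}
{{- end }}
```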

@robertvolkmann (Contributor, Author):

We don't set the IP in our setup.

@robertvolkmann deleted the loadbalancer-virtual-kube-api-server branch on March 21, 2024 08:26