GKE Internal LoadBalancer with a given reserved static LoadBalancerIP #9403
Comments
@fabioformosa: This issue is currently awaiting triage. If Ingress contributors determine this is a relevant issue, they will accept it by applying the appropriate triage label. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
I've just found in this Google doc page that, when a loadBalancerIP is not passed, Google assigns the internal load balancer an IP address from the primary IP address range, which is the same range used to allocate cluster nodes.
Perhaps it's not possible to assign an arbitrary private IP via that helm value. I'm looking for confirmation on whether it's correct to reserve a private IP address on my own, from the range assigned to GKE for cluster nodes.
Solved on my own.
My mistake was to set, in the helm value controller.service.internal.loadBalancerIP, an IP outside the cluster's subnet. I've just submitted PR #9406 to enrich the documentation of the helm chart, adding the missing property.
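For reference, the value can be set at install time. A minimal sketch (the release name, namespace, and IP below are hypothetical placeholders — the IP must fall inside the subnet range GCP expects for the cluster):

```shell
# Hypothetical example: "ingress-nginx" release/namespace and 10.128.0.50
# are placeholders; pick an IP inside your cluster's subnetwork range.
helm upgrade --install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx --create-namespace \
  --set controller.service.internal.enabled=true \
  --set controller.service.internal.loadBalancerIP=10.128.0.50
```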
kubernetes#9403 Add documentation for controller.service.internal.loadBalancerIP in Helm chart
/remove-kind bug
/area documentation
@longwuyuan: The label(s) referenced could not be applied. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/area docs |
Removed a comment from an already supported helm value and added a doc line
Removed a manually added line in favour of helm doc
…to the value.yaml
/assign
…cerIP (#9406)
* Update README.md: #9403 Add documentation for controller.service.internal.loadBalancerIP in Helm chart
* Update README.md: removed a duplicated row in the helm chart values
* #9403 added a doc to the internal loadBalancerIP: removed a comment from an already supported helm value and added a doc line
* #9403 Reverted a manually added line: removed it in favour of helm doc
* #9403 re-generated the README with the last doc line added to the values.yaml
* #9403 removed trailing spaces
* removed trailing spaces
What happened:
I've seen it's possible to pass the helm chart a pre-allocated private IP for the internal load balancer (in the same way as it's possible to do for the external load balancer) through the variable
controller.service.internal.loadBalancerIP
as shown in controller-service-internal.yaml. I ran the helm chart in a GKE cluster. The internal load balancer is created, unfortunately with no private IP associated with it. In the GCP UI, I read "This load balancer has no frontend configured".
The error I found in the kubernetes event list of the nginx service is
Error syncing load balancer: failed to ensure load balancer: googleapi: Error 400: Invalid value for field 'resource.IPAddress': '192.168.195.38'. Requested internal IP is outside the network/subnetwork range., invalid
192.168.195.38 is an IP I reserved earlier by creating a GCP private address. It is not part of the subnet used for the cluster nodes.
(https://cloud.google.com/compute/docs/ip-addresses/reserve-static-internal-ip-address)
I've also associated an internal domain name to it.
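The "Requested internal IP is outside the network/subnetwork range" rejection above can be reproduced locally: GCP only accepts a loadBalancerIP that falls inside the subnetwork's CIDR. A minimal sketch in plain bash (the 10.128.0.0/20 subnet is an assumed example, not the cluster's actual range):

```shell
#!/usr/bin/env bash
# Check whether an IP falls inside a CIDR range -- illustrates why GCP
# rejects 192.168.195.38 for a cluster whose subnet does not cover it.

# Convert a dotted-quad IPv4 address to a 32-bit integer.
ip_to_int() {
  local IFS=. a b c d
  read -r a b c d <<< "$1"
  echo $(( (a << 24) | (b << 16) | (c << 8) | d ))
}

# Print "yes" if $1 is inside CIDR $2, else "no".
in_cidr() {
  local ip=$1 cidr=$2
  local net=${cidr%/*} bits=${cidr#*/}
  local mask=$(( (0xFFFFFFFF << (32 - bits)) & 0xFFFFFFFF ))
  if [ $(( $(ip_to_int "$ip") & mask )) -eq $(( $(ip_to_int "$net") & mask )) ]; then
    echo yes
  else
    echo no
  fi
}

in_cidr 192.168.195.38 10.128.0.0/20   # no  -> GCP rejects it
in_cidr 10.128.3.7     10.128.0.0/20   # yes -> GCP accepts it
```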
What you expected to happen:
I would like the value
controller.service.internal.loadBalancerIP
have the effect of assigning that IP to the internal load balancer in GCP.
NGINX Ingress controller version (exec into the pod and run nginx-ingress-controller --version.):
helm chart v4.4.0
nginx-ingress-controller 1.5.1
Kubernetes version (use
kubectl version
):
1.24.4
Environment:
Cloud provider or hardware configuration: GCP
OS (e.g. from /etc/os-release):
Kernel (e.g.
uname -a
):
Install tools:
Basic cluster related info:
kubectl version
v1.22.10
kubectl get nodes -o wide
How was the ingress-nginx-controller installed:
helm ls -A | grep -i ingress
helm -n <ingresscontrollernamepspace> get values <helmreleasename>
kubectl describe ingressclasses
kubectl -n <ingresscontrollernamespace> get all -A -o wide
kubectl -n <ingresscontrollernamespace> describe po <ingresscontrollerpodname>
kubectl -n <ingresscontrollernamespace> describe svc <ingresscontrollerservicename>
Current state of ingress object, if applicable:
kubectl -n <appnnamespace> get all,ing -o wide
kubectl -n <appnamespace> describe ing <ingressname>
Others:
kubectl describe ...
of any custom configmap(s) created and in use
How to reproduce this issue:
Anything else we need to know:
If I don't set the internal load balancer IP, then GCP creates an internal load balancer with a random private IP.
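To avoid the random assignment, the doc linked above describes reserving a static internal address — which, per the resolution of this issue, must come from the cluster's subnetwork. A hedged sketch with gcloud (the address name, region, subnet, and IP are placeholders):

```shell
# Hypothetical names: "ingress-internal-ip", "us-central1", and
# "my-cluster-subnet" are placeholders; the IP must belong to that subnet.
gcloud compute addresses create ingress-internal-ip \
  --region=us-central1 \
  --subnet=my-cluster-subnet \
  --addresses=10.128.0.50

# Read back the reserved address to pass to the helm chart value
# controller.service.internal.loadBalancerIP.
gcloud compute addresses describe ingress-internal-ip \
  --region=us-central1 --format='value(address)'
```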