What steps did you take and what happened:
I am trying to implement `allowedCIDRs` for my clusters by setting `allowedCIDRs` under `apiServerLoadBalancer` in the OpenStackCluster spec.
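For context, the relevant part of the spec looks roughly like this (the `apiVersion` and the CIDR values are placeholders, not my real configuration):

```yaml
# Sketch of the OpenStackCluster spec discussed above.
# apiVersion and CIDR ranges are illustrative placeholders.
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha7
kind: OpenStackCluster
metadata:
  name: my-cluster
spec:
  apiServerLoadBalancer:
    enabled: true
    allowedCIDRs:
      - 192.0.2.0/24     # example: admin/operator network
      - 198.51.100.0/24  # example: would need to be a shared SNAT pool range
```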
The problem I face is that we do not use the standard Neutron implementation where all nodes use the router's IP for SNAT. This means that when bootstrapping the first node, the kubelet fails to start because its connections to the API server are blocked by the load balancer.
I could manually add all of our SNAT pool IPs and everything works. The problem with this is that the SNAT IPs are shared between multiple customers, and even if I use the new IPAM code that is under review, I would still need to manually add those IPs to the `allowedCIDRs` list.
What did you expect to happen:
I expect all cluster nodes to use the internal LB endpoint for API traffic. It seems odd to use the external IP for in-cluster traffic when an internal endpoint exists.
Anything else you would like to add:
Another alternative is to use the IPAM IPPool; the problem there is that we would need to watch another object and trigger an LB reconcile on changes to it. On the other hand, that object contains a list of valid IPs that could simply be appended. Still, I think it would make more sense to migrate to using the internal endpoint for in-cluster API traffic.
Environment:
Cluster API Provider OpenStack version (Or git rev-parse HEAD if manually built): latest master (commit: 5cc483b)
Cluster-API version: 1.6.1
OpenStack version: Ussuri
Minikube/KIND version: kind 0.20.0
Kubernetes version (use kubectl version): 1.29.1
OS (e.g. from /etc/os-release): Ubuntu 22.04
This is a CAPI limitation (which may in turn be based on a kubeadm limitation?): it is only possible to configure a single control plane endpoint, so it is the public one. I completely agree that it would be ideal to have separate internal and external endpoints, but I don't think there's currently anywhere to configure them.
Reading through the comments on that issue and also kubernetes-sigs/cluster-api#8500, it sounds like some other providers may have various degrees of workaround/hack for the issue which it might be worth investigating until we can implement it properly.
/kind bug