There's actually a significant difference between the two parameters: `k8s::manage_kube_proxy` is used by `k8s::server` (through `k8s::server::resources`) to deploy kube-proxy as an in-cluster component, while `k8s::node::manage_proxy` deploys kube-proxy as an on-node component, with entirely different auth and configuration requirements.
Setting both to `true` would result in a broken cluster, as you'd have two separate proxy instances fighting over routing configuration. Setting both to `false` leaves the cluster entirely without the default kube-proxy component, which is only a valid setup for clusters running network overlays that do their own proxying.
Perhaps `k8s::manage_kube_proxy` should accept an enum instead, something like `Variant[Enum['in-cluster', 'on-node'], Boolean]`, with `true` being handled the same as `'in-cluster'`.
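A rough sketch of how that enum variant might look in the `k8s` class (the normalization logic and `$proxy_mode` name are illustrative, not the module's actual code):

```puppet
# Hypothetical sketch: k8s class accepting an enum for proxy management.
class k8s (
  # true behaves like 'in-cluster'; false disables kube-proxy entirely
  # (e.g. for network overlays that do their own proxying).
  Variant[Enum['in-cluster', 'on-node'], Boolean] $manage_kube_proxy = true,
) {
  # Normalize the boolean shorthand onto the enum form (undef = disabled).
  $proxy_mode = $manage_kube_proxy ? {
    true    => 'in-cluster',
    false   => undef,
    default => $manage_kube_proxy,
  }
}
```

With this shape, `k8s::server` would deploy the in-cluster proxy only when `$proxy_mode == 'in-cluster'`, and `k8s::node` would manage the on-node proxy only when `$proxy_mode == 'on-node'`, making the two-proxies-at-once misconfiguration unrepresentable.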
**Affected Puppet, Ruby, OS and module versions/distributions**
**What are you seeing?**

In the `k8s` class: `Boolean $manage_kube_proxy = true,`

In the `k8s::node` class: `Boolean $manage_proxy = false,`
**What behaviour did you expect instead?**

Should we use the `k8s` class's `manage_kube_proxy` parameter as the default for the `k8s::node` class's `manage_proxy` parameter?
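For reference, the wiring the question asks about would look roughly like this (illustrative only; the class body is elided, and inheriting the value directly could enable both proxy instances at once, which is the conflict described in the comments):

```puppet
# Hypothetical sketch of defaulting the node parameter from the cluster one.
class k8s::node (
  # Inherit the cluster-level setting instead of a hard-coded false.
  Boolean $manage_proxy = $k8s::manage_kube_proxy,
) {
  # ...
}
```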