KEP: Extending Apiserver Network Proxy to handle traffic originated from Node network #2025
Conversation
Welcome @irozzo-1A!
Hi @irozzo-1A. Thanks for your PR. I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with `/ok-to-test`. Once the patch is verified, the new status will be reflected by the `ok-to-test` label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
1.20 Enhancements Lead here 👋 Does this KEP have an issue open? If not, please open an issue in the Issues tab and also add a link to the issue here. Also, this KEP is using the older format that is missing the Production Readiness Review Questionnaire, etc., so if you could please update that, it would be awesome (see for reference https://github.com/kubernetes/enhancements/tree/master/keps/NNNN-kep-template).
I don't have an issue open yet. I'll take care of it.
I will also do that, thx for the pointer ;-)
/assign
/cc @anfernee
/ok-to-test
@irozzo-1A thanks for starting this KEP! I asked for some clarifications.

## Motivation

API server network proxy was originally introduced to allow running the cluster nodes on isolated networks, distinct from the one hosting the control plane components. This provides a way to handle traffic originating from the Kube API Server and going to the node networks. When using this setup, there are no options other than directly exposing the KAS to the Internet or setting up a VPN to handle traffic originating from the cluster nodes (i.e. Kubelet, pods). This could lead to security risks or complicated setups.
This could lead to security risks or complicated setups.
Can you elaborate on why exposing the Konnectivity Proxy Server to the cluster network solves the "security risks or complicated setups"?
Regarding the "security risks": I think that exposing the Konnectivity Proxy Server instead would add an additional layer of security, given that the channels with the proxy agents are secured with mTLS or token authentication.
The KAS is not directly exposed to the internet but only accessible from "secured" networks. This would protect it, for instance, from KAS misconfigurations or vulnerabilities that could expose sensitive information and/or access to unauthenticated users.
IMHO on a higher level, if we believe that exposing the Kubelet to the internet brings security risks, the same should hold for the KAS.
Regarding the "complicated setups": you are right, we did not elaborate enough on this point, and "complicated" is not the appropriate term to use here. We will amend the proposal.
The point we want to make is that right now there is no standard solution for this kind of setup. It is possible to rely on VPNs, for example, to achieve a similar goal, but this requires specific implementations. What we propose here is to build on top of what we already have and to have a consistent approach for master-to-node and node-to-master communications.

* `--bind-address=ip`: Local IP address where the Konnectivity Agent will listen for incoming requests. It will be bound to a dummy IP interface with an IP x.y.z.v defined by the user. Must be used together with the previous flag to enable incoming requests; if not set, for backward compatibility, only traffic initiated from the Control Plane will be allowed.

### Handling the Traffic from the Pods to the Agent |
How does the agent authenticate the pods or the kubelet?
It doesn't, it acts as a TCP forwarder without terminating TLS.
The agent listens for TCP connections at a specific port for each configured destination. When a connection request is received by the Konnectivity Agent, the following happens:
1. A gRPC DIAL_REQ message is sent to the Konnectivity server containing the destination address associated with the current port.
Has the proxy agent finished the TCP handshake with the client before step 1?
Yes, this is what we had in mind. I can make this explicit if you think it's necessary.
I'm not sure it would be easy to send the DIAL_REQ at the SYN or SYN/ACK phase. How could we do it?
As far as I know, the TCPListener.Accept method returns the connection once the TCP handshake is over.
BTW, do you foresee any advantage in sending the DIAL_REQ before the TCP handshake is over?
### Agent additional flags

* `--target=local_port:dst_host_ip:dst_port`: We can have multiple of these in order to support multiple destinations on the Master Network.
  `dst_host_ip`: the end target IP (apiserver or something else). In case of IPv6
I'm curious what the "something else" is. Can you give some examples?
(If we want to support proxying to things other than the apiserver, we should remove the "apiserver" from the repository name :)
Also how does the proxy server authenticate with the "something else"?
We don't have any use-case in mind at the moment. We could limit the scope to the KAS only.

### Deployment Model

The agent can be run as a static pod or a systemd unit. In any case, the agent should be started first so that it can give the kubelet, and later the hosted pods, access to the KAS. This means that using DaemonSets or Deployments is not an option in this setup, because the kubelet would not be able to get the pod manifests from the KAS.
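For illustration only, a static pod deployment of the agent could look roughly like the sketch below; the image reference, paths and addresses are placeholder values, and the flags are the ones proposed elsewhere in this KEP:

```
apiVersion: v1
kind: Pod
metadata:
  name: konnectivity-agent
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: konnectivity-agent
    image: example.registry/proxy-agent:placeholder   # placeholder image reference
    args:
    - --bind-address=100.64.0.1                        # example value
    - --target=6443:203.0.113.10:6443                  # example value (local_port:dst_host_ip:dst_port)
    volumeMounts:
    - name: agent-creds
      mountPath: /etc/konnectivity/creds
      readOnly: true
  volumes:
  - name: agent-creds
    hostPath:
      path: /etc/konnectivity/creds                    # credentials provisioned on the worker node file-system
```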
I'm not sure if running as a static pod will solve the kubelet bootstrapping issue. Kubelet needs access to the KAS to watch for secrets/configmaps and to register the node. I don't know if kubelet can bootstrap in this order:
- run the proxy agent as a static pod, waiting for it to establish tunnels to the KAS,
- start watching for secrets/configmaps/pods and register the node.
Also I'm not sure if other node components like node-problem-detector are ok with this bootstrap order.
I think @cheftako knows this better.
Definitely, the proxy agent is required to be up and running in order for the kubelet to perform its bootstrap sequence. I was expecting the static pods to be running without the kubelet being registered, but I did not test it, and I agree we should double-check whether this is feasible.

### Authentication

The Konnectivity agent currently supports mTLS or token-based authentication. Note that API objects such as Secrets cannot be accessed when either a static pod or a systemd service deployment strategy is used. The authentication secret should be made available to the agent through a different channel (e.g. provisioned in the worker node file-system).
This is talking about the authentication between the proxy agent and the proxy server, right? Can you point this out in the KEP?
Yes you are right, we'll make it clear.
@@ -0,0 +1,194 @@
---
title: Out-of-Tree Credential Providers |
Copy paste error I'm guessing? :P
Yep, thx for pointing out.

## Summary

The goal of this proposal is to allow traffic to flow from the Node Network to the Master Network. |
"Cluster network to the Control Plane network"
We initially used that terminology, but we changed it later to be consistent with the definitions below. Of course, we are open to changing it if you think it makes things clearer.

As mentioned above, pods make use of the Kubernetes default service to reach the KAS. To keep things transparent from a pod perspective, they will hit the Konnectivity Agent using the Kubernetes default service. The endpoint will be the Konnectivity Agent instead of the KAS.
The Kubernetes default service will be configured using the API server flag `--advertise-address ip` on the Control Plane side.
`--advertise-address ip` should match the `--bind-address ip` of the Konnectivity Agent described above.
Would the serving port used in the default `kubernetes` Service be updated as well? And if so, would that imply that the kube-apiserver and the Konnectivity Agent listen on the same port?
Yes, the Agent should listen on the secure port used by the KAS. I think that we should make this more clear. Thx for the hint.
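To illustrate how the flags would line up, a hypothetical configuration could look like this (all addresses and ports are example values, not part of the proposal):

```
# kube-apiserver (Control Plane side) -- example values
--advertise-address=100.64.0.1
--secure-port=6443

# Konnectivity Agent (Node side) -- proposed flags, example values
--bind-address=100.64.0.1
--target=6443:203.0.113.10:6443   # local_port:dst_host_ip:dst_port, pointing at the KAS
```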

### Handling the Traffic from the Pods to the Agent

As mentioned above, pods make use of the Kubernetes default service to reach the KAS. To keep things transparent from a pod perspective, they will hit the Konnectivity Agent using the Kubernetes default service. The endpoint will be the Konnectivity Agent instead of the KAS.
Do you mean the cluster-IP service (something like 10.96.0.1)? Does that mean kube-proxy has a dependency on Konnectivity? It looks like it does not, but I want to make sure it's accurate.
Yes, that is correct.
Kube-proxy has no dependency on the Konnectivity agent. As we mentioned below, when configuring the KAS it is necessary to make sure that the IP used by the Agent is the same as the one advertised by the KAS.

Currently, the Konnectivity Server accepts requests from the KAS through either the gRPC or the HTTP Connect interface and forwards the traffic to the Konnectivity Agent over the previously established connections (initiated by the Agents).

In order to enable traffic from kubelets and pods running on the Node Network, the Konnectivity Agents have to expose an endpoint that will be listening on a specific port for each of the destinations on the Master Network. As opposed to the traffic flowing from the Master Network to the Node Network, the Konnectivity Agent should act transparently: from a kubelet's or pod's standpoint, the Konnectivity Agent should be the final destination instead of acting as a proxy.
It's a little bit confusing. It's still a proxy that forwards the traffic to KAS, right?
Yes, you can see this as the equivalent of SSH remote port forwarding. The client is not aware it is interacting with a proxy; from its standpoint, it is sending the request to its final destination.
The agent listens for TCP connections at a specific port for each configured destination. When a connection request is received by the Konnectivity Agent, the following happens (a minimal sketch of this loop is given after the list):
1. A gRPC DIAL_REQ message is sent to the Konnectivity server containing the destination address associated with the current port.
2. Upon reception of the DIAL_REQ, the Konnectivity Server opens a TCP connection with the destination host/port and replies to the Konnectivity Agent with a gRPC DIAL_RES message.
3. At this point the tunnel is established and data is piped through it, carried over gRPC DATA packets.
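The Go sketch below illustrates this loop on the agent side. It is only a simplified illustration: the `Tunnel` interface is hypothetical and stands in for the agent's existing gRPC stream to the Konnectivity server; the real apiserver-network-proxy types and APIs differ.

```go
package forwarder

import (
	"io"
	"log"
	"net"
)

// Tunnel abstracts the agent's gRPC stream to the Konnectivity server:
// Dial sends a DIAL_REQ for dst, waits for the DIAL_RES, and returns a
// connection whose bytes are carried as DATA packets over the stream.
// This interface is hypothetical, not the real apiserver-network-proxy API.
type Tunnel interface {
	Dial(dst string) (io.ReadWriteCloser, error)
}

// forwardLocal accepts TCP connections on localAddr (the dummy-interface IP
// plus the configured local_port) and pipes each one to dst on the master
// network through the tunnel.
func forwardLocal(localAddr, dst string, t Tunnel) error {
	ln, err := net.Listen("tcp", localAddr) // Accept returns after the TCP handshake has completed
	if err != nil {
		return err
	}
	for {
		conn, err := ln.Accept()
		if err != nil {
			return err
		}
		go func(c net.Conn) {
			defer c.Close()
			remote, err := t.Dial(dst) // steps 1-2: DIAL_REQ / DIAL_RES
			if err != nil {
				log.Printf("dial %s through tunnel failed: %v", dst, err)
				return
			}
			defer remote.Close()
			go io.Copy(remote, c) // step 3: client bytes -> tunnel (DATA packets)
			io.Copy(c, remote)    // tunnel bytes -> client
		}(conn)
	}
}
```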
I guess this has a big performance impact. Because the other end is always `KAS:6443`, it seems a reverse proxy on the Konnectivity server side could do the same job without the agent.
Indeed, we have the same performance penalty that we are paying for traffic going from the Konnectivity server to the nodes.
When using a reverse proxy, we don't have the additional layer of security provided by the authentication between the Konnectivity Agent and the Konnectivity Server, and we also lose the ability to use SNI for load balancing.
I guess a whitelist is necessary for this sort of proxying in the reverse direction. The master network has access to everything in the node network, but I am not sure the reverse is true.
+1 to having an explicit allow list on the Konnectivity Server which controls where it will allow traffic to be sent on the control plane.
@anfernee @cheftako
Indeed, that is correct and we have taken this into account in the Risks and Mitigations / Allow list section.
### Traffic Flow

```
client =TCP=> (:6443) agent GRPC=> server =TCP=> KAS(:6443)
```
Since the control plane and cluster networks are disjoint, can you elaborate on how the agent -> server tunnel is established? Since the agent shares the same network as other pods on the cluster network, can other pods (eg: kubelet) not directly tunnel to the konnectivity server as well?
It is established by exposing the Konnectivity server. The only requirement is that the Agent must be able to route traffic to the Konnectivity Server (equivalent to what is required today).
Regarding the second question, as we don't have control over the clients reaching the KAS (apart from the kubelet), we cannot force them to establish tunnels with the Konnectivity Server.
### Handling the Traffic from the Kubelet to the Agent

Kubelet does not use the Kubernetes default service to reach the KAS. Instead, it relies on a bootstrap kubeconfig file that is used to connect to the KAS. It then generates a proper kubeconfig file that uses the same URL.
Instead of specifying the KAS FQDN/address in the bootstrap kubeconfig file, we will be using the local IP address of the Konnectivity agent (`--bind-address ip`).
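Purely as an illustration, the cluster entry in such a bootstrap kubeconfig would then point at the agent's local bind address rather than the KAS FQDN; the IP, port and CA path below are placeholder values:

```
apiVersion: v1
kind: Config
clusters:
- name: default
  cluster:
    certificate-authority: /etc/kubernetes/pki/ca.crt   # placeholder path
    # local address of the Konnectivity Agent (its --bind-address), not the KAS itself
    server: https://100.64.0.1:6443
```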
Are pods only able to communicate to the agent on the same machine?
Yes, because it will bind to an interface address with `host` (node-local) scope.
@cheftako we have addressed the comments. Would be great to get another review :)
@cheftako what is the status update on this thread? do you plan to put it
As mentioned before, we will be using the Kubernetes default service to route traffic to the agent. The service in itself has a couple of limitations: it can't be of type ExternalName, thus preventing the usage of DNS names. Also, some general Service limitations apply: endpoints can't use the link-local range or the localhost range. This means that we are left with the three private IP ranges (10.0.0.0/8, 172.16.0.0/12 and 192.168.0.0/16).

The agent will create a dummy interface, assign it the IP provided with the `bind-address` flag using `host` scope, and start listening on this IP:local_port (local_port is defined with the `target` flag). This will allow all agents to bind to the IP address advertised by the KAS, which will be valid only inside the node.
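For reference, creating such a dummy interface with a host-scoped address boils down to standard iproute2 commands like the following (the interface name and IP are example values):

```
ip link add konnectivity0 type dummy
ip addr add 100.64.0.1/32 dev konnectivity0 scope host
ip link set konnectivity0 up
```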
We thought about a possible shortcoming with this approach. The idea is to redirect the KAS traffic generated from pods on the node network to the Konnectivity agents, by setting the IP used by the agents with the `advertise-address` flag of the KAS. This IP will be set in the Endpoints of the `kubernetes` service in the default namespace, which is used by k8s clients.
On the other hand, this could be limiting if pods deployed on the master network rely on the `kubernetes` default service as well (e.g. CNI pods). As we won't have agents deployed on master nodes, this IP will be unreachable. To circumvent this, we could use the IP address of a load-balancer targeting the KAS instances in an HA setup.
Alternatively, we could consider another approach. The agent could use iptables and "dnat" the traffic going to the Kubernetes default service to the local agent. In this case, the configuration would probably be easier, but the reason we initially discarded this option is that the implementation is more complex, as we would need to take care of possible conflicts with other rules (e.g. kube-proxy in iptables mode).
@cheftako @anfernee @timoreimann any thoughts about this?
Not sure if I understand the load-balancer scenario correctly.
If you set the KAS LB IP via `advertise-address` to set it as the endpoint for the `kubernetes` svc, you wouldn't need the agents in the first place, would you?
On the LB side there's another potential downside for providers that try to host multiple KAS instances from different clusters behind one LB. There's no way to differentiate to which cluster the traffic belongs. SNI information will yield `kubernetes` for all. The SNI info from the agent connections makes that possible.
Just using the load-balancer approach on the master could also lead to a problem. IIRC on some cloud providers services behind the LB cannot directly talk to the external LB IP. You'd need to "dnat" this again on the master node, which we could do anyway (also in other scenarios) on the master node.
Personally I'd be fine to not be able to use the kubernetes svc on the master and be forced to configure the services with the k8s endpoint.
With regards to network ranges that we could use: Shouldn't we also be able to make use of the "carrier grade NAT" range (100.64.0.0 - 100.127.255.255). I'm assuming this has less chance of conflicting with any existing cluster ranges.
Hi @gottwald, thanks for reacting!
If you set the KAS LB IP via `advertise-address` to set it as the endpoint for the `kubernetes` svc, you wouldn't need the agents in the first place, would you?

I think I did not express my idea clearly enough. What I meant is that we could use a private LB, with an IP that is not routable from the node network and used from within the master network only, to allow the pods to reach the KAS via the default `kubernetes` service. On the node network, things would be unchanged.
Personally I'd be fine to not be able to use the kubernetes svc on the master and be forced to configure the services with the k8s endpoint.
If no one has any objection, I would be fine to start with this limitation too.
With regards to network ranges that we could use: Shouldn't we also be able to make use of the "carrier grade NAT" range (100.64.0.0 - 100.127.255.255). I'm assuming this has less chance of conflicting with any existing cluster ranges.
Thx for the hint, I did not know about this range. Using the link-local range would probably be the safest/cleanest solution, but as we cannot because of the Endpoints limitations, I have nothing against mentioning this range too. @youssefazrak any thoughts about this?
Nothing against it. And actually, that's a good idea.
The range is private to the cluster, and even supposing a host reaches it from a CGN network, there will be no routing conflict as the traffic is NATed to another range.
@@ -0,0 +1,694 @@
<!--
**Note:** When your KEP is complete, all of these comment blocks should be removed. |
Seems like the metadata has been moved to the kep.yaml, would be nice to get rid of these big comment blocks.
-->
# KEP-2025: Extending Apiserver Network Proxy to handle traffic originated from Node network

<!--
Please 😸
[documentation style guide]: https://github.com/kubernetes/community/blob/master/contributors/guide/style-guide.md
-->

The goal of this proposal is to allow traffic to flow from the Node Network to the Master Network. |
Can we elaborate here? In many environments this already works, so can we attempt a description of when you would need this solution?
"The goal of this proposal is to provide a mechanism which allows traffic to flow from the Node Network to the Master Network, when those networks are otherwise isolated and there is a desire not to expose the Kubernetes API Server publicly"?
yes indeed, that's more explicit
List the specific goals of the KEP. What is it trying to achieve? How will we
know that this has succeeded?
-->
* Handle requests from the nodes to the control plane. Enable communication from the Node Network to the Master Network without having to expose the KAS to the Node Network.
Is it just Nodes or is it any KAS client running in the Node Network? (So operators and the like)
It's any KAS client running on the node network.
-->
* Define a mechanism for exchanging the authentication information used for establishing the secure channels between agents and server (e.g. certificates, tokens).
* Define a solution involving fewer than one agent per node.
* Being able to reach arbitrary destinations on the master network; this could be considered in the future if some use-cases arise.
For now can we restrict it to just the KAS? (i.e. non-goal to talk to anything other than the KAS on the master network)
-->
Currently, the Konnectivity Server accepts requests from the KAS through either the gRPC or the HTTP Connect interface and forwards the traffic to the Konnectivity Agent over the previously established connections (initiated by the agents).

In order to enable traffic from kubelets and pods running on the Node Network, the Konnectivity Agents have to expose an endpoint that listens on a specific port and forwards the traffic to the KAS on the Master Network. As opposed to the traffic flowing from the Master Network to the Node Network, the Konnectivity Agent should act transparently: from a kubelet's or pod's standpoint, the Konnectivity Agent should be the final destination instead of acting as a proxy.
Anything capable of sending to that port will be able to send traffic to the KAS. We may want to think about options like listening on localhost or firewalling that port off from non-node-network traffic.
Yeah, we plan to use a `host` scope address, so that it will be possible to use it from within the host itself only.
### Traffic Flow

```
client =TCP=> (:6443) agent GRPC=> server =TCP=> KAS(:6443)
```
I'm fine with port 6443 being the default but I would suggest that be configurable.
In a HA setup there could be multiple servers that the agent is connected to. Do we have a reason to care where the traffic goes? (Eg. matching failure zone?) Or do we just pick a random server?
It actually is; I put 6443 here just as an example, as it is the default used by the KAS (if no `secure_port` flag is specified).
In a HA setup there could be multiple servers that the agent is connected to. Do we have a reason to care where the traffic goes? (Eg. matching failure zone?) Or do we just pick a random server?
@cheftako I would say that we pick a random server, but we can think about evolving this later in case of need.

* `--allowed-destination=dst_host:dst_port`: The address and port of the KAS.

Note: if this feature is extended to allow reaching arbitrary destinations on the master network, this can easily be generalized by allowing multiple occurrences of this flag and maintaining a list of allowed destinations.
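As an illustration of that generalization, the server could gate every node-initiated dial with a check along the following lines; the type and function names are hypothetical, not the actual proxy server code:

```go
package allowlist

import "fmt"

// AllowList holds the destinations configured via repeated
// --allowed-destination=dst_host:dst_port flags on the Konnectivity server.
type AllowList map[string]struct{}

// Add registers host:port as an allowed destination.
func (a AllowList) Add(hostPort string) {
	a[hostPort] = struct{}{}
}

// CheckDial rejects any node-network-initiated DIAL_REQ whose target is not
// explicitly allowed. An empty list allows nothing, so with no flag set the
// server forwards no node-to-master traffic.
func (a AllowList) CheckDial(target string) error {
	if _, ok := a[target]; ok {
		return nil
	}
	return fmt.Errorf("destination %q is not in the allow list", target)
}
```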
Can we explicitly specify that if this flag is absent the server will not allow any node-network-initiated requests (traffic) to be placed on the master network?
owning-sig: sig-cloud-provider
participating-sigs:
  - sig-network
status: provisional |
Hi there, 1.21 Enhancements Lead here.
Please make sure to change the status to `implementable` to meet one of the requirements for all KEP tracking for the release.
Hi @annajung, it's done.
/assign @johnbelamaric

* **Can the feature be disabled once it has been enabled (i.e. can we roll back
  the enablement)?**
  Yes, it can be disabled by simply changing the KAS `--advertise-address` and
Seems like you'd need to update the workers to remove the konnectivity agent that's configured locally.
indeed @deads2k the agents should be removed. I'll add this step
* **What are the SLIs (Service Level Indicators) an operator can use to determine
  the health of the service?**
  - [ ] Metrics
    - Metric name:
This isn't required for alpha, but if/when you move to beta, I'd like to see metrics from the konnectivity agent.
The PRR looks good for alpha. Please keep the comment about metrics in mind for beta. /approve
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: cheftako, deads2k, irozzo-1A. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing `/approve` in a comment.
Enhancement issue: #2347
Rendered version: https://github.com/kubernetes/enhancements/tree/master/keps/sig-cloud-provider/2025-extend-konnectivity-for-both-directions