
[SDN-2481] Enhancement proposal to migrate Multiple External Gateways to use a CRD #1338

Conversation

jordigilh (Contributor)

This PR contains a proposal to migrate the current annotation-based implementation of multiple external gateways to a custom resource paired with its own controller. The main motivation is to provide better control over which workloads can be designated as gateways by moving the configuration out of the pod and namespace metadata and into a CR.

@trozet PTAL

openshift-ci bot requested review from abhat and dcbw, February 6, 2023 15:28
@jordigilh force-pushed the sdn_2481_multiple_external_gateways_crd branch 4 times, most recently from 235978e to 3d80586 on February 7, 2023 08:22

## Summary

When it comes to ingress traffic, load balancers and reverse proxies take care of redirecting the traffic securely to the destinations. Egress traffic is no different. Although it is already possible to use egress gateways, telco customers require the ability to fine-tune this traffic per namespace.

I think we should drop the word securely here. To me this implies some type of ipsec/encryption of traffic. You can just say "redirecting traffic to one of potentially multiple destinations".

"telco customers require the ability to dynamically configure one or more egress next hops per namespace via a common and secure API"?

Contributor Author

Understood. Will update.

* `k8s.ovn.org/routing-network`
* `k8s.ovn.org/bfd-enabled`

However, to ensure that only trusted deployments use these annotations, the solution is to leverage an OPA rule that filters which pods can use these annotations and in which namespaces.

what does OPA mean?

Contributor Author

Open Policy Agent. It's a policy engine that uses the Rego language to define policies. The example I am most familiar with is Gatekeeper. Kyverno in turn uses k8s manifests to define the rules of admission. In essence both are admission webhooks. This link details a bit more about each approach.
https://devopslearning.medium.com/day-6-open-policy-agent-opa-vs-kyverno-7604b5ecddd9

My point here was to give a generic example that would fit the narrative without specifying an implementation, such as kyverno or gatekeeper.

Contributor Author

I will update to mention an admission webhook instead. It is less biased towards any of the implementations mentioned earlier.
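For context, the admission webhook being referred to here is a standard Kubernetes ValidatingWebhookConfiguration that intercepts pod and namespace writes so a policy service can reject unauthorized use of the gateway annotations. A minimal sketch, with all names hypothetical and the validating service itself omitted:

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: exgw-annotation-guard            # hypothetical name
webhooks:
  - name: exgw-annotations.example.com   # hypothetical webhook name
    admissionReviewVersions: ["v1"]
    sideEffects: None
    failurePolicy: Fail
    rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE", "UPDATE"]
        resources: ["pods", "namespaces"]
    clientConfig:
      service:
        namespace: exgw-guard             # hypothetical namespace
        name: exgw-guard-svc              # hypothetical service
        path: /validate
```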

- Migrate the current pod/namespace annotation-based approach for the Multiple External Gateway configuration to a new CRD.
- Define a controller that reimplements the existing logic that handles the annotations, but based on the contents of the CRD.
- Extend test coverage to include the new CRD use cases, matching the same use cases covered in the annotation tests.
- CRD configuration takes precedence over the mentioned annotations in OpenShift 4.14 and onwards. Annotations will still be the primary source of truth in 4.13, regardless of the existence of a CR in the namespace. The current annotation logic remains in place and will need to be enhanced to implement this distinction.

maybe you can just mention that annotations will be removed in 4.14 but left in 4.13 as a migration path for any existing users? The only officially supported and documented method of configuration will be the CRD in 4.13 and beyond.

Contributor Author

Understood. I will update

enhancements/network/ovn-multi-external-gateway.md (outdated; resolved)

A new namespace-scoped CRD is created to represent the multiple external gateways configuration. The annotations are mapped as follows:
* `k8s.ovn.org/routing-external-gws` is a namespace specific annotation and is represented in the CRD as `routingExternalGateways`. The values represent a slice of strings containing the IPs used for external traffic.
This annotation works in conjunction with `k8s.ovn.org/external-gw-pod-ips` and is used to capture the IPs that are stored in conntrack.

external-gw-pod-ips tracks IPs of pods that are acting as external gateways. It does not store IPs in routing-external-gws.

Contributor Author

Thanks for the clarification. I will update

Contributor Author

As per @tssurya 's comment, this field is no longer relevant at this point, so I have not included it in the spec.

@tssurya (Contributor), Feb 18, 2023

@jordigilh: but this can be removed only if, like I mentioned, someone tests the OVN fix and removes these bits from OVNK if the OVN fix works. Until that is done, we must support this annotation.

Contributor Author

@trozet is there a plan to test the fix before this enhancement is implemented? If not, we'll have to update the annotation in both cases.

Contributor

@jordigilh: my two cents is you can do this as pre-work for the epic? I bet it will only simplify your epic, so it will be an advantage for you. We could create a bz/card for this work and you can own it; outside of that, it is currently not in the team's set of priorities AFAIK. I'll let @trozet chime in.
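To make the mapping discussed in this thread concrete: today the configuration lives in namespace annotations, and the proposal lifts it into a CR field. A sketch, with illustrative names and IPs:

```yaml
# Today: annotation-based configuration on the namespace
apiVersion: v1
kind: Namespace
metadata:
  name: tenant-a                                           # illustrative
  annotations:
    k8s.ovn.org/routing-external-gws: "172.18.0.2,172.18.0.3"
---
# Proposed: the same intent expressed in the CR spec
# (field name per the mapping quoted below)
spec:
  routingExternalGateways: ["172.18.0.2", "172.18.0.3"]
```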

A new namespace-scoped CRD is created to represent the multiple external gateways configuration. The annotations are mapped as follows:
* `k8s.ovn.org/routing-external-gws` is a namespace specific annotation and is represented in the CRD as `routingExternalGateways`. The values represent a slice of strings containing the IPs used for external traffic.
This annotation works in conjunction with `k8s.ovn.org/external-gw-pod-ips` and is used to capture the IPs that are stored in conntrack.
* `k8s.ovn.org/routing-namespaces` is mapped as `routingNamespaces` and it also contains a slice of the namespaces linked to this gateway configuration.

we can drop this because the CRD is namespace scoped. So it is implied that this CRD would affect pods in the same namespace. Alternatively, we could make the CRD global and then keep this field. The advantage would be one CR to serve multiple namespaces, but then that would mean less granular RBAC between namespace traffic. I'm kind of inclined to keep it namespace scoped, but open to other's opinions.

Contributor Author

My understanding is that the current implementation uses this annotation to allow gw pods to accept traffic from multiple namespaces. I'm fine with namespace scoped and removing this field as you suggested, but I worry that by doing so we would not be providing the same functionality that exists with annotations. Thoughts?


Right, the user would have to create several CRs, one for each namespace they want to route traffic from. That is a bit different than today, as they can declare in a single place that they want traffic from all namespaces. That seems less secure though, which is why I was thinking namespaced scope would be better.

@fedepaol wdyt?


Talking more with @jordigilh I think there is another angle we need to consider. If the object is namespace scoped then a tenant in that namespace would be able to control how their traffic egresses. In the real world use case I would imagine cluster admins want to control the traffic of their tenants and do not want to give this kind of access out. So maybe that is a good reason to make it cluster scoped and require cluster admin RBAC.


So, talking in the OVNK community call yesterday, Nvidia showed us their downstream policy based routing CRD, which has some overlap with what we are trying to do. So rather than having 2 different CRDs for routing, we discussed unifying into a single CRD. However, we would only implement pieces that we cared about (multiple external gateways) and Nvidia would bring their corresponding implementation for other cluster routing upstream later.

Example of their CRD:

```yaml
apiVersion: k8s.ovn.org/v1beta1
kind: AdminPolicyBasedRoute
metadata:
  name: honeypotting
spec:
  networkAttachmentName: default/ovn-primary
  policies:
  - from:
      podSelector:
        matchLabels:
          under_attack: "true"
    to:
      cidr: 0.0.0.0/0
    nexthop:
      ips:
      - 10.192.14.9
```

Nvidia is using this to match certain traffic to a particular destination, and set the next hop. They do this by Logical Route Policy in OVN on ovn_cluster_router, while we are using Logical Router Static Routes in the gateway router (GR). The reason for using logical route policy is they can match on both source and dest in a route (as well as potentially other criteria), while if they were to use only source or dest they could use regular static routes. However, policy routing takes precedence over static routing in OVN, so we need to be careful here that no policy should ever overlap if we mix policy and static routes.

Here is my first stab at something that we could use as unified CRD for both cases (assume cluster CIDR of 10.244.0.0/24, external network of 172.18.0.0/24):

```yaml
apiVersion: k8s.ovn.org/v1beta1
kind: AdminPolicyBasedRoute
metadata:
  name: honeypotting
spec:
  policies:
  ## intra-cluster example
  - trafficType: internal
    from:
      podSelector:
        matchLabels:
          under_attack: "true"
    to:
      cidr: 9.0.0.0/24
    nexthops:
      - ip: "10.244.0.5"
        bfdEnabled: false
      - ip: "10.244.0.6"
        bfdEnabled: false
      - host
    networkAttachmentName: default/ovn-primary

  ## gateway example
  - trafficType: external
    from:
      namespaceSelector:
        matchLabels:
          multiple_gws: true
    nexthops:
      - ip: "172.18.0.2"
        bfdEnabled: true
      - ip: "10.244.0.3"
        bfdEnabled: false
      - podSelector:
          matchLabels:
            external-gateway: true
        bfdEnabled: true
        networkAttachmentName: sriov1

status:
  externalGatewayIPs: ["172.18.0.2", "172.18.0.3", "172.18.0.4", "172.18.0.5"]
  internalGatewayIPs: ["10.244.0.5", "10.244.0.6"]
```

From the above we can define a few behaviors (a few of these ideas are very long term, but just thinking outside the box):

  1. CRD is cluster scoped for security.
  2. If traffic type is internal, the rules are applied to ovn_cluster_router. If external, then applied to GR.
  3. If a policy consists of both from and to, then logical route policy will be used. If only source or dest are provided, then logical static routes will be used.
  4. Policies should not overlap.
  5. In the first example there are 3 nexthops provided. Two are static ip addresses inside the cluster subnet, while a third nexthop "host" will indicate to route packets inside the local host where the pod lives (aka mgmt port). This allows users to redirect traffic to the kernel in case they want to use the host ip routing to egress a different interface.
  6. networkAttachmentName defines either the network of the selected pods (primary or secondary) to apply policy to or choose IP from a pod acting as an external gateway.
  7. networkAttachmentName is not applicable for trafficType external on selected pods to apply policy to as secondary networks have no ingress/egress.

Member

I am not sure the status above makes sense, or at least we need to provide a status per namespace.
I am also not sure that the internal / external naming is appropriate here, as (iiuc) even the internal one might affect egress traffic.

Re the host bit: it'd be nice if we could specify the next hop on the host. That would basically address what we are trying to achieve with services.

Contributor

Two things here,

  1. I feel the MEG CRD should be cluster-scoped, not namespace scoped - a namespace scope doesn't make sense here; the feature is more for admins not users, similar to egressIPs...

  2. I am slightly skeptical about merging this CR with what Nvidia is doing for another feature? Reason being we can't really change/expand CRDs like that over releases right, they are API breaking changes? It's going to be hard to get all the fields right in such a way that we can only expand in the future and not break the type of fields..., maybe we keep this v1alpha1 and hope this feature stays dev preview until Nvidia also brings its things in. OR I am missing details and maybe we have a doc from Nvidia that explains its feature in depth so that we are covered? -> example: is their feature also for egress traffic or is it for internal traffic? Mixing those things up into a single CRD could get confusing if these two features are doing opposite things on a high level.

3rd extra point which is food for thought - how would the controller know whether it's an internal v/s external gw thing here, depending on which we'd need to do things on GR v/s ovn_cluster_router? - we'd need that signal somehow from the CRD I guess..

@jordigilh (Contributor Author), Feb 13, 2023

Two things here,

1. I feel the MEG CRD should be cluster-scoped, not namespace scoped - a namespace scope doesn't make sense here; the feature is more for admins not users, similar to egressIPs...

Yes, the CRD is cluster scoped. My initial proposal had a mix of cluster and namespaced but that's no longer the case.

2. I am slightly skeptical about merging this CR with what Nvidia is doing for another feature? Reason being we can't really change/expand CRDs like that over releases right, they are API breaking changes? It's going to be hard to get all the fields right in such a way that we can only expand in the future and not break the type of fields..., maybe we keep this v1alpha1 and hope this feature stays dev preview until Nvidia also brings its things in. OR I am missing details and maybe we have a doc from Nvidia that explains its feature in depth so that we are covered? -> example: is their feature also for egress traffic or is it for internal traffic? Mixing those things up into a single CRD could get confusing if these two features are doing opposite things on a high level.

What I'm proposing is to use 2 CRDs, one for internal and another one for external traffic. We can keep the CRD structures similar but focus on what matters to us.

3rd extra point which is food for thought - how would the controller know whether it's an internal v/s external gw thing here, depending on which we'd need to do things on GR v/s ovn_cluster_router? - we'd need that signal somehow from the CRD I guess..

See my comment above.

* `k8s.ovn.org/routing-external-gws` is a namespace specific annotation and is represented in the CRD as `routingExternalGateways`. The values represent a slice of strings containing the IPs used for external traffic.
This annotation works in conjunction with `k8s.ovn.org/external-gw-pod-ips` and is used to capture the IPs that are stored in conntrack.
* `k8s.ovn.org/routing-namespaces` is mapped as `routingNamespaces` and it also contains a slice of the namespaces linked to this gateway configuration.
* `k8s.ovn.org/routing-network` is mapped as `routingNetwork` and defines the multus network configuration that matches the same name in the pod that will be used for external traffic. If the field is empty or undefined, the pod's IP will be used instead. This matches the current behavior used with annotations.

So today each exgw pod can configure this field to indicate to OVNK which IP to use from their multus network attachment. That means if we have 2 exgws, Pod A and Pod B, they could potentially have different routing-network values and still serve the same client namespace/pods. If we only allow for a singular value here, it means any pods matching the selector (let's assume Pod A and Pod B) would have to use the same network. Should we make this a plural slice of potential networks, and OVNK will just search each pod for one of the values? Or should we make it more fine grained with multiple selectors like:

```yaml
spec:
  podGateways:
    - network: blue
      podSelector:
        matchLabels:
          external-gateway: "true"
    - network: red
      podSelector:
        matchLabels:
          external-gateway2: "true"
```

Similarly, I think we allow bfd to be enabled on a per-pod basis (but not per static IP defined in the namespace). So maybe we need to have granularity there as well and separate it from statically assigned IPs. Like:

```yaml
kind: MultipleExternalGateway
apiVersion: k8s.ovn.org/v1
metadata:
  name: multiple-external-gateways-configuration
  namespace: default
spec:
  staticGateways: ["9.0.0.1", "9.0.0.2"]
  staticGatewaysBFD: true
  podGateways:
    - network: blue
      bfdEnabled: true
      podSelector:
        matchLabels:
          external-gateway: "true"
    - network: red
      podSelector:
        matchLabels:
          external-gateway2: "true"
status:
  externalGatewayPodIPs: ["10.10.1.2", "10.10.1.5"]
```

Contributor Author

Makes sense about BFD being per pod and being able to have multiple selectors.

Contributor Author

What is the purpose of staticGatewaysBFD in the manifest? If it's a deployment/pod specific feature, does it make sense to apply to the whole manifest?


So that is there to signal whether to use BFD with the gateways specified in staticGateways. Today we only allow you to configure BFD on the namespace for the entirety of all static IPs defined in the namespace annotation (not on an individual basis). But for pods acting as external gateways, we allow you to annotate those individually. In hindsight this wasn't a very consistent design decision :) So, since we do not want to change the functional behavior, I think we need to still allow for individual BFD selection for pod external gateways, but not for statically assigned IPs (previously done in the namespace annotation).

Does that make sense?

@fedepaol can you double check me here please.

Contributor Author

yeah, checking the code it does what you mentioned. On second thought, it might make more sense to allow defining BFD per external IP rather than as a whole as it is currently done:

```yaml
staticExternalGateways:
  - ip: "192.168.1.1"
    bfdEnabled: true
  - ip: "10.1.1.1"
```

I don't mind keeping it for backwards compatibility if it's worth keeping it, but I'm happy to rework/drop stuff that doesn't make sense or is not used/deprecated.


yeah, it would be nice to allow that. I was just trying to avoid any change in functionality so that there would not be extra work to do there. I don't think it is a huge effort though to change this behavior.

Contributor Author

I'm proposing to define 2 CRDs, one for each type of traffic, but keeping similar high-level fields (from, nexthops). Here is an update of the example posted earlier in this thread:

```yaml
apiVersion: k8s.ovn.org/v1beta1
kind: AdminPolicyBasedExternalRoute
metadata:
  name: honeypotting
spec:
  policies:
  ## gateway example
  - from:
      namespaceSelector:
        matchLabels:
          multiple_gws: true
    nexthops:
      - ip: "172.18.0.2"
        bfdEnabled: true
      - ip: "10.244.0.3"
        bfdEnabled: false
      - podSelector:
          matchLabels:
            external-gateway: true
        bfdEnabled: true
        namespaceSelector:
          matchLabels:
            gateway: true
        networkAttachmentName: sriov1
```

enhancements/network/ovn-multi-external-gateway.md (outdated; resolved)

trozet commented Feb 7, 2023

@tssurya @fedepaol can you please also review?

@jordigilh force-pushed the sdn_2481_multiple_external_gateways_crd branch from cf80183 to 433d23e on February 8, 2023 10:33

The following yaml shows an example of an instance of the CRD:

```yaml
Member

I am torn about having an all-in-one CRD for this.
Here in one place we define the gateways (two types of them) and their relationship to the namespaces.

If we add a CRD representing the gateways, it will be easier to name them properly, and have another CRD that establishes the relationship between a namespace and the gateways.

Something like "PodExternalGateway" for the pod part, "StaticExternalGateway" for the static part and "NamespaceGateways" to set the rules.

This sounds better also from a responsibility point of view: the gateways are part of the infrastructure, while choosing which namespace goes through which set of gateways can be done by a separate entity.

Contributor Author

I like the idea. It makes sense, since the definition of the gateways doesn't necessarily need to be linked to the namespaces. Linking them in the same CRD makes them less flexible.

Contributor Author

@trozet what do you think?


I can understand why having 2 different CRDs for statically defined gateways and pod gateways could make sense. I don't understand NamespaceGateways part, wouldn't that just be the namespace scope of the other 2 CRDs? Or are you talking about making a globally scoped NamespaceGateway CRD to select the other 2 CRDs and apply them to multiple namespaces?

If so, that seems like overkill doesn't it?

Member

My idea was that the gateway definition and how to apply it to a given namespace belong to two different concerns. One is "infrastructural", where you define the gateways, and another, more administrative one, is to choose which namespace should go through which gateways.
And if you let the CRDs be namespaced, then they are under the user's control and it's more or less the same as annotations (which I think is what you wrote in #1338 (comment))
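A rough illustration of the split being proposed here; every kind and field below is hypothetical, purely to visualize the separation between infrastructure and administration:

```yaml
# Infrastructure: define who the gateways are
apiVersion: k8s.ovn.org/v1alpha1        # hypothetical
kind: StaticExternalGateway             # hypothetical kind
metadata:
  name: edge-routers
spec:
  ips: ["172.18.0.2", "172.18.0.3"]
---
apiVersion: k8s.ovn.org/v1alpha1        # hypothetical
kind: PodExternalGateway                # hypothetical kind
metadata:
  name: vpn-gateways
spec:
  podSelector:
    matchLabels:
      external-gateway: "true"
---
# Administration: decide which namespace routes through which gateways
apiVersion: k8s.ovn.org/v1alpha1        # hypothetical
kind: NamespaceGateways                 # hypothetical kind
metadata:
  name: tenant-a-routing
spec:
  namespaces: ["tenant-a"]
  gatewayRefs: ["edge-routers", "vpn-gateways"]
```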

* `staticGateways` contains a slice of IPv4 or IPv6 addresses that are configured as static external gateways. It maps to the `k8s.ovn.org/routing-external-gws` annotation.
* `routingNamespaces` contains a slice of the namespaces linked to this gateway configuration. It maps to the `k8s.ovn.org/routing-namespaces` annotation.
* `podGateways` allows defining the selector and configuration criteria for the gateway pods. This configuration contains the following fields:
* `network` references the multus network configuration that matches the same name in the pod that will be used for external traffic. If the field is empty or undefined, the pod's IP will be used instead. This matches the current behavior used with annotations. This field maps to the annotation `k8s.ovn.org/routing-network`.
Member

Would it make sense to have a flag that defines the behaviour of using the pod's IP (as opposed to implicitly using it when the network is not defined)?

Contributor Author

Agreed, I need to think about it. In essence we're looking at two possibilities:

  • retrieve the IPs from a network reference in the multus config annotation.
  • retrieve the IPs from the node's network interface, assuming that the pod has hostNetwork set to true.


can you give an example of what you are thinking?

Contributor Author

It was more about providing two fields, as sketched below:

  • One that determines whether the source of the IPs is the primary (host network) or a secondary (multus managed) network.
  • For the multus case, a second field that provides the network name. In case of the primary network, this field would not be used.
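A sketch of that two-field idea; the `ipSource` field, its values, and the labels are all hypothetical:

```yaml
podGateways:
  - ipSource: NetworkAttachment   # hypothetical field: IPs come from a multus-managed secondary network
    network: sriov1
    podSelector:
      matchLabels:
        external-gateway: "true"
  - ipSource: HostNetwork         # hypothetical value: IPs come from the node interface
    podSelector:                  # (pod must run with hostNetwork: true)
      matchLabels:
        host-gateway: "true"
```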


A new namespace-scoped CRD is created to represent the multiple external gateways configuration. The `spec` structure exposes the following fields:
* `staticGateways` contains a slice of IPv4 or IPv6 addresses that are configured as static external gateways. It maps to the `k8s.ovn.org/routing-external-gws` annotation.
* `routingNamespaces` contains a slice of the namespaces linked to this gateway configuration. It maps to the `k8s.ovn.org/routing-namespaces` annotation.
Member

I know this comes from the previous iteration, but should this be routedNamespaces or just namespaces, now that the purpose of this CRD is clear and self-contained?

Contributor Author

I'll rename it to namespaces, but if the idea of splitting the CRD into 3 different ones works out we will not need this field anymore 😉


see this discussion: #1338 (comment)

enhancements/network/ovn-multi-external-gateway.md (outdated; resolved)
routingNamespaces: ["default", "stage"]
podGateways:
  - network: external-gateway
    bfdEnabled: true
Member

It was possible to set bfdEnabled both on the gw pod and on the namespace. If we put it here, we are removing a degree of configuration.
It would make sense to raise it to the root level of the object.

Contributor Author

I will add the ability to define it at the spec level for now, but I'm more inclined to break this CRD into 3 different ones following your previous suggestion.

Contributor Author

Updated the spec. I think it's getting close to what you're suggesting with 3 CRDs: there's one section for staticGateways, another one for podGateways, and the namespaces.


see my comment here: #1338 (comment)


### Test Plan

* Unit test coverage for the CR-based implementation to at least match the use cases in the existing annotation-based test cases.
Member

I think we had e2e tests too covering external gateways (if somebody did not kill them :P )

Contributor Author

😄 well, the idea would be to clone these tests to use the CRs instead. The setup and flows should still apply as long as we keep feature parity with the annotation implementation.


yeah we still have those


fedepaol commented Feb 8, 2023

@tssurya @fedepaol can you please also review?

left a few comments, hope they are relevant :P

enhancements/network/ovn-multi-external-gateway.md (outdated; resolved)

## Summary

When it comes to ingress traffic, load balancers and reverse proxies take care of redirecting traffic to one of potentially multiple destinations. Egress traffic is no different. Although it is already possible to use egress gateways, telco customers require the ability to dynamically configure one or more egress next hops per namespace via a common and secure API.
Contributor

nit: I don't see how this opening statement around ingress and proxies is related to this enhancement. We could just straight start with explaining "what is the MEG feature" briefly and that there is a need to allow for a common API to expose the feature in a more GA-ed fashion.

Contributor

Also, for the second part of the opening,
"telco customers require the ability to dynamically configure one or more egress next hops per namespace via a common and secure API"
let the stress be on the common and secure API, not the dynamically configure part, which is already possible today through annotations. IMHO, the goal of the Summary should be that this is just an API KEP exposing an existing feature that has been in dev preview for long, and that it will provide a controller implementing the current functionality as is (it's not adding/implementing a new feature; ofc this need not be mentioned explicitly).

Contributor Author

Ack

Contributor Author

@tssurya updated, please take a look.

enhancements/network/ovn-multi-external-gateway.md (outdated; resolved)

## Motivation

The goal of this enhancement is to translate the need of an admission webhook to a custom resource controller.
Contributor

This sentence is confusing to me. An admission webhook concept is being introduced in this KEP in the previous sentence.

But we aren't even remotely doing that, right? Our solution is a CRD and a controller to manage it. I'd think these are two different ways to solve the problem.

So why are we even mentioning a webhook model? Are these solutions synonyms, in that the controller model somehow implements the webhook need? Maybe I am missing something (sorry for the dumb question). But in the end I am left with why we mentioned a webhook when we are going for the second option...

Contributor Author

The difference is that in the current implementation there is no mechanism to control the usage of the annotations in pods or namespaces. You need an admission webhook to provide a level of control over which pods and namespaces can have these annotations. Using a CRD you get the benefit that it's bound to have RBAC associated with it. Hence, the control that was enforced by the admission webhook is now translated into a set of roles/rolebindings for a given set of users/groups.

Note that the admission webhook is not a functional part of the annotation based implementation; it's more of an enhancement to grant a certain level of security that needs to be deployed separately.

My understanding is that the main motivator for this enhancement is to improve the security aspect of this solution, as well as to improve the usability by leveraging CRDs. I'm not familiar with the current usability issues of the existing annotation based approach, so please let me know if I'm missing anything.
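Concretely, the control described here reduces to standard RBAC on the new resource. A minimal sketch; the resource name assumes the AdminPolicyBasedExternalRoute CRD proposed later in this review, and the role and group names are hypothetical:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: external-gateway-admin                      # hypothetical
rules:
  - apiGroups: ["k8s.ovn.org"]
    resources: ["adminpolicybasedexternalroutes"]   # assumes the CRD name proposed below
    verbs: ["get", "list", "watch", "create", "update", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: external-gateway-admins                     # hypothetical
subjects:
  - kind: Group
    name: network-admins                            # hypothetical group
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: external-gateway-admin
  apiGroup: rbac.authorization.k8s.io
```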

- Define a controller that reimplements the existing logic that handles the annotations but on the contents of the CRD.
- Extend test coverage to include the new CRD use cases, matching the same use cases covered in the annotation tests.
- CRD configuration will be the only mechanism supported in OpenShift 4.14 and onwards. Annotations will still be supported in 4.13; however, the CRD will be the only officially supported and documented method of configuration.
- Extend the existing unit tests for the annotation based logic to accommodate for the scenario where CR and annotations exist in 4.13 and onwards.
Contributor

Couple this point with point 3? I don't see the need for a redundant new point to talk about tests.

Contributor Author

I see 2 cases here:

  • Only CRs exist, which is our ideal scenario. That's point 3.
  • CRs and annotations both exist, which is a scenario that we'd rather avoid but is still possible. For 4.13, we are defining the annotations as the preferred form. This is not covered by the current test scenarios and needs to be included in some shape or form.

These are 2 distinct test objectives and I wanted to highlight the scope and the reason, rather than just cover them in a single requirement.

| Version | Annotations | CR | Behavior |
| --- | --- | --- | --- |
| 4.14 | No | Yes | Use CR information to configure external gateways |
| 4.14 | Yes | Yes | Use CR information to configure external gateways |
| 4.14 | Yes | No | Use annotations to configure external gateways |
| 4.14 | No | No | Nothing to do |
Contributor

Maybe this is an implementation detail, but how do we plan to do this upstream? We don't have a versioning system there. Assuming this feature is going to go upstream first and then downstream.

Contributor Author

Good point. I don't know if there is some kind of configuration that we could use to enable/disable the CRs. I'm not aware of one, but it would be the best approach. Any ideas?


## Proposal

Create a new CRD that contains a direct relationship to the annotations used in the pod/namespace to manage the external gateways configuration:
Contributor

Can you add more details on where this CR will live? I am hoping this is something similar to egressIP/egressFW and not egressRouter ?

Contributor Author

It's planned to be cluster scoped, so I hope it will not be bound to a namespace. Do you still want me to include a reference to cluster scoped? Or maybe something else?

Contributor

Ack, no, I meant which repo this will live in, OVNK or OCP/API; I'm guessing it's OVNK.

Contributor Author

Yes, it's going to be in ovn-kubernetes, inside go-controller/pkg/crd along with egressfirewall, egressip and egressqos.
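A sketch of how such a CRD might be registered next to the other OVN-K CRDs in that package; the group, kind, and version follow the draft discussed in this review, and the schema is stubbed out:

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: adminpolicybasedexternalroutes.k8s.ovn.org
spec:
  group: k8s.ovn.org
  scope: Cluster
  names:
    kind: AdminPolicyBasedExternalRoute
    listKind: AdminPolicyBasedExternalRouteList
    plural: adminpolicybasedexternalroutes
    singular: adminpolicybasedexternalroute
  versions:
    - name: v1beta1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              x-kubernetes-preserve-unknown-fields: true   # full schema elided in this sketch
```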


### API Extensions

A new namespace-scoped CRD is created to represent the multiple external gateways configuration. The `spec` structure exposes the following fields:
Contributor

Why is this a namespace-scoped CRD? I'd assume this whole feature is not for users/app owners, more for admins.

Contributor

On second thought, to make the CRD below neater, we can make it namespace-scoped, but then the scoped namespace should be the one the config is applied to; that might mean a lot of CRs, one per namespace.

Contributor Author

It's going to be cluster scoped at the end. Updating soon 😄


### API Extensions

A new namespace-scoped CRD is created to represent the multiple external gateways configuration. The `spec` structure exposes the following fields:
* `namespaces` contains a slice of the namespaces linked to this gateway configuration. It maps to the `k8s.ovn.org/routing-namespaces` annotation.
Contributor

Please clarify "linked" as being the set of namespaces to which this gateway configuration will be applied.

Contributor Author

Will update. I've rewritten this part, but I will keep your description where it applies


A new namespace-scoped CRD is created to represent the multiple external gateways configuration. The `spec` structure exposes the following fields:
* `namespaces` contains a slice of the namespaces linked to this gateway configuration. It maps to the `k8s.ovn.org/routing-namespaces` annotation.
* `staticGateways` allows defining the configuration for external gateways. It exposes the following fields:
Contributor

We should have a staticGatewayConfig and a dynamicGatewayConfig (for lack of a better word; somehow podGateways isn't working in my head) that both use the bfdEnabled field, and we should be solid on the kubebuilder definitions so that at least one of them must be defined, if not both.
It would be good to mention for all these fields which are mandatory and which are optional, etc.

Contributor Author

Pending confirmation, but we might end up dropping the 3 CRDs in favor of a single CRD. I will mention which fields are optional.



enhancements/network/ovn-multi-external-gateway.md (outdated; resolved)
* `podGateways` allows defining the selector and configuration criteria for the gateway pods. This configuration contains the following fields:
* `network` references the multus network configuration that matches the same name in the pod that will be used for external traffic. If the field is empty or undefined, the pod's IP will be used instead. This matches the current behavior used with annotations. This field maps to the annotation `k8s.ovn.org/routing-network`.
* `bfdEnabled` determines if Bi-Directional Forwarding Detection (BFD) is supported in this network and follows the same behavior as with the annotation: it's enabled when set to true and disabled when false or omitted in the manifest. It maps to the `k8s.ovn.org/bfd-enabled` annotation in the pod.
* `podSelector` implements a [set-based](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#set-based-requirement) label selector to filter the pods in the namespace that match this network configuration.
Contributor

I am not following the purpose of this new podSelector field and what it translates to from the annotations world. Help needed in understanding, and sorry for being dumb.

Contributor Author

Sure, no problem 😄: The current implementation uses pod annotations to identify the pods that perform the role of gateways. Moving to a CRD, these annotations are no longer expected to be found in the pod's manifest. Instead, we have to resort to using filtering criteria to identify the pods that are meant to be gateways. This label selector is exactly that: it defines the filtering criteria, via label matching, that identifies the running pods to be processed as gateways.

Contributor

ah thanks for explaining! makes sense...
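For example, a set-based selector like the one referenced above can express the same intent the pod annotations encode today; the label key and values here are hypothetical:

```yaml
podSelector:
  matchExpressions:
    - key: gateway-role            # hypothetical label
      operator: In
      values: ["external", "backup"]
```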

N/A

#### Dev Preview -> Tech Preview
This feature is GA in 4.13, no Dev or Tech previews are required.
Contributor

It would be good to have the "why they are not required"? :) especially for an API change. I'd think the current MEG is not GA, right?

This feature is GA in 4.13, no Dev or Tech previews are required.

#### Removing a deprecated feature
Deprecation of the annotation based logic to occur in OpenShift 4.14.
Contributor

Is this in 4.14 or 4.15? I think this contradicts what's mentioned above; let's be consistent everywhere.

Contributor Author

Good catch. I will update to reflect that in 4.14 the annotations will no longer be available.

Deprecation of the annotation based logic to occur in OpenShift 4.14.

### Upgrade / Downgrade Strategy
N/A
Contributor

I think I mentioned this in another comment, but it would be good to outline how users currently using the annotations are expected to migrate to CRD? is it going to be manual?

Contributor Author

Yes, manual as far as I understand. I will mention that there is no migration tool or documentation planned.
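Under that assumption, a manual migration would look roughly like the sketch below: create the CR, then strip the annotations from the namespace by hand. The field layout follows the draft spec in this review; names and IPs are illustrative:

```yaml
# Before: namespace annotated for multiple external gateways
apiVersion: v1
kind: Namespace
metadata:
  name: tenant-a                                   # illustrative
  annotations:
    k8s.ovn.org/routing-external-gws: "172.18.0.2"
    k8s.ovn.org/bfd-enabled: "true"
---
# After: the equivalent CR; the annotations above are then removed by hand
apiVersion: k8s.ovn.org/v1beta1
kind: AdminPolicyBasedExternalRoute
metadata:
  name: tenant-a-gateways
spec:
  policies:
    - from:
        namespaceSelector:
          matchLabels:
            kubernetes.io/metadata.name: tenant-a  # built-in namespace name label
      nexthops:
        static:
          - ip: "172.18.0.2"
            bfdEnabled: true
```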

enhancements/network/ovn-multi-external-gateway.md (outdated; resolved)
@tssurya (Contributor) left a comment

See questions inline (I also responded to some outstanding questions on the outdated set of comments.)


When it comes to ingress traffic, load balancers and reverse proxies take care of redirecting traffic to one of potentially multiple destinations. Egress traffic is no different. Although it is already possible to use egress gateways, telco customers require the capability to use a common and secure API to dynamically configure one or more egress next hops per namespace.

Multiple External Gateway is an existing OVN-K feature that uses external gateways for ingress and egress traffic for all the pods in an annotated namespace, using the following pod and namespace annotations. More information can be found in this [white paper](https://www.f5.com/pdf/white-papers/f5-telco-gateway-for-red-hat-openshift-wp.pdf).
Contributor

++ thanks for adding this.


* Namespace
* `k8s.ovn.org/routing-external-gws`
* `k8s.ovn.org/bfd-enabled`
Contributor

Pending discussion from #1338 (comment); based on that we might need to update the info on external-gw-pod-ips.



- Define a controller that reimplements the existing logic that handles the annotations but on the contents of the CRD.
- Modify the existing logic to make it CRD aware, so that it can watch for changes in an annotation, as it works today, or fall back to the CR, in case the annotation does not exist.
- Extend test coverage to include the new CRD use cases, matching the same use cases covered in the annotation tests.
- CRD configuration will be the only officially supported and documented mechanism in OpenShift >= 4.13. Annotations will still be available in OpenShift <= 4.13.
Contributor

We are one week away from feature complete deadline, we might have to update this to 4.14 and other places accordingly...


### API Extensions

A new cluster scoped CRD `AdminPolicyBasedExternalRoute` is created to represent the multiple external gateways configuration. The `spec` structure exposes the following fields:
Contributor

Are we going to call this AdminPolicyBasedExternalRoute? I don't think it's a good name to add "adminpolicybased" into the CRD's name, since we are going to control its RBAC as for cluster-admins only, right? IMHO, the name of the CRD should reflect the feature's high-level goal.
If Nvidia's use case for "internalroute" is not going to be part of this CRD (since the name is "externalroute") and it's another one, then ExternalGateway or EgressGateway is enough, right? cc @trozet @fedepaol WDYT?
Maybe even PolicyBasedRouting?

PS: I am bad at naming around APIs, so if other reviewers are ok with the existing name, it's fine to ignore this comment.

* `nexthops`: defines the destination where the packets should be forwarded to. There are two types of references: static and dynamic. Static IPs usually refer to interfaces that hold an IP which doesn't change over time. Dynamic IPs link to pod IPs and can potentially change over time as they are linked to the pod's lifecycle.
* `static`: is a slice of static IPs, each item containing the following fields:
* `ip`: IPv4 or IPv6 of the next destination hop. This field is mandatory.
* `skipHostSNAT`: When true, it disables applying the Source NAT to the host IP.
Contributor

Why do we need this skipHostSNAT? Don't we have that info in the from field? All pods in that namespace won't have the SNAT configured, right?

Does this feature actually support having a SNAT towards the host IP? -> I thought this is something that can be deciphered from the disableSNATMultipleGWs option. Wondering why we need this flag here on a per-externalgw basis? cc @trozet ?

Contributor

Another thought that just came in: both skipHostSNAT and bfdEnabled seem to be set for the whole set of static exgw pods or dynamic exgw pods. Should this be allowed on a per-exgw basis?

Because for the namespace annotations it's true we behave this way, but for pod annotations we actually provide a way to do this per exgw pod.


disableSNATMultipleGWs was a poor naming choice by me. Really all it should mean is "perPodSNAT". Then with this granularity a user can decide whether or not the pod should be SNAT'ed on a per-route basis. This is an enhancement compared to the previous behavior.
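Read that way, both flags would sit on each next hop rather than on the policy as a whole. A sketch following the draft field names quoted below:

```yaml
nexthops:
  static:
    - ip: "172.18.0.2"
      bfdEnabled: true
      skipHostSNAT: true    # this hop bypasses the host SNAT
    - ip: "172.18.0.3"      # this hop keeps the defaults (both false)
```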

* `policies`: slice of policies:
* `from`: determines where the routing policies should be applied to. Only `namespaceSelector` is supported for external traffic. This field is mandatory.
* `nexthops`: defines the destination where the packets should be forwarded to. There are two types of references: static and dynamic. Static IPs usually refer to interfaces that hold an IP which doesn't change over time. Dynamic IPs link to pod IPs and can potentially change over time as they are linked to the pod's lifecycle.
* `static`: is a slice of static IPs, each item containing the following fields:
Contributor

nit: staticGateways ?

* `ip`: IPv4 or IPv6 of the next destination hop. This field is mandatory.
* `skipHostSNAT`: When true, it disables applying the Source NAT to the host IP.
* `bfdEnabled`: determines if Bi-Directional Forwarding Detection (BFD) is supported in this network and follows the same behavior as with the annotation: it's enabled when set to true and disabled when false or omitted in the manifest. It maps to the `k8s.ovn.org/bfd-enabled` annotation in the pod. This field is optional.
* `dynamic`: references contain the following fields:
Contributor

nit: dynamicGateways ?

* `namespaceSelector`: defines a `set-based` selector that is used to filter the namespaces where to apply the `podSelector`. This field is optional.
* `skipHostSNAT`: When true, it disables applying the Source NAT to the host IP.
* `bfdEnabled`: determines if Bi-Directional Forwarding Detection (BFD) is supported in this network and follows the same behavior as with the annotation: it's enabled when set to true and disabled when false or omitted in the manifest. It maps to the `k8s.ovn.org/bfd-enabled` annotation in the pod. This field is optional.
* `networkAttachmentName`: Name of the network attachment definition. The name needs to match one from the list of logical networks generated by Multus and associated to the pod via annotation. This field is optional. When this field is empty or not defined, the IP from the pod's host network will be used instead. The pod must have `hostNetwork` enabled in order for this feature to work.
Contributor

what if the provided networkAttachmentName is not found in the podSelector provided? Will the behaviour be an error or will you fallback to the hostNetwork IP?

Contributor Author

it will return an error.
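For reference, a dynamic gateway pod exposing a secondary interface via Multus might look like the sketch below; the CR's `networkAttachmentName: sriov1` must match one of the networks requested in the `k8s.cni.cncf.io/networks` annotation, otherwise the controller errors out as stated above. Names and image are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: exgw-pod                                  # illustrative
  labels:
    external-gateway: "true"
  annotations:
    k8s.cni.cncf.io/networks: sriov1              # Multus secondary network request
spec:
  containers:
    - name: gw
      image: registry.example.com/gateway:latest  # illustrative
```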

- podSelector:
    matchLabels:
      secondary-gateway: true
  skipHostSNAT: true
Contributor

Is this optional field? Does that mean the value is going to be false by default?

Contributor Author

Correct. It's optional and defaults to false.

#### Support Procedures
N/A
## Implementation History
First implementation targeting OpenShift 4.13 in GA capacity.
Contributor

nit: probably good to add upstream docs for this feature in its goals (I know downstream docs are a must for an API feature)... this can help other engineers with knowledge sharing.

@jordigilh force-pushed the sdn_2481_multiple_external_gateways_crd branch from 8951438 to 275d77b on March 29, 2023 18:15
@trozet left a comment

/approve

openshift-ci bot added the approved label on Mar 30, 2023
@tssurya (Contributor) left a comment

/lgtm thanks @jordigilh

- Modify the existing logic to make it CRD aware, so that it can watch for changes in an annotation, as it works today, or fall back to the CR, in case the annotation does not exist.
- Extend test coverage to include the new CRD use cases, matching the same use cases covered in the annotation tests.
- CRD configuration will be the only officially supported and documented mechanism in OpenShift >= 4.14. Annotations will still be available in OpenShift <= 4.14.
- Extend the existing unit tests for the annotation based logic to accommodate for the scenario where CR and annotations exist in 4.13.
Contributor

Suggested change
- Extend the existing unit tests for the annotation based logic to accommodate for the scenario where CR and annotations exist in 4.13.
- Extend the existing unit tests for the annotation based logic to accommodate for the scenario where CR and annotations exist in 4.14.


### Non-Goals

- Removal of the annotation based logic to configure multiple external gateways. This should be carried out at least no later than OpenShift 4.14 to guarantee a stable deprecation path to existing deployments that leverage on the annotations to configure the external gateways.
Contributor

Suggested change
- Removal of the annotation based logic to configure multiple external gateways. This should be carried out at least no later than OpenShift 4.14 to guarantee a stable deprecation path to existing deployments that leverage on the annotations to configure the external gateways.
- Removal of the annotation based logic to configure multiple external gateways. This should be carried out at least no later than OpenShift 4.15 to guarantee a stable deprecation path to existing deployments that leverage on the annotations to configure the external gateways.


The following yaml shows an example of an instance of the CRD:

```yaml
Contributor

TODO: As discussed if any changes come in the API based on implementation, we can come back to update the fields, so loosely lgtm-ing on this initial one.


openshift-ci bot commented Mar 30, 2023

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: trozet, tssurya

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment


tssurya commented Mar 31, 2023

/lgtm

openshift-ci bot added the lgtm label on Mar 31, 2023

openshift-ci bot commented Mar 31, 2023

@jordigilh: all tests passed!

Full PR test history. Your PR dashboard.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.

openshift-merge-robot merged commit 5e4bb59 into openshift:master on Mar 31, 2023