
CFE-10: Add host-network-bind-options #1002

Merged

Conversation

Elbehery
Contributor

This PR adds a proposal for adding binding options for the Ingress API.

@Elbehery
Contributor Author

/assign @alebedev87
/assign @lmzuccarelli
/assign @tjungblu

## Summary

This enhancement updates the Ingress Controller API to allow specifying binding options for the HostNetwork strategy, so that multiple instances of the ingress controller can run on the same node without port binding conflicts.
## Motivation

nit: Maybe add a new line before Motivation

@Amrita42 Mar 2, 2022

If I can make a suggestion, can we capitalize Ingress throughout the entire file where applicable? I do see instances of ingress.


While utilising the HostNetwork strategy for ingress controllers, ports 80, 443, 10443, 10444 and 1936 on the host are bound by HAProxy for http, https, no_sni, sni and stats respectively.

Those ports might be occupied by other processes, which makes it unfeasible to run multiple ingress controllers on the same node with the HostNetwork strategy.


maybe something like "These ports might be occupied by other processes which will make it impossible to run multiple..."


### User Stories

#### As a cluster administrator, I need to set binding ports for the IngressController

maybe add "so that I can deploy multiple ingress controllers on the same node for the HostNetwork strategy"


Also the deployments should not have ip:port conflicts for `SNI_PORT` and `NO_SNI_PORT`. There are multiple ways to achieve this:

- The cluster ingress operator configures the router to use a unique loopback address for the sni/nosni frontends. This loopback address would be determined deterministically, for example by hashing the ingresscontroller’s name.


nit: not sure about "determined deterministically", I'm not an English grammar expert but maybe something like "the loopback address is configured automatically , for example ...
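To make "determined deterministically" concrete, here is a minimal sketch (not from the proposal; the function name and the hash-to-address mapping are illustrative) of how a stable per-ingresscontroller loopback address could be derived by hashing its name:

```go
package main

import (
	"fmt"
	"hash/fnv"
	"net"
)

// loopbackForIngressController maps an ingresscontroller name to an address
// in 127.0.0.0/8. The mapping is deterministic: the same name always yields
// the same loopback address. Illustrative sketch only.
func loopbackForIngressController(name string) net.IP {
	h := fnv.New32a()
	h.Write([]byte(name))
	sum := h.Sum32()
	// Spread the hash over the last three octets, keeping the final octet
	// in the range 1..254.
	return net.IPv4(127, byte(sum>>16), byte(sum>>8), byte(sum%254)+1)
}

func main() {
	fmt.Println(loopbackForIngressController("default"))  // stable address for "default"
	fmt.Println(loopbackForIngressController("internal")) // different, but also stable
}
```

Because the address is a pure function of the name, repeated reconciliations of the same ingresscontroller always configure the same sni/nosni frontend address.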

@lmzuccarelli

@Elbehery - could you check and fix the lint problem?

@Elbehery Elbehery changed the title CFE-12: Add host-network-bind-options CFE-10: Add host-network-bind-options Jan 11, 2022
@lmzuccarelli

@Elbehery - let's discuss the test plan and other topics not addressed in the PR.

@Elbehery Elbehery force-pushed the Add-host-network-bind-options branch 2 times, most recently from 200806e to c4baec2 Compare January 11, 2022 13:27
Comment on lines 73 to 89
```go
// HostNetworkStrategy holds parameters for the HostNetwork endpoint publishing
// strategy.
type HostNetworkStrategy struct {
// ...
// bindingOptions defines parameters for binding haproxy in ingress controller pods.
// All fields are optional and will use their respective defaults if not set.
// See specific bindingOptions fields for more details.
//
//
// Setting fields within bindingOptions is generally not recommended. The
// default values are suitable for most configurations.
//
// +optional
BindingOptions *IngressControllerBindingOptions `json:"bindingOptions,omitempty"`
}
```
Contributor

This and subsequent code blocks got rendered weirdly: no syntax highlighting and the backticks are visible.
Maybe it's a matter of indentation in the raw markdown; I think the code block doesn't need the indent.

Contributor Author

done
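For context, the `IngressControllerBindingOptions` type referenced by the field above is not shown in this excerpt. Judging from the `IngressControllerPorts` fields quoted just below, it presumably wraps a ports struct along these lines (a reconstruction for readability, not the draft's exact definition):

```go
// IngressControllerBindingOptions describes how HAProxy binds ports in
// ingress controller pods. Reconstructed sketch; the draft under review may
// have defined additional fields.
type IngressControllerBindingOptions struct {
	// ports holds the host port numbers that HAProxy binds for http, https
	// and stats traffic when the HostNetwork strategy is used.
	//
	// +optional
	Ports *IngressControllerPorts `json:"ports,omitempty"`
}
```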

type IngressControllerPorts struct {
// http defines the port number which HAProxy process binds for
// http connections. Setting this field is generally not recommended. However in
// HostNetwork strategy, default http 80 port might be occupied by other processess
Contributor

Suggested change
// HostNetwork strategy, default http 80 port might be occupied by other processess
// HostNetwork strategy, default http 80 port might be occupied by other processes

HTTP int32 `json:"http,omitempty"`
// https defines the port number which HAProxy process binds for
// https connections. Setting this field is generally not recommended. However in
// HostNetwork strategy, default https 443 port might be occupied by other processess
Contributor

Suggested change
// HostNetwork strategy, default https 443 port might be occupied by other processess
// HostNetwork strategy, default https 443 port might be occupied by other processes

// stats is the port number which HAProxy process binds
// to expose statistics on it. Setting this field is generally not recommended.
// However in HostNetwork strategy, default stats port 1936 might
// be occupied by other processess
Contributor

Suggested change
// be occupied by other processess
// be occupied by other processes

If this was the first insertion (an L4 load balancer in front of the router), the router's port will be the only one seen by the server (the application inside OpenShift).
And if the server relies on `X-Forwarded-*` headers (like the OpenShift image registry, for instance), this may give the wrong impression that the client used the port set in `X-Forwarded-Port`, which in the case of custom port binding may be different.

A mitigation to this risk is to change the forwarded header policy on the ingress controller or on the route level: [documentation](https://docs.openshift.com/container-platform/4.8/networking/ingress-operator.html#nw-using-ingress-forwarded_configuring-ingress).
Contributor

Suggested change
A mitigation to this risk is to change the forwarded header policy on the ingress controller or on the route level: [documentation](https://docs.openshift.com/container-platform/4.8/networking/ingress-operator.html#nw-using-ingress-forwarded_configuring-ingress).
A mitigation to this risk is to change the forwarded header policy on the ingress controller or on the route level: [documentation](https://docs.openshift.com/container-platform/4.9/networking/ingress-operator.html#nw-using-ingress-forwarded_configuring-ingress).

@Elbehery Elbehery force-pushed the Add-host-network-bind-options branch from c4baec2 to 8dd3884 Compare January 12, 2022 15:36
@Elbehery
Contributor Author

@alebedev87 thanks for your review, I have adjusted all of them

cc @lmzuccarelli @sherine-k

### Validation

Setting `spec.endpointPublishingStrategy.type: HostNetwork` and ommitting
Contributor

Suggested change
Setting `spec.endpointPublishingStrategy.type: HostNetwork` and ommitting
Setting `spec.endpointPublishingStrategy.type: HostNetwork` and omitting

Contributor Author

@arjunrn adjusted 🙏🏽

@Elbehery
Contributor Author

@sherine-k I added you to the authors section 👍🏽


Users might define conflicting ports, which would cause the HAProxy process to fail at startup.

A mitigation to this risk is to implement a validation feature in the reconciliation loop of the Cluster Ingress Operator.
Contributor

So we're going to validate in the reconcile loop. It would be nicer to prevent the creation of IngressController resources with conflicting ports. For instance, when a NodePort service is created with a port which is already allocated, the creation fails with an error message like so:

The Service "service02" is invalid: spec.ports[0].nodePort: Invalid value: 30007: provided port is already allocated

Contributor

Yeah, it would be good to have a validating webhook for the ingress operator. But for the moment the validation is done in the reconcile loop.
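As a rough illustration of what that reconcile-loop check could look like (the function and its wiring are hypothetical, not taken from the cluster ingress operator):

```go
package main

import "fmt"

// validateHostNetworkPorts rejects an ingresscontroller whose requested host
// ports collide with each other. Hypothetical sketch; the operator's real
// validation and error reporting may differ.
func validateHostNetworkPorts(httpPort, httpsPort, statsPort int32) error {
	seen := map[int32]string{}
	for _, p := range []struct {
		name string
		port int32
	}{
		{"httpPort", httpPort},
		{"httpsPort", httpsPort},
		{"statsPort", statsPort},
	} {
		if other, ok := seen[p.port]; ok {
			return fmt.Errorf("port %d is requested by both %s and %s", p.port, other, p.name)
		}
		seen[p.port] = p.name
	}
	return nil
}

func main() {
	// The reconcile loop would surface this error as a status condition
	// instead of creating a router deployment with conflicting ports.
	fmt.Println(validateHostNetworkPorts(10080, 10443, 10443))
}
```

A validating webhook could run the same check at admission time, which is the stricter behavior discussed above.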


#### Usage of X-Forwarded headers

The default forwarded header policy of the ingress controller is to append X-Forwarded headers. This results in the insertion of the `X-Forwarded-Port` header on the OpenShift router side (HAProxy acts as a reverse proxy).
Contributor

But why is this needed only when a non-default port is used by the reverse proxy? This condition "And if the server relies on X-Forwarded-* headers (like OpenShift image registry for instance) this may give a wrong impression that the client used the port set in X-Forwarded-Port which in case of the custom port binding may be different." should be true even for the default router.

Contributor

The default router binds on the standard 80 and 443 ports, which match the ports of the LB which "frontends" the router (endpointPublishingStrategy == LoadBalancer).
This works well for applications like the image registry which need to send the token retrieval URL back to the client. The token retrieval URL uses:

- the OpenShift cluster's *.apps wildcard DNS, which gets resolved to the aforementioned LB
- the original port (the one appended to X-Forwarded-Port by the router)

So, if the router is bound on port 10443 instead of 443, the image registry will send the following URL:

https://registry.apps.<cluster_name>.<base_domain>.:10443/openshift/token?account=my-sa&client_id=docker&offline_token=true

which will fail as the "frontend" LB listens on the standard port 443, not 10443.

Contributor

The default policy is append. So each host and port will be appended. The image registry should just use the first host and port in the headers. I don't think having non-standard behavior when one setting is changed is a good idea. It's confusing for a user who is trying to debug an issue.

Contributor

The image registry does take the first host and port, it's just that the first values are the router's ones, as the LB in front of the OpenShift cluster is an L4 one, so it doesn't append any HTTP headers.

I don't think having non-standard behavior when one setting is changed is a good idea. It's confusing for a user who is trying to debug an issue.

By "non-standard behavior" you mean the disabling of the append of the X-Forwarded headers? And by "one setting is changed" you mean the usage of the custom ports?
If so, the idea was to warn about this potential issue but keep the EP reasonably small moving the responsibility about using X-Forwarded headers to the applications. Do you have any idea of how to avoid the issue at reasonable cost?

Contributor

Discussed this offline. This mitigation is not to be implemented in the controller but can be applied by the user to work around any issues which might arise from components which use the X-Forwarded-Port to reconstruct the request URL.

@@ -0,0 +1,298 @@
---
title: ingress-host-network-configurable-ports


that's extremely picky, but what's up with all those newlines here? this also seems to prevent the markdown renderer from showing this as a table


Update the IngressController API by adding an optional `HostNetworkParameters` field with type `*HostNetworkParameters` to the `HostNetworkStrategy` struct:

Contributor

These multiple levels of structs for the configuration seem redundant, since the structs contain a single field. The HTTP, HTTPS and STATS port configuration can be moved into HostNetworkStrategy directly.

Contributor

I agree with the idea of the simplification. But I think it can be done in many ways.

1. What Arjun proposed:

   endpointPublishingStrategy:
     type: HostNetwork
     hostNetwork:
       httpPort: 10080
       httpsPort: 10443
       statsPort: 11936

2. Similar to Arjun's one but with one more level to avoid the Port suffix:

   endpointPublishingStrategy:
     type: HostNetwork
     hostNetwork:
       ports:
         http: 10080
         https: 10443
         stats: 11936

3. What Sherine put into the EP:

   endpointPublishingStrategy:
     type: HostNetwork
     hostNetwork:
       hostNetworkParameters:
         httpPort: 10080
         httpsPort: 10443
         statsPort: 11936

Contributor

Removing the hostNetworkParameters level makes sense too.
I vote for #1.

Contributor

I think I'm more for the second option, just to logically group similar fields.
However, if the majority is for the first option, I'm fine with that.
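For reference, option 1 spelled out as Go API fields would look roughly like this (a sketch that also folds in the 1-65535 bounds suggested elsewhere in the review; the merged enhancement may differ in wording):

```go
// HostNetworkStrategy holds parameters for the HostNetwork endpoint
// publishing strategy. Sketch of option 1 above, not the merged text.
type HostNetworkStrategy struct {
	// httpPort is the host port that the router binds for plain HTTP traffic.
	//
	// +kubebuilder:validation:Minimum=1
	// +kubebuilder:validation:Maximum=65535
	// +kubebuilder:default=80
	// +optional
	HTTPPort int32 `json:"httpPort,omitempty"`

	// httpsPort is the host port that the router binds for HTTPS traffic.
	//
	// +kubebuilder:validation:Minimum=1
	// +kubebuilder:validation:Maximum=65535
	// +kubebuilder:default=443
	// +optional
	HTTPSPort int32 `json:"httpsPort,omitempty"`

	// statsPort is the host port on which the router exposes HAProxy
	// statistics and the /healthz/ready endpoint.
	//
	// +kubebuilder:validation:Minimum=1
	// +kubebuilder:validation:Maximum=65535
	// +kubebuilder:default=1936
	// +optional
	StatsPort int32 `json:"statsPort,omitempty"`
}
```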


## Motivation

In the current implementation, Ingress Controllers in `HostNetwork` strategy use standard ports for HTTP(80), HTTPS(443), NO_SNI(10443), SNI(10444) and STATS(1936) on the underlying nodes.
Contributor

We changed the HAProxy configuration not to use TCP ports 10443 and 10444 in OpenShift 4.10: openshift/router#326

Suggested change
In the current implementation, Ingress Controllers in `HostNetwork` strategy use standard ports for HTTP(80), HTTPS(443), NO_SNI(10443), SNI(10444) and STATS(1936) on the underlying nodes.
In the current implementation, Ingress Controllers in `HostNetwork` strategy use standard ports for HTTP (80), HTTPS (443), and stats (1936) on the underlying nodes.

Configuring the http, https and stats ports for ingress controllers that use the `HostNetwork` endpoint publishing strategy.


Update the IngressController API by adding an optional `HostNetwork` field with type `*IngressControllerHostNetwork` to the `HostNetworkStrategy` struct:
Contributor

A HostNetwork field already exists; you are adding new subfields to the existing HostNetwork field.

//
// +kubebuilder:validation:Optional
// +kubebuilder:validation:Minimum=1
// +kubebuilder:validation:Maximum=30000
Contributor

Why specify +kubebuilder:validation:Maximum=30000? I'm guessing the intention is to prevent overlap with the nodeport range, correct? The default nodeport range is 30000-32767, but this range is configurable, and anyway, the user may want to use a port above the high end of the nodeport range. I suggest removing this restriction, instead specifying +kubebuilder:validation:Maximum=65535, and adding a note in the godoc that the user would be well advised to avoid specifying a port in the configured nodeport range.

// +kubebuilder:default:=443
// +optional
HTTPSPort int32 `json:"httpsPort,omitempty"`
// stats is the port number that router pods will open on the hosting node for stats.
Contributor

This would be a good opportunity to explain that the purpose of exposing the metrics port is to facilitate health checks by external load-balancers. We can also provide some guidance on proper configuration of health checks. For example, something like the following could be helpful:

If you configure an external load-balancer to forward connections to this ingress controller, the load balancer should use this port for health checks. A load balancer can send HTTP probes to this port on a given node with the path /healthz/ready to determine whether the ingress controller is ready to receive traffic on the node. For proper operation, the load balancer must not forward traffic to a node until /healthz/ready reports ready, and the load balancer must stop forwarding within a maximum of 45 seconds after /healthz/ready starts reporting not-ready. Probing every 5 or 10 seconds, with a 5-second timeout and with a threshold of two successful or failed requests to become healthy or unhealthy, respectively, are well-tested values.
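The probing behavior described above amounts to a periodic HTTP GET against /healthz/ready on the stats port; a toy sketch of such a probe (illustrative only, real load balancers implement this natively):

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// probeReady performs the kind of readiness check described above: an HTTP
// GET to /healthz/ready on the ingress controller's stats port on one node.
// Illustrative only.
func probeReady(node string, statsPort int) bool {
	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Get(fmt.Sprintf("http://%s:%d/healthz/ready", node, statsPort))
	if err != nil {
		return false
	}
	defer resp.Body.Close()
	return resp.StatusCode == http.StatusOK
}

func main() {
	// A load balancer would poll this every 5-10 seconds and require two
	// consecutive successes or failures before changing the node's state.
	fmt.Println(probeReady("node1.example.com", 1936))
}
```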



Here's an example of a public and an internal ingress controllers with the HostNetwork strategy that run on the same nodes without port conflicts:
Contributor

Suggested change
Here's an example of a public and an internal ingress controllers with the HostNetwork strategy that run on the same nodes without port conflicts:
Here's an example of a public ingress controller and an internal ingress controller with the HostNetwork strategy that run on the same nodes without port conflicts:


#### Failure Modes

N/A.
Contributor

- A router pod could fail to be scheduled if no node had compatible labels and taints and available ports; in this case, the ingresscontroller's status conditions should report the scheduling failure.
- A router pod could be scheduled to a node where the configured port is in use by something other than a pod, such as a system daemon (the Kubernetes scheduler only checks for conflicts with ports allocated by Kubernetes); in this case, the ingresscontroller's status conditions should reflect the pod's failure to start.
- A router pod could be scheduled and start listening on a port inside the nodeport range; in this case, a subsequently created nodeport service could by chance be assigned the in-use port and fail to be configured on the host with the router pod.

Comment on lines 208 to 209
n
N/A.
Contributor

Spurious 'n'.

We could document here how to check the ingresscontroller's status conditions, router pod logs, and SDN/OVN logs to diagnose the aforementioned failure modes, and how to remediate them.


### Upgrade / Downgrade Strategy

N/A.
Contributor

There shouldn't be any issues when upgrading to a version of OpenShift with this enhancement, but there could be problems if the user downgraded after creating ingresscontrollers using this enhancement.


## Drawbacks

N/A.
Contributor

Additional operational complexity, which will increase the needed test matrix, frustration for users, and volume of customer cases.


## Alternatives

N/A.
Contributor

An alternative would be to continue to guide users to the NodePortService endpoint publishing strategy type. Another alternative (or variation on that alternative) would be to augment the NodePortService strategy to allow the user to specify requested ports within the nodeport range. I understand that some users have been less than completely satisfied with NodePortService, but it has been our answer for years, so it would be useful to document why we have decided now to add this enhancement.

Contributor

How can the cluster ingress operator find an available port from the nodeport range in a race-free way?

Contributor

From the docs:

If you want a specific port number, you can specify a value in the nodePort field. The control plane will either allocate you that port or report that the API transaction failed.

I think @Miciah was saying that the same configuration could be added to the node port strategy instead of trying to expose the ports through the host network.
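To make the quoted behavior concrete, requesting a fixed node port on a Service looks like this (sketch using the upstream core/v1 types; the service name and labels are made up for illustration):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// A NodePort Service that asks for a specific port. The API server either
	// allocates 30080 or rejects the Service if the port is already taken,
	// which is the "allocate or fail" contract quoted above.
	svc := corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "router-custom", Namespace: "openshift-ingress"},
		Spec: corev1.ServiceSpec{
			Type:     corev1.ServiceTypeNodePort,
			Selector: map[string]string{"app": "router-custom"}, // illustrative label
			Ports: []corev1.ServicePort{{
				Name:     "http",
				Port:     80,
				NodePort: 30080, // requested explicitly instead of letting the range pick one
			}},
		},
	}
	fmt.Printf("%s requests nodePort %d\n", svc.Name, svc.Spec.Ports[0].NodePort)
}
```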

Contributor

@alebedev87 Feb 2, 2022

I see, at the level of the contract with the user of the API, specifying host ports or node ports is equally prone to conflicts.
Then let me ask this timely question: why was this not considered as a counter-proposal to the customer need?
Why are customers more satisfied with the hostNetwork publishing strategy? Performance (fewer hops)?

Contributor

@Miciah, @arjunrn & @alebedev87:
From the information we got from the PMs (@lmzuccarelli + @Deepthidharwar), other customers are still interested in implementing this change for hostNetwork.

// A load balancer can send HTTP probes to this port on a given
// node with the path /healthz/ready to determine whether the
// ingress controller is ready to receive traffic on the node.
// For proper operation, the load balancer must not forward traffic // to a node until /healthz/ready reports ready, and the load
Contributor

Suggested change
// For proper operation, the load balancer must not forward traffic // to a node until /healthz/ready reports ready, and the load
// For proper operation, the load balancer must not forward traffic
// to a node until /healthz/ready reports ready, and the load

// +kubebuilder:validation:Maximum=65535
// +kubebuilder:validation:Minimum=1
// +kubebuilder:default=80
HTTPPort int `json:"HTTPPort"`


Cluster administrators should watch out for the ingress controller status after updates:
In case the router pod cannot be scheduled due to port unavailability (or any other scheduling concerns), the ingress controller's conditions will report the scheduling failure.
In such a case, the cluster administrator might need to inspect other pods or daemonsets that use the same port. Router pod logs and OVN/SDN logs might be helpful to identify the root cause of the scheduling failure.
Contributor

I would say "pods and system daemons"; a likely source of conflicts is running daemons on the host that aren't managed by OpenShift. ss or netstat would be an easy way to check for conflicts.

Contributor

We actually have a host-port registry -- https://github.com/openshift/enhancements/blob/master/dev-guide/host-port-registry.md -- as well as user-facing documentation (https://github.com/openshift/openshift-docs/blob/main/modules/installation-network-user-infra.adoc) for which ports we "claim". So end-user documentation should reference those ranges as being off-limits.


Cluster administrators might also encounter issues if an ingresscontroller with HostNetwork strategy is using a port from the nodePort range: if a nodeport service attempts to use the same port, the router pod will not be able to get scheduled correctly. The nodePort ingress controller's conditions should report the scheduling failure.
Contributor

If a nodeport service attempts to use a port that the ingresscontroller is already using, then the nodeport service may not work properly. I know that OVN-Kubernetes will log the conflict in its pods; I'm not sure about openshift-sdn's failure mode.

Comment on lines 223 to 230
status: "True"
type: PodsScheduled
Contributor

In case of failure to schedule pods, the operator would report PodsScheduled=False (I hope).

Logs reporting the failure to bind to the requested port on the underlying host should be found.

c. Checking SDN/OVN logs
Check the logs from the ovn/sdn pods on the nodes where the router pods are running for errors related to port conflicts.
Contributor

It would be helpful to identify the pods to check here. You can ask the SDN team if you need help figuring out the appropriate namespace or pod names.

Comment on lines 294 to 313
* Advise users to the NodePortService endpoint publishing strategy type.
* Augment the NodePortService strategy to allow the user to specify requested ports within the nodeport range.
Contributor

Usually this section will include some explanation of why the alternatives were rejected. Do we have that information? Maybe it's as simple as that these alternatives have been rejected by a subset of users who require HostNetwork for unspecified reasons. If we do know a reason why these alternatives were rejected, this would be the place to record it.

// +kubebuilder:validation:Maximum=65535
// +kubebuilder:validation:Minimum=1
// +kubebuilder:default=443
HTTPSPort int `json:"HTTPSPort"`
Contributor

It's not the API definition, so it's less important; however, to match the example below (and most API examples, actually), JSON tag names are better as lower camel case.

@sherine-k sherine-k force-pushed the Add-host-network-bind-options branch 2 times, most recently from bb5fe6c to 99593a2 Compare March 1, 2022 17:58
Comment on lines +249 to +253
oc logs -f ds/ovnkube-node -n openshift-ovn-kubernetes -c ovnkube-node
W0301 17:17:48.060076 5734 port_claim.go:119] PortClaim for xxxx: xxxx/xxxxxxxx on port: 32634, err: listen tcp :32634: bind: address already in use
E0301 17:17:48.060097 5734 port_claim.go:145] Error claiming port for service: xxxx/xxxxxxxx: listen tcp :32634: bind: address already in use
Contributor

This needs to be a shell command code block (sh).

Contributor

Updated. Thanks

@sherine-k sherine-k force-pushed the Add-host-network-bind-options branch from 99593a2 to 7da4094 Compare March 14, 2022 16:11
@sherine-k sherine-k force-pushed the Add-host-network-bind-options branch from bc1c8bd to b1d0c6e Compare March 22, 2022 16:31
@Miciah
Contributor

Miciah commented Mar 22, 2022

/approve

@openshift-ci
Contributor

openshift-ci bot commented Mar 22, 2022

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: Miciah

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-ci openshift-ci bot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Mar 22, 2022
@arjunrn
Contributor

arjunrn commented Mar 23, 2022

/label tide/merge-method-squash

@openshift-ci openshift-ci bot added the tide/merge-method-squash Denotes a PR that should be squashed by tide when it merges. label Mar 23, 2022
@sherine-k sherine-k force-pushed the Add-host-network-bind-options branch from b1d0c6e to 55a8336 Compare March 23, 2022 12:53
@Miciah
Contributor

Miciah commented Mar 23, 2022

/lgtm

@openshift-ci openshift-ci bot added the lgtm Indicates that a PR is ready to be merged. label Mar 23, 2022
@Miciah
Contributor

Miciah commented Mar 23, 2022

/refresh

@openshift-merge-robot openshift-merge-robot merged commit 53e64c2 into openshift:master Mar 23, 2022