CFE-10: Add host-network-bind-options #1002
Conversation
/assign @alebedev87
## Summary

This enhancement updates the Ingress Controller API to allow specifying binding options in the HostNetwork strategy, and allows running many instances of the ingress controller on the same node using the HostNetwork strategy, therefore avoiding port binding conflicts.

## Motivation
nit: Maybe add a new line before Motivation
If I can make a suggestion, can we capitalize "Ingress" throughout the entire file where applicable? I do see instances of "ingress".
While using the HostNetwork strategy for Ingress Controllers, the ports 80, 443, 10443, 10444, and 1936 on the host are bound by HAProxy for http, https, no_sni, sni, and stats respectively.

Those ports might be occupied by other processes, which makes it unfeasible to run multiple Ingress Controllers on the same node with the HostNetwork strategy.
maybe something like "These ports might be occupied by other processes which will make it impossible to run multiple..."
### User Stories

#### As a cluster administrator, I need to set binding ports for the IngressController
maybe add "so that I can deploy multiple ingress controllers on the same node for the HostNetwork strategy"
The deployments should also not have ip:port conflicts for `SNI_PORT` and `NO_SNI_PORT`. There are multiple ways to achieve this:

- The cluster ingress operator configures the router to use a unique loopback address for the sni/nosni frontends. This loopback address would be determined deterministically, for example by hashing the ingresscontroller's name.
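The deterministic derivation mentioned in this bullet could be sketched as follows. This is a minimal illustration only; `loopbackFor` is a hypothetical helper, and the actual operator may use a different hash or address scheme:

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// loopbackFor maps an ingresscontroller name to an address in 127.0.0.0/8.
// The same name always yields the same address, so the sni/nosni frontends
// of each controller get a stable, per-controller loopback address.
// Hypothetical sketch, not the operator's actual implementation.
func loopbackFor(name string) string {
	h := fnv.New32a()
	h.Write([]byte(name))
	v := h.Sum32()
	// Keep the last octet in 2..254 to avoid 127.x.y.0 and 127.x.y.1.
	return fmt.Sprintf("127.%d.%d.%d", (v>>16)&0xff, (v>>8)&0xff, 2+v%253)
}

func main() {
	fmt.Println(loopbackFor("default"))
	fmt.Println(loopbackFor("internal"))
}
```

Because the address is a pure function of the name, two router deployments for the same ingresscontroller would always agree on it without any coordination.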
nit: not sure about "determined deterministically"; I'm not an English grammar expert, but maybe something like "the loopback address is configured automatically, for example ..."
@Elbehery - could you check and fix the lint problem?
@Elbehery - let's discuss the test plan and other topics not addressed in the PR.
```go
// HostNetworkStrategy holds parameters for the HostNetwork endpoint publishing
// strategy.
type HostNetworkStrategy struct {
	// ...

	// bindingOptions defines parameters for binding haproxy in ingress controller pods.
	// All fields are optional and will use their respective defaults if not set.
	// See specific bindingOptions fields for more details.
	//
	// Setting fields within bindingOptions is generally not recommended. The
	// default values are suitable for most configurations.
	//
	// +optional
	BindingOptions *IngressControllerBindingOptions `json:"bindingOptions,omitempty"`
}
```
This and subsequent code blocks got rendered weirdly: no syntax highlighting and backticks visible. Maybe it's a matter of indentation in the raw markdown; I think the code block doesn't need the indent.
done
```go
type IngressControllerPorts struct {
	// http defines the port number which HAProxy process binds for
	// http connections. Setting this field is generally not recommended. However in
	// HostNetwork strategy, default http 80 port might be occupied by other processess
```

Suggested change:

```diff
- // HostNetwork strategy, default http 80 port might be occupied by other processess
+ // HostNetwork strategy, default http 80 port might be occupied by other processes
```
```go
	HTTP int32 `json:"http,omitempty"`
	// https defines the port number which HAProxy process binds for
	// https connections. Setting this field is generally not recommended. However in
	// HostNetwork strategy, default https 443 port might be occupied by other processess
```

Suggested change:

```diff
- // HostNetwork strategy, default https 443 port might be occupied by other processess
+ // HostNetwork strategy, default https 443 port might be occupied by other processes
```
```go
	// stats is the port number which HAProxy process binds
	// to expose statistics on it. Setting this field is generally not recommended.
	// However in HostNetwork strategy, default stats port 1936 might
	// be occupied by other processess
```

Suggested change:

```diff
- // be occupied by other processess
+ // be occupied by other processes
```
If this was the first insertion (L4 load balancer in front of the router), the router's port will be the only one seen by the server (the application inside OpenShift). And if the server relies on `X-Forwarded-*` headers (like the OpenShift image registry, for instance), this may give a wrong impression that the client used the port set in `X-Forwarded-Port`, which in the case of custom port binding may be different.

A mitigation for this risk is to change the forwarded header policy on the ingress controller or on the route level: [documentation](https://docs.openshift.com/container-platform/4.8/networking/ingress-operator.html#nw-using-ingress-forwarded_configuring-ingress).
Suggested change:

```diff
- [documentation](https://docs.openshift.com/container-platform/4.8/networking/ingress-operator.html#nw-using-ingress-forwarded_configuring-ingress)
+ [documentation](https://docs.openshift.com/container-platform/4.9/networking/ingress-operator.html#nw-using-ingress-forwarded_configuring-ingress)
```
@alebedev87 thanks for your review, I have adjusted all of them.
### Validation

Setting `spec.endpointPublishingStrategy.type: HostNetwork` and ommitting

Suggested change:

```diff
- Setting `spec.endpointPublishingStrategy.type: HostNetwork` and ommitting
+ Setting `spec.endpointPublishingStrategy.type: HostNetwork` and omitting
```
@arjunrn adjusted 🙏🏽

@sherine-k I added you to the authors section 👍🏽
Users might define conflicting ports, which would cause the HAProxy process to fail at startup.

A mitigation for this risk is to implement a validation feature in the reconciliation loop of the Cluster Ingress Operator.
So we're going to validate in the reconcile loop. It would be nicer to prevent the creation of IngressController resources with conflicting ports. For instance, when a NodePort service is created with a port which is already allocated, the creation fails with an error message like:

    The Service "service02" is invalid: spec.ports[0].nodePort: Invalid value: 30007: provided port is already allocated
Yeah, it would be good to have a validating webhook for the ingress operator. But for the moment the validation is done in the reconcile loop.
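As an illustration of the reconcile-loop validation discussed here, a check for conflicting ports could look like the following. This is a sketch only; the function name and signature are assumptions, not the actual operator code:

```go
package main

import "fmt"

// validatePorts checks that the configured host ports are pairwise distinct,
// mirroring the validation the operator could run during reconciliation.
// Illustrative sketch; field names are assumptions, not the final API.
func validatePorts(httpPort, httpsPort, statsPort int32) error {
	seen := map[int32]string{}
	for name, p := range map[string]int32{
		"http":  httpPort,
		"https": httpsPort,
		"stats": statsPort,
	} {
		if other, ok := seen[p]; ok {
			return fmt.Errorf("port %d is used by both %s and %s", p, other, name)
		}
		seen[p] = name
	}
	return nil
}

func main() {
	fmt.Println(validatePorts(80, 443, 1936)) // distinct ports: no error
	fmt.Println(validatePorts(80, 80, 1936))  // http/https conflict: error
}
```

On a validation error the operator would set a degraded status condition on the ingresscontroller rather than pass the conflicting ports through to HAProxy.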
#### Usage of X-Forwarded headers

The default forwarded header policy of the ingress controller is to append X-Forwarded headers. This results in the insertion of the `X-Forwarded-Port` header on the OpenShift router side (HAProxy acts as a reverse proxy).
But why is this needed only when a non-default port is used by a reverse proxy? This condition — "And if the server relies on `X-Forwarded-*` headers (like OpenShift image registry for instance) this may give a wrong impression that the client used the port set in `X-Forwarded-Port` which in case of the custom port binding may be different." — should be true even for the default router.
The default router binds on the standard 80 and 443 ports, which match the ports of the LB which "frontends" the router (`endpointPublishingStrategy == LoadBalancer`).
This works well for applications like the image registry which need to send the token retrieval URL back to the client. The token retrieval URL uses:
- the OpenShift cluster's `*.apps` wildcard DNS, which gets resolved to the aforementioned LB
- the original port (the one appended to `X-Forwarded-Port` by the router)

So, if the router is bound on port 10443 instead of 443, the image registry will send the following URL:
`https://registry.apps.<cluster_name>.<base_domain>.:10443/openshift/token?account=my-sa&client_id=docker&offline_token=true`
which will fail, as the "frontend" LB listens on the standard port 443, not 10443.
The default policy is `append`, so each host and port will be appended. The image registry should just use the first host and port in the headers. I don't think having non-standard behavior when one setting is changed is a good idea. It's confusing for a user who is trying to debug an issue.
The image registry does take the first host and port; it's just that the first values are the router's ones, as the LB in front of the OpenShift cluster is an L4 one, so it doesn't append any HTTP headers.

> I don't think having non-standard behavior when one setting is changed is a good idea. It's confusing for a user who is trying to debug an issue.

By "non-standard behavior" do you mean the disabling of the append of the X-Forwarded headers? And by "one setting is changed" do you mean the usage of the custom ports? If so, the idea was to warn about this potential issue but keep the EP reasonably small, moving the responsibility for using X-Forwarded headers to the applications. Do you have any idea how to avoid the issue at a reasonable cost?
Discussed this offline. This mitigation is not to be implemented in the controller but can be applied by the user to work around any issues which might arise from components which use `X-Forwarded-Port` to reconstruct the request URL.
```yaml
---
title: ingress-host-network-configurable-ports
```
That's extremely picky, but what's up with all those newlines here? This also seems to prevent the markdown renderer from showing this as a table.
|
||
Update the IngressController API by adding an optional `HostNetworkParameters` field with type `*HostNetworkParameters` to the `HostNetworkStrategy` struct: | ||
|
||
```go |
These multiple levels of structs for the configuration seem redundant, since the structs contain a single field. The HTTP, HTTPS and STATS port configuration can be moved into `HostNetworkStrategy` directly.
I agree with the idea of the simplification. But I think it can be done in many ways.

1. What Arjun proposed:

   ```yaml
   endpointPublishingStrategy:
     type: HostNetwork
     hostNetwork:
       httpPort: 10080
       httpsPort: 10443
       statsPort: 11936
   ```

2. Similar to Arjun's, but with one more level to avoid the `Port` suffix:

   ```yaml
   endpointPublishingStrategy:
     type: HostNetwork
     hostNetwork:
       ports:
         http: 10080
         https: 10443
         stats: 11936
   ```

3. What Sherine put into the EP:

   ```yaml
   endpointPublishingStrategy:
     type: HostNetwork
     hostNetwork:
       hostNetworkParameters:
         httpPort: 10080
         httpsPort: 10443
         statsPort: 11936
   ```
Removing the `hostNetworkParameters` level makes sense too. I vote for option 1.
I think I'm more for the second option, just to logically group similar fields. However, if the majority is for the first option, I'm fine with that.
|
||
## Motivation | ||
|
||
In the current implementation, Ingress Controllers in `HostNetwork` strategy use standard ports for HTTP(80), HTTPS(443), NO_SNI(10443), SNI(10444) and STATS(1936) on the underlying nodes. |
We changed the HAProxy configuration not to use TCP ports 10443 and 10444 in OpenShift 4.10: openshift/router#326

Suggested change:

```diff
- In the current implementation, Ingress Controllers in `HostNetwork` strategy use standard ports for HTTP(80), HTTPS(443), NO_SNI(10443), SNI(10444) and STATS(1936) on the underlying nodes.
+ In the current implementation, Ingress Controllers in `HostNetwork` strategy use standard ports for HTTP (80), HTTPS (443), and stats (1936) on the underlying nodes.
```
Configuring the http, https and stats ports for the Ingress Controllers that use the `HostNetwork` endpoint publishing strategy.

Update the IngressController API by adding an optional `HostNetwork` field with type `*IngressControllerHostNetwork` to the `HostNetworkStrategy` struct:
A `HostNetwork` field already exists; you are adding new subfields to the existing `HostNetwork` field.
```go
	//
	// +kubebuilder:validation:Optional
	// +kubebuilder:validation:Minimum=1
	// +kubebuilder:validation:Maximum=30000
```
Why specify `+kubebuilder:validation:Maximum=30000`? I'm guessing the intention is to prevent overlap with the nodeport range, correct? The default nodeport range is 30000-32767, but this range is configurable, and anyway, the user may want to use a port above the high end of the nodeport range. I suggest removing this restriction, instead specifying `+kubebuilder:validation:Maximum=65535`, and adding a note in the godoc that the user would be well advised to avoid specifying a port in the configured nodeport range.
```go
	// +kubebuilder:default:=443
	// +optional
	HTTPSPort int32 `json:"httpsPort,omitempty"`
	// stats is the port number that router pods will open on the hosting node for stats.
```
This would be a good opportunity to explain that the purpose of exposing the metrics port is to facilitate health checks by external load-balancers. We can also provide some guidance on proper configuration of health checks. For example, something like the following could be helpful:

> If you configure an external load-balancer to forward connections to this ingress controller, the load balancer should use this port for health checks. A load balancer can send HTTP probes to this port on a given node with the path /healthz/ready to determine whether the ingress controller is ready to receive traffic on the node. For proper operation, the load balancer must not forward traffic to a node until /healthz/ready reports ready, and the load balancer must stop forwarding within a maximum of 45 seconds after /healthz/ready starts reporting not-ready. Probing every 5 or 10 seconds, with a 5-second timeout and with a threshold of two successful or failed requests to become healthy or unhealthy, respectively, are well-tested values.
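The guidance above could translate into an external load-balancer configuration roughly like the following. This is an illustrative HAProxy fragment, assuming the default stats port 1936 and two hypothetical node addresses:

```
backend ingress_hostnetwork
    mode tcp
    # probe /healthz/ready on the stats port, not the traffic port
    option httpchk GET /healthz/ready
    http-check expect status 200
    # probe every 5s; 2 failures -> unhealthy, 2 successes -> healthy
    default-server inter 5s fall 2 rise 2
    server node1 10.0.0.1:443 check port 1936
    server node2 10.0.0.2:443 check port 1936
```

The key point is `check port 1936`: traffic is forwarded to the router's https port while readiness is probed on the stats port.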
Here's an example of a public and an internal ingress controllers with the HostNetwork strategy that run on the same nodes without port conflicts:
Suggested change:

```diff
- Here's an example of a public and an internal ingress controllers with the HostNetwork strategy that run on the same nodes without port conflicts:
+ Here's an example of a public ingress controller and an internal ingress controller with the HostNetwork strategy that run on the same nodes without port conflicts:
```
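Such a pair could look like the following. This is an illustrative sketch; the `httpPort`/`httpsPort`/`statsPort` field names follow one of the API shapes discussed in this review and may differ from the final API:

```yaml
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: public
  namespace: openshift-ingress-operator
spec:
  endpointPublishingStrategy:
    type: HostNetwork
    hostNetwork:
      httpPort: 80
      httpsPort: 443
      statsPort: 1936
---
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: internal
  namespace: openshift-ingress-operator
spec:
  endpointPublishingStrategy:
    type: HostNetwork
    hostNetwork:
      httpPort: 10080
      httpsPort: 10443
      statsPort: 11936
```

Because the two controllers bind disjoint host ports, their router pods can be scheduled to the same nodes.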
#### Failure Modes

N/A.
- A router pod could fail to be scheduled if no node had compatible labels and taints and available ports; in this case, the ingresscontroller's status conditions should report the scheduling failure.
- A router pod could be scheduled to a node where the configured port was in use by something other than a pod, such as a system daemon (the Kubernetes scheduler only checks for conflicts with ports allocated by Kubernetes); in this case, the ingresscontroller's status conditions should reflect the pod's failure to start.
- A router pod could be scheduled and start listening on a port inside the nodeport range; in this case, a subsequently created nodeport service could by chance be assigned the in-use port and fail to be configured on the host with the router pod.
N/A.
Spurious 'n'.

We could document here how to check the ingresscontroller's status conditions, router pod logs, and SDN/OVN logs to diagnose the aforementioned failure modes, and how to remediate them.
### Upgrade / Downgrade Strategy

N/A.
There shouldn't be any issues when upgrading to a version of OpenShift with this enhancement, but there could be problems if the user downgraded after creating ingresscontrollers using this enhancement.
## Drawbacks

N/A.
Additional operational complexity, which will increase the needed test matrix, frustration for users, and volume of customer cases.
## Alternatives

N/A.
An alternative would be to continue to guide users to the NodePortService endpoint publishing strategy type. Another alternative (or variation on that alternative) would be to augment the NodePortService strategy to allow the user to specify requested ports within the nodeport range. I understand that some users have been less than completely satisfied with NodePortService, but it has been our answer for years, so it would be useful to document why we have decided now to add this enhancement.
How can the cluster ingress operator find an available port from the nodeport range in a race-free way?
From the docs:

> If you want a specific port number, you can specify a value in the nodePort field. The control plane will either allocate you that port or report that the API transaction failed.

I think @Miciah was saying that the same configuration could be added to the node port strategy instead of trying to expose the ports through the host network.
I see; at the level of the contract with the user of the API, specifying host ports or node ports is equally prone to conflicts.
Then let me ask this timely question: why was this not considered as a counter-proposal to the customer need?
Why are customers more satisfied with the hostNetwork publishing strategy? Performance (fewer hops)?
@Miciah, @arjunrn & @alebedev87: from the information we got from the PMs (@lmzuccarelli + @Deepthidharwar), other customers are still interested in implementing this change for hostNetwork.
```go
	// A load balancer can send HTTP probes to this port on a given
	// node with the path /healthz/ready to determine whether the
	// ingress controller is ready to receive traffic on the node.
	// For proper operation, the load balancer must not forward traffic // to a node until /healthz/ready reports ready, and the load
```
Suggested change:

```diff
- // For proper operation, the load balancer must not forward traffic // to a node until /healthz/ready reports ready, and the load
+ // For proper operation, the load balancer must not forward traffic
+ // to a node until /healthz/ready reports ready, and the load
```
```go
	// +kubebuilder:validation:Maximum=65535
	// +kubebuilder:validation:Minimum=1
	// +kubebuilder:default=80
	HTTPPort int `json:"HTTPPort"`
```
This field and the other port fields should have type `int32`; see https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md#primitive-types.
Cluster administrators should watch the ingress controller status after updates:
in case the router pod cannot be scheduled due to port unavailability (or any other scheduling concerns), the ingress controller's conditions will report the scheduling failure.
In such a case, the cluster administrator might need to inspect other pods or daemonsets that use the same port. Router pod logs and OVN/SDN logs might be helpful to identify the root cause of the scheduling failure.
I would say "pods and system daemons"; a likely source of conflicts is running daemons on the host that aren't managed by OpenShift. `ss` or `netstat` would be an easy way to check for conflicts.
We actually have a host-port registry -- https://github.com/openshift/enhancements/blob/master/dev-guide/host-port-registry.md -- as well as user-facing documentation (https://github.com/openshift/openshift-docs/blob/main/modules/installation-network-user-infra.adoc) for which ports we "claim". So end-user documentation should reference those ranges as being off-limits.
Cluster administrators might also encounter issues if an ingresscontroller with the HostNetwork strategy is using a port from the nodePort range: if a nodeport service attempts to use the same port, the router pod will not be able to get scheduled correctly. The nodePort ingress controller's conditions should report the scheduling failure.
If a nodeport service attempts to use a port that the ingresscontroller is already using, then the nodeport service may not work properly. I know that OVN-Kubernetes will log the conflict in its pods; I'm not sure about openshift-sdn's failure mode.
```yaml
    status: "True"
    type: PodsScheduled
```
In case of failure to schedule pods, the operator would report `PodsScheduled=False` (I hope).
Logs reporting the failure to bind to the requested port on the underlying host should be found.

c. Checking SDN/OVN logs

Check the logs from the ovn/sdn pods on the nodes where the router pods are running for errors related to port conflicts.
It would be helpful to identify the pods to check here. You can ask the SDN team if you need help figuring out the appropriate namespace or pod names.
* Advise users to use the NodePortService endpoint publishing strategy type.
* Augment the NodePortService strategy to allow the user to specify requested ports within the nodeport range.
Usually this section will include some explanation of why the alternatives were rejected. Do we have that information? Maybe it's as simple as that these alternatives have been rejected by a subset of users who require HostNetwork for unspecified reasons. If we do know a reason why these alternatives were rejected, this would be the place to record it.
```go
	// +kubebuilder:validation:Maximum=65535
	// +kubebuilder:validation:Minimum=1
	// +kubebuilder:default=443
	HTTPSPort int `json:"HTTPSPort"`
```
It's not the API definition, so it's less important; however, to match the example below (and most API examples), JSON tag names should be lower camel case.
```sh
oc logs -f ds/ovnkube-node -n openshift-ovn-kubernetes -c ovnkube-node
W0301 17:17:48.060076    5734 port_claim.go:119] PortClaim for xxxx: xxxx/xxxxxxxx on port: 32634, err: listen tcp :32634: bind: address already in use
E0301 17:17:48.060097    5734 port_claim.go:145] Error claiming port for service: xxxx/xxxxxxxx: listen tcp :32634: bind: address already in use
```
This needs to be a shell command (`sh`).
Updated. Thanks
/approve
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: Miciah. The full list of commands accepted by this bot can be found here. The pull request process is described here.
/label tide/merge-method-squash
/lgtm

/refresh
This PR adds a proposal for adding binding options for the Ingress API.