Nginx Controller using endpoints instead of Services #257
Comments
The old docs explained:
Ah, okay. Thanks for that; I didn't find it in my searches. Do you know whether there are efforts under way to make it more responsive to the readiness and liveness checks?
@Nalum yes. We need to release a final version before changing the current behavior.
@aledbf anything I can do to help with this?
Just dropping these comments here:
OK, I was taking a look at this: https://github.com/yaoweibin/nginx_upstream_check_module. Maybe we can open a feature request to configure this, but I don't think that's what this issue is about. Regarding using Services instead of Pods, we also have to bear in mind that Service IPs are valid only inside a Kubernetes cluster, while Pod IPs might be valid across your network (using Calico or some other solution). Anyway, the ingress gets the upstream IPs from the Service (it connects to the Service, watches it, and checks for Pod IP changes), so it might make sense to health-check the upstream Pods with the module above instead of using the Service IP :) @aledbf should we open a feature request for the upstream health-check module, or keep this issue open? Thanks!
@rikatz correct me if I'm wrong, but from the brief look I took at that repo it doesn't appear to use the Kubernetes health checks. I think it would be better to take advantage of those rather than adding additional checks, so that nginx is more responsive to changes in the service.
@rikatz adding an external module for this just duplicates the probes feature already provided by Kubernetes.
@rikatz what this issue is really about is how we update the configuration (on changes to pods) without reloading nginx.
Yep, the Kubernetes health check is the best approach; I was just thinking about an alternative :) I will also take a look at how to change the config without reloading nginx.
@aledbf and what about this module: https://github.com/yzprofile/ngx_http_dyups_module, with the ingress adding/removing upstreams via HTTP requests? This is the same approach NGINX Plus uses to add/remove upstreams without reloading or killing the nginx process.
@rikatz I tried that module about a year ago (without success). Maybe we should check it again.
@aledbf I did some quick checks here and it worked 'fine', but the following situations still reload nginx:
I couldn't verify whether NGINX gets reloaded on each upstream change or not. So, is this upstreams module still applicable to the whole problem? I can start writing a PoC of this approach (it involves changing nginx.tmpl, the core, and a lot of other things), but if you think it's still a valuable change, we can do it :D Thanks
Hi @chrismoos, |
Closing. It is possible to choose between endpoints or services using a flag. |
@aledbf Hi, can you tell me which flag I should use to make service-upstream the default? I can't find it in the documentation or with -h.
@mbugeia It's |
@jordanjennings that link is broken now, here's a working link: |
No, sorry. One of the reasons is that adding lua again implies that it will work only on some platforms.
It might be a stupid question on my part, but are we HUP'ing the process? (http://nginx.org/en/docs/control.html) I've also seen that we could send a USR2 signal (https://www.digitalocean.com/community/tutorials/how-to-upgrade-nginx-in-place-without-dropping-client-connections), but that might overload NGINX with stale connections, right?
@rikatz the USR2 procedure creates several new problems:
This is likely improved by using #2174 |
@montanaflynn that link is a 404; here's the working link: https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/nginx-configuration/annotations.md#service-upstream
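For readers following the docs link above, a minimal sketch of an Ingress using that annotation might look like the following (the resource names, host, and port are placeholders, not taken from this thread):

```yaml
# Hypothetical Ingress manifest; demo-ingress, demo-svc, and the host are placeholders.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-ingress
  annotations:
    # Route via the Service's ClusterIP instead of the individual endpoint (Pod) IPs.
    nginx.ingress.kubernetes.io/service-upstream: "true"
spec:
  ingressClassName: nginx
  rules:
    - host: demo.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: demo-svc
                port:
                  number: 80
```

With this annotation, kube-proxy (rather than the controller's upstream list) decides which Pod receives each connection.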
When I use the annotation I still see all the Pod IPs as endpoints. Shouldn't I see the Service's ClusterIP? I see this:
@michaelajr make sure you're using nginx.ingress.kubernetes.io/service-upstream -- the shorter form no longer works for nginx-specific ingress settings by default. |
Does anyone know why the docs indicate that sticky sessions won't work if the …

EDIT: It looks like this is due to the fact that Kubernetes session affinity is based on client IPs. By default, the NGINX ingress obscures client IPs because the Service it creates sets …
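As a sketch of the interaction described above (assuming the affinity mechanism in question is the Service's `sessionAffinity: ClientIP` field; the original comment is truncated, so this is an inference, not a quote):

```yaml
# Hypothetical backend Service relying on client-IP session affinity.
apiVersion: v1
kind: Service
metadata:
  name: demo-svc
spec:
  selector:
    app: demo
  # Kubernetes pins a given client IP to one Pod. If the ingress controller's
  # own Service does not preserve the real client source IP (e.g. it uses
  # externalTrafficPolicy: Cluster), all requests appear to come from node or
  # controller addresses, and this affinity key stops distinguishing clients.
  sessionAffinity: ClientIP
  ports:
    - port: 80
      targetPort: 8080
```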
I too use
Same here and I would really like to understand whether the directive is actually working. Right now it seems to me that it isn't. |
For anyone finding this page years later, you can confirm the change is working by looking at the ingress-nginx logs for traffic to the ingress in question. By default, the logging will show the backend IP that handled the request. It's either an endpoint IP or service IP depending on the configuration. https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/log-format/
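The log-format docs linked above describe a `log-format-upstream` key in the controller's ConfigMap; a sketch of using it to surface the backend address might look like this (the ConfigMap name and namespace depend on how the controller was installed, so treat them as placeholders):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller   # name/namespace vary by install method
  namespace: ingress-nginx
data:
  # $upstream_addr shows which backend handled the request: a Pod (endpoint)
  # IP by default, or the Service ClusterIP when service-upstream is in effect.
  log-format-upstream: '$remote_addr - [$time_local] "$request" $status upstream: $upstream_addr'
```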
Instead of having NGINX maintain its own list of upstreams, we want it to use the service ClusterIP so that k8s can handle e.g. cases where pods are evicted. Without this we end up with e.g. 5xx errors on deploys. See kubernetes/ingress-nginx#257 for more details.
Just so people coming to this issue have considered it: we tried rolling out … We've decided to roll back to using the default …
@narqo thanks for the update
@narqo did you get it working with |
Yes, it worked in our tests (AWS EKS 1.21, ingress-nginx v1.3.1, deployed and configured via Helm). We confirmed that the change (and its rollback) took effect by observing changes in the nginx controller's access logs, where we log the details of the request's upstream.
This is exactly the issue we were looking at when we experimented with
Honestly, I would even be happy with something like a 5-10 second … The next thing I'm planning to try is setting …
Welp, turns out the actual problem I was running into was related to new pods coming up slowly rather than old pods terminating. Apologies for misidentifying the issue! |
* flyteadmin http port
* flyteadmin grpc port
* flyteconsole grpc port

This is necessary because the ingress may be configured in a way that sends TLS traffic to internal Flyte services. Istio uses port names to determine traffic handling, and may therefore assume an appProtocol of http even though traffic from ingress -> flyteadmin is actually https. This misconfiguration prevents any traffic from flowing through the ingress to the service. The flyteadmin http and grpc ports *are* accessible using `http` and `grpc` values for appProtocol respectively within the cluster, but as soon as traffic travels between the ingress and the service those settings will not work. The most "compatible" setting is `tcp`, which works for any network stream.

- Adds the nginx.ingress.kubernetes.io/service-upstream: "true" annotation (see "Nginx Controller using endpoints instead of Services", kubernetes/ingress-nginx#257, and kubernetes/ingress-nginx@main/docs/user-guide/nginx-configuration/annotations.md#service-upstream)

Signed-off-by: noahjax <noah.jackson@dominodatalab.com>
Signed-off-by: ddl-ebrown <ethan.brown@dominodatalab.com>
Is there any reason as to why the Nginx controller is set up to get the endpoints that a Service uses rather than using the Service?
When I roll out updates to a deployment, some requests are sent to pods that are being terminated, which results in 5XX errors because nginx still thinks they're available. Wouldn't using the Service take care of this issue?
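The rollout race described above is commonly mitigated at the workload level, independent of the ingress controller; a sketch (not from this thread — the names, image, probe path, and sleep duration are all placeholders) might look like:

```yaml
# Hypothetical Deployment fragment. The preStop sleep gives the control plane
# time to remove the terminating Pod from the Service's endpoints before the
# container stops serving, and the readinessProbe ensures new Pods are only
# added to the endpoints once they can actually handle traffic.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo
spec:
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      containers:
        - name: app
          image: demo:latest        # placeholder
          readinessProbe:
            httpGet:
              path: /healthz        # placeholder health endpoint
              port: 8080
          lifecycle:
            preStop:
              exec:
                command: ["sleep", "10"]
```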
I'm using this image:
`gcr.io/google_containers/nginx-ingress-controller:0.8.3`
I'd be happy to look into changing this so it works with Services rather than Endpoints, if that makes sense to everyone.