Nginx Controller using endpoints instead of Services #257

Closed

Nalum opened this issue Feb 10, 2017 · 29 comments

@Nalum (Contributor) commented Feb 10, 2017

Is there any reason why the Nginx controller is set up to get the endpoints that a Service uses rather than using the Service itself?

When I roll out updates to a Deployment, some requests get sent to pods that are being terminated, which results in 5XX errors because nginx still thinks they are available. Wouldn't using the Service take care of this?

I'm using this image: gcr.io/google_containers/nginx-ingress-controller:0.8.3

I'd be happy to look into changing this so it works with Services rather than Endpoints, if that makes sense to everyone.

@yissachar commented Feb 10, 2017

The old docs explained:

The NGINX ingress controller does not use Services to route traffic to the pods. Instead it uses the Endpoints API in order to bypass kube-proxy to allow NGINX features like session affinity and custom load balancing algorithms. It also removes some overhead, such as conntrack entries for iptables DNAT.
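
To make that concrete, here is a rough sketch (names, IPs and ports are made up) of the two objects involved: kube-proxy handles traffic sent to the Service's ClusterIP, while the controller reads the Endpoints object that Kubernetes maintains for the Service and renders the pod IPs directly into the nginx upstream.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: testapp                # illustrative name
spec:
  selector:
    app: testapp
  clusterIP: 10.96.0.42        # kube-proxy DNATs traffic sent to this virtual IP
  ports:
    - port: 443
      targetPort: 443
---
# The Endpoints object Kubernetes maintains for that Service; the ingress
# controller watches this and puts the pod IPs straight into the nginx upstream.
apiVersion: v1
kind: Endpoints
metadata:
  name: testapp                # Endpoints share the Service's name
subsets:
  - addresses:
      - ip: 192.168.103.2      # pod IPs, not the ClusterIP
      - ip: 192.168.203.66
    ports:
      - port: 443
```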

@Nalum (Contributor, Author) commented Feb 10, 2017

Ah okay.

Thanks for that; I didn't find it in my searches.

Do you know if there are efforts being made to make it more responsive to the readiness and liveness checks?

@aledbf (Member) commented Feb 10, 2017

Do you know if there are efforts being made to make it more responsive to the readiness and liveness checks?

@Nalum yes. We need to release a final version before changing the current behavior.
The idea (we need a POC first) is to use the Kong load balancer code, which allows updating the nginx upstreams without requiring a reload.

@Nalum (Contributor, Author) commented Feb 13, 2017

@aledbf is there anything I can do to help with this?

@Nalum (Contributor, Author) commented Feb 22, 2017

Just dropping these comments here:

aledbf Slack 11:59 AM 2017-02-22
@Nalum about that, I just need time 🙂. We need to release 0.9 first. I hope after beta 2 #303 we can release 0.9. After that I will start looking at the Kong load balancer Lua code. Basically this is the code we need to understand and "port": Kong/kong#1735

aledbf Slack 12:00 PM 2017-02-22
@Nalum please take this just as an idea and POC. We really need to find a way to avoid reloads and handle nginx upstreams better.

@rikatz (Contributor) commented Jun 7, 2017

OK, I was taking a look at this:

https://github.com/yaoweibin/nginx_upstream_check_module

Maybe we can open a feature request to make this configurable, but I don't think that's what this issue is about.

About using Services instead of Pods, we also have to keep in mind that Service IPs are only valid inside a Kubernetes cluster, while Pod IPs might be routable on your network (when using Calico or some other solution).

Anyway, the ingress controller gets the upstream IPs from the Service (it watches the Service and checks for Pod IP changes), so it might make sense to health check the upstream Pods with the module above instead of using the Service IP :)

@aledbf should we open a feature request for using the upstream healthcheck module, or keep this issue open?

Thanks!

@Nalum (Contributor, Author) commented Jun 7, 2017

@rikatz correct me if I'm wrong, but from the brief look I took at that repo it doesn't seem to use the Kubernetes health checks. I would think it better to take advantage of those rather than adding additional checks, so that nginx is more responsive to changes in the Service.

@aledbf (Member) commented Jun 7, 2017

@rikatz adding an external module for this just duplicates the probes feature already provided by Kubernetes.
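
For reference, a minimal sketch of the built-in mechanism being referred to; the image, port and /healthz path are placeholders. A pod whose readinessProbe fails is removed from the Service's Endpoints, and that is the signal the ingress controller already consumes.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: testapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: testapp
  template:
    metadata:
      labels:
        app: testapp
    spec:
      containers:
        - name: testapp
          image: registry.example.com/testapp:latest   # placeholder image
          ports:
            - containerPort: 443
          readinessProbe:            # failing pods drop out of the Endpoints
            httpGet:
              path: /healthz         # hypothetical health endpoint
              port: 443
              scheme: HTTPS
            periodSeconds: 5
            failureThreshold: 2
```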

@aledbf (Member) commented Jun 7, 2017

@rikatz what this issue is really about is how we update the configuration (changes in pods) without reloading nginx.

@rikatz (Contributor) commented Jun 7, 2017

Yeap, the Kube health check is the best approach. I was just thinking about an alternative :)

I'll also take a look at how to change the config without reloading nginx.

@rikatz (Contributor) commented Jun 9, 2017

@aledbf what about this module: https://github.com/yzprofile/ngx_http_dyups_module

The ingress controller could add/remove upstreams by making HTTP requests to it. This is the same approach NGINX Plus uses to add/remove upstreams without reloading or killing the nginx process.

@aledbf (Member) commented Jun 9, 2017

@rikatz I tried that module about a year ago (without success). Maybe we need to check it again.

@rikatz (Contributor) commented Jun 12, 2017

@aledbf I did some quick checks here and it worked 'fine'.

But the following situations still 'reload' nginx:

  • New vHost
  • New Path (changed Ingress)
  • New SSL Cert

I couldn't verify whether NGINX gets reloaded for each upstream change or not.

So, is this upstream module still applicable to the whole problem? I can start writing a PoC of this approach (it involves changing the nginx.tmpl, the core and a lot of other things), but if you think it's still a valuable change, we can do this :D

Thanks

@arikkfir commented Sep 27, 2017

Hi @chrismoos,
Would it be possible to also support this annotation in the Nginx ConfigMap? We have quite a few Ingress resources and would love to set it in a centralized place instead of on each one.

@aledbf (Member) commented Oct 8, 2017

Closing. It is possible to choose between endpoints or services using a flag.

@aledbf closed this Oct 8, 2017

@mbugeia commented Oct 11, 2017

@aledbf Hi, can you tell me which flag I should use to enable service-upstream by default? I can't find it in the documentation or with -h.

@jordanjennings commented Oct 11, 2017

@mbugeia It's ingress.kubernetes.io/service-upstream and you can find that on this page: https://github.com/kubernetes/ingress-nginx/blob/master/configuration.md#service-upstream

@montanaflynn commented Oct 26, 2017

@jesseshieh commented Nov 5, 2017

@Nalum @aledbf sorry to comment on a closed issue, but I was wondering what the status of using "the kong load balancer code that allows the update of the nginx upstreams without requiring a reload" is. Was there any progress made on that at all?

@aledbf (Member) commented Nov 5, 2017

Was there any progress made on that at all?

No, sorry. One of the reasons is that adding Lua again means it will only work on some platforms.

@rikatz (Contributor) commented Nov 5, 2017

It might be a stupid question, but are we 'HUP'ing the process? (http://nginx.org/en/docs/control.html)

I've also seen that we could send a 'USR2' signal (https://www.digitalocean.com/community/tutorials/how-to-upgrade-nginx-in-place-without-dropping-client-connections), but this might overload NGINX with stale connections, right?

@aledbf (Member) commented Nov 5, 2017

@rikatz the USR2 procedure creates several new problems:

  • doubles the resource usage
  • hard to coordinate
  • if you have client connections with keepalive you lose requests
    (I already tried this approach two years ago)

@mcfedr commented Apr 2, 2018

This is likely improved by using #2174

@jwatte commented Oct 17, 2018

@montanaflynn that link is 404, here's the now-working link: https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/nginx-configuration/annotations.md#service-upstream
(and that's very likely to rot, too -- google for nginx annotation service-upstream to find it whenever you next need this)

@michaelajr commented Mar 8, 2019

When I use the annotation I still see all the Pod IPs as endpoints. Shouldn't I see the Service's ClusterIP?

I see this:

Host  Path  Backends
----  ----  --------
*     *     testapp:443 (192.168.103.2:443,192.168.203.66:443,192.168.5.3:443)

@doubleyewdee commented Mar 18, 2019

@michaelajr make sure you're using nginx.ingress.kubernetes.io/service-upstream -- the shorter form no longer works for nginx-specific ingress settings by default.
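
For anyone landing here later, a minimal Ingress sketch using the full annotation prefix, written against the current networking.k8s.io/v1 API; the host, names and port are placeholders.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: testapp
  annotations:
    # Point the upstream at the Service's ClusterIP instead of the pod IPs.
    nginx.ingress.kubernetes.io/service-upstream: "true"
spec:
  ingressClassName: nginx            # depends on how the controller is installed
  rules:
    - host: testapp.example.com      # placeholder host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: testapp
                port:
                  number: 443
```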

@robbie-demuth commented Nov 14, 2019

Does anyone know why the docs indicate that sticky sessions won't work if the service-upstream annotation is used? Wouldn't the session affinity configuration on the Kubernetes Service itself be honored in that case?

EDIT

It looks like this is because Kubernetes session affinity is based on client IPs. By default, NGINX ingress obscures client IPs because the Service it creates sets externalTrafficPolicy to Cluster. Setting it to Local instead might resolve the issue, but I'm not quite sure I understand the implications - particularly regarding imbalanced traffic spreading.

https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/#preserving-the-client-source-ip
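
A hedged sketch of the two Service-side settings discussed above; names, namespaces and selectors are placeholders and depend on how the controller is installed. ClientIP affinity goes on the backend Service, while externalTrafficPolicy: Local goes on the controller's own Service to preserve the client source IP, at the cost of only routing to nodes that actually run a controller pod.

```yaml
# Backend Service with ClientIP session affinity.
apiVersion: v1
kind: Service
metadata:
  name: testapp
spec:
  selector:
    app: testapp
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800          # affinity window (the default)
  ports:
    - port: 443
      targetPort: 443
---
# The ingress controller's own Service; Local preserves the client source IP
# but only sends traffic to nodes running a controller pod (the imbalance concern).
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller     # placeholder; depends on your install
  namespace: ingress-nginx
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
  selector:
    app.kubernetes.io/name: ingress-nginx   # placeholder selector
  ports:
    - name: https
      port: 443
      targetPort: 443
```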

@zhangyuxuan1992 commented Jun 5, 2020

@michaelajr make sure you're using nginx.ingress.kubernetes.io/service-upstream -- the shorter form no longer works for nginx-specific ingress settings by default.

I also use nginx.ingress.kubernetes.io/service-upstream: 'true' but still see all the Pod IPs and ports instead of the ClusterIP.

/*.* *:8423 (192.168.102.25:8423,192.168.149.225:8423)

@VengefulAncient commented Sep 21, 2020

I also use nginx.ingress.kubernetes.io/service-upstream: 'true' but still see all the Pod IPs and ports instead of the ClusterIP.

/*.* *:8423 (192.168.102.25:8423,192.168.149.225:8423)

Same here and I would really like to understand whether the directive is actually working. Right now it seems to me that it isn't.
