
upstream hash by host #3722

Closed
LoicMahieu opened this issue Feb 5, 2019 · 10 comments
Labels: lifecycle/rotten (Denotes an issue or PR that has aged beyond stale and will be auto-closed.)

Comments

@LoicMahieu (Contributor)

I am not completely confident that this is a bug report, so do not hesitate to contradict me ;)

I have several ingresses and, for some of them, I need to stick to one backend.
The reason is that these apps are pretty old JSP apps that consume a lot of memory. By routing requests from the same HTTP host to the same backend, I save some memory.

At the time of 0.20.0, I set this annotation:

"nginx.ingress.kubernetes.io/upstream-hash-by": "$host"

It seemed to work well. But I recently upgraded from 0.20.0 to 0.22.0 and now the requests are balanced across all backends.

  • What's your opinion about upstream-hash-by = "$host"? Is there a good way to do this? Will the controller switch to another backend if the previous one is not healthy?

In 0.20.0, I observed some errors ((113: No route to host) while connecting to upstream) caused by the backends scaling up/down.

  • Is there a need to specify upstream-hash-by-subset and upstream-hash-by-subset-size?
    I tried them but it did not work (see the sketch below).
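
For illustration, the subset variant would be configured roughly like this (a sketch only; the annotation names come from the ingress-nginx docs, and the subset-size value is just an example):

metadata:
  annotations:
    nginx.ingress.kubernetes.io/upstream-hash-by: "$host"
    nginx.ingress.kubernetes.io/upstream-hash-by-subset: "true"
    nginx.ingress.kubernetes.io/upstream-hash-by-subset-size: "3"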

Thanks for your help. 👍

NGINX Ingress controller version: 0.22.0
Kubernetes version: v1.11.6-gke.3
Cloud provider or hardware configuration: GKE
Install tools: Helm

@JordanP (Contributor) commented Feb 6, 2019

You seem to be describing several problems.

Could you tell me more about "In 0.20.0, I observed some errors ((113: No route to host) while connecting to upstream) caused by the backends scaling up/down"? I might be seeing something like this too.

@LoicMahieu (Contributor, Author)

Hi @JordanP
Thanks for your response. Sorry, indeed my message was not clear.

The primary problem I want to solve is: stick requests from the same HTTP host to the same backend.
For this reason, I added this annotation to the desired Ingress:

"nginx.ingress.kubernetes.io/upstream-hash-by": "$host"

In 0.20.0, it worked well in my tests. But recently I scaled the backends up and then down again. After that, the ingress controller failed to proxy the requests; the logs showed (113: No route to host) while connecting to upstream. A restart of the controller pods solved the problem.

In 0.22.0, requests no longer seem to stick to the same backend based on the HTTP host. I tried scaling up/down and saw no more routing errors.

@diegows (Contributor) commented Feb 10, 2019

Hi @LoicMahieu, could you share all the annotations (or the whole Ingress object) you are using? I've modified some code related to the upstream-hash-by annotation; maybe I broke some specific use case.

If you don't need the "subset" feature, you don't need to add those annotations. It's disabled by default.

@LoicMahieu (Contributor, Author)

Hi @diegows

Annotations are quite simple:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/from-to-www-redirect: "true"
    nginx.ingress.kubernetes.io/upstream-hash-by: $host
  name: app-somedomain
spec:
  rules:
  - host: somedomain.com
    http:
      paths:
      - backend:
          serviceName: backend
          servicePort: 80
        path: /

@diegows (Contributor) commented Feb 11, 2019

You must use quotes in all the annotations, or the K8s API is going to drop the value somewhere before it reaches the Nginx controller.
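
For illustration, the quoted form would look roughly like this (same manifest as above, only the quoting of the annotation value differs):

metadata:
  annotations:
    nginx.ingress.kubernetes.io/upstream-hash-by: "$host"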

@LoicMahieu (Contributor, Author)

The value is quoted in our sources. I used kubectl get -o yaml and it prints it without quotes ;)
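
For what it's worth, YAML output from kubectl get -o yaml usually omits quotes when they are not syntactically required, so the stored annotation would come back looking roughly like this even if the value survived intact (a sketch, not actual output):

metadata:
  annotations:
    nginx.ingress.kubernetes.io/upstream-hash-by: $host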

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label (Denotes an issue or PR has remained open with no activity and has become stale.) on May 14, 2019
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot added the lifecycle/rotten label (Denotes an issue or PR that has aged beyond stale and will be auto-closed.) and removed the lifecycle/stale label on Jun 13, 2019
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot (Contributor)

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
