
Backend maxconn config not working properly when running multiple ingress controllers #595

Closed
m4nu56 opened this issue Dec 19, 2023 · 1 comment
m4nu56 commented Dec 19, 2023

Hi all,
I have an HAProxy ingress controller set up as a Load Balancer on my k8s cluster.
Deployed with haproxytech/helm-charts, using the latest chart version 1.35.3 with haproxytech/kubernetes-ingress:1.10.10.

I've noticed that, with only one server registered to my backend, when I set the following config on my core.haproxy.org/v1alpha2 Backend:

  config:
    default_server:
      maxconn: 1
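For context, here is a minimal sketch of how that snippet might sit inside the full custom resource. The field layout and the resource name are assumptions and should be checked against the Backend CRD actually installed in the cluster:

```yaml
# Hypothetical surrounding resource for the snippet above; verify the
# spec layout against your installed core.haproxy.org/v1alpha2 CRD.
apiVersion: core.haproxy.org/v1alpha2
kind: Backend
metadata:
  name: my-backend          # placeholder name
spec:
  config:
    default_server:
      maxconn: 1            # per-server limit, enforced per HAProxy instance
```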

When I run only one HAProxy instance, the limit works as expected and I can see subsequent requests piling up in the queue.

But if I scale the ingress controller up to 2 replicas, a subsequent request has roughly a 50% chance of either piling up in the queue or being sent straight to the server.
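The observed behaviour is consistent with each controller replica running its own HAProxy process with independent connection counters, so the aggregate cap scales with the replica count. A small illustrative sketch (the function name is made up for this example):

```python
def effective_maxconn(per_instance_maxconn: int, controller_replicas: int) -> int:
    """Aggregate connection cap when each HAProxy instance enforces
    maxconn independently, with no shared state between replicas."""
    return per_instance_maxconn * controller_replicas

# With maxconn 1 and 2 controller replicas, up to 2 concurrent requests
# can reach the server, matching the "limit 1 but max 2" observation below.
print(effective_maxconn(1, 2))  # -> 2
```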

As you can see in the screenshot: limit is set to 1 but the max went to 2.

[screenshot: backend stats page showing limit 1, max 2]

I can also port-forward to the two ingress controllers' separate stats pages; shouldn't there be only one?

Do you know of any configuration that would prevent this unwanted behaviour?

Thanks for your help
Cheers

stale bot commented Jan 19, 2024

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

@stale stale bot added the stale label Jan 19, 2024
@stale stale bot closed this as completed Feb 2, 2024