Not closing connections (CLOSE_WAIT) #11
Hmm, it seems I cannot reproduce this (this container has been running for ~2 weeks):
Are you on the latest version? A leak was fixed in 0.3.1.
Still seeing this with the latest version. When I do an lsof on PID 1 I get the following:
When I do a general lsof within the container I get the following:
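For reference, a one-liner along these lines can tally the TCP states lsof reports for PID 1 (a minimal sketch, assuming lsof and awk are available inside the container):

```sh
# Tally PID 1's TCP connections by state (ESTABLISHED, CLOSE_WAIT, ...).
# -n/-P skip DNS/port-name lookups; -a ANDs the -p and -i filters;
# NR>1 drops lsof's header row; the last field is the state.
lsof -nP -p 1 -a -iTCP | awk 'NR>1 {print $NF}' | sort | uniq -c | sort -rn
```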
If I restart the container and run this again I get the following:
And with lsof on PID 1 I get the following:
I have also anecdotally noticed that when AutoKuma is running I get much less responsiveness from uptime-kuma.
Hi, this seems to be as expected now: there are no connections in the CLOSE_WAIT state, and it's normal to see connections in lsof for some time after they have already been closed. As long as the number doesn't keep increasing, everything is fine. You might want to try increasing the interval.
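If the interval is configured through an environment variable, the change amounts to recreating the container with a larger value. This is a sketch only: the variable name AUTOKUMA__SYNC_INTERVAL and the image path are assumptions, check the AutoKuma README for the real ones.

```sh
# Hypothetical: raise the sync interval to 60 seconds.
# AUTOKUMA__SYNC_INTERVAL and the image path are assumptions, not confirmed here.
docker run -d --name autokuma \
  -e AUTOKUMA__SYNC_INTERVAL=60 \
  ghcr.io/bigboot/autokuma:latest
```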
So I increased the interval to 60 seconds, and today I took a look from the uptime-kuma container and noticed 26510 connections/responses from AutoKuma's container IP, all sitting in the ESTABLISHED state.
I then went into the AutoKuma container and saw 79728 connections in ESTABLISHED from that container... I only keep looking into this because uptime-kuma's performance goes to **** and the fix seems to be to restart both AutoKuma and uptime-kuma. Once I do this, the performance of uptime-kuma returns to normal...
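A sketch of one way to produce such a per-peer count with ss (assuming iproute2's default column layout; IPv4 addresses assumed for the port-stripping step):

```sh
# Count ESTABLISHED TCP connections grouped by peer address.
# With a state filter, ss drops the State column, so the peer is column 4.
ss -tn state established | tail -n +2 \
  | awk '{print $4}' | cut -d: -f1 | sort | uniq -c | sort -rn
```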
Hi, can you please give a few more details about your setup? Like I said, I can't reproduce this here. Specifically:
Here's my lsof output in the uptime-kuma container:
And in the AutoKuma one:
Here is my abbreviated Docker file:
Kuma
autoheal
Simple network in this case; we're just using an internal Docker network and addressing via hostname. I was using Traefik at one point but not any more... same behavior with both, though.
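As a quick sanity check on that kind of setup (the container and network names below are assumptions based on this thread), hostname resolution over the internal network can be verified with something like:

```sh
# Confirm the uptime-kuma container resolves the autokuma hostname,
# and see which containers actually share the network.
docker exec uptime-kuma getent hosts autokuma
docker network inspect <network-name>
```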
Please pull the AutoKuma image and try again; you are still using version 0.3.1.
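Updating usually means re-pulling the image and recreating the container; a plain restart keeps running the old image. A sketch, with the image path as an assumption:

```sh
# Pull the newer image, then recreate (not just restart) the container.
docker pull ghcr.io/bigboot/autokuma:latest
docker compose up -d autokuma   # recreates the service with the new image
```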
Seemed to be better for a bit, but today I checked and there were 28000 connections... logs here: https://gist.github.com/johntdyer/5b77172e60f08e5e5b131fd05fb18e66
So I think the connection issue is a symptom, not a cause. Uptime Kuma seems to lose its mind when it has tens of thousands of connections to AutoKuma. I checked from both containers and they both show the same connections. When I restart AutoKuma, everything is fine for a while.
This is fixed on master; connections were not closed correctly when an error occurred while connecting. By the way, those are neither 28000 nor tens of thousands of connections, more like ~2000.
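One way the raw line counts can overstate things is that the same socket may appear on multiple lsof rows; counting distinct endpoint pairs instead gives a closer estimate (a sketch, assuming lsof's default column layout where the NAME field is column 9):

```sh
# Count distinct TCP endpoint pairs rather than raw lsof rows.
lsof -nP -iTCP | awk 'NR>1 {print $9}' | sort -u | wc -l
```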
This seems to have resolved itself. Thank you very much for your help and efforts on this project!
It appears the app isn't closing connections, which is causing a leak of stale HTTP connections; at this moment I see 30600 on my container.