Fix k8s memory leak #387
Conversation
Waiting for feedback from @nlf before merging.
Force-pushed a39d7ae to 2b50b8f
LGTM, a very significant improvement 👍
Hmm, I need more time on this...
It seems much better, but it still looks like memory usage is growing unbounded, albeit at a much slower rate...
Force-pushed 2b50b8f to 307b120
I just pushed a new version for testing purposes to Docker Hub:
OK, just deployed.
I've got the latest k8s tag deployed too and am monitoring memory growth; I'll report back later today.
The leak is still there, but much slower: I am seeing memory usage grow from 20MiB to 256MiB in 5 hours, and once it hits 256MiB I kill the process. It seems memory is leaking somewhere in k8s.client#WatchAll. I have a branch that watches endpoints rather than pods and replication controllers, and there the leak is slower still, since watch events fire far less often.
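A classic way for a watch loop like this to leak is a producer goroutine that keeps running (and retaining buffers) after its consumer has gone away. The sketch below shows that pattern and the usual fix, a stop channel selected on alongside every send. The names here (`watchEvents`, `Event`) are hypothetical illustrations, not the actual k8s.client#WatchAll API touched by this PR.

```go
package main

import "fmt"

// Event stands in for a single watch notification.
type Event struct{ Name string }

// watchEvents streams events until they are exhausted or stopCh is
// closed. Without the select on stopCh, a send to an abandoned
// channel would block forever, pinning the goroutine and everything
// it references -- one common source of slow memory growth in watch
// loops.
func watchEvents(events []Event, stopCh <-chan struct{}) <-chan Event {
	out := make(chan Event)
	go func() {
		defer close(out)
		for _, e := range events {
			select {
			case out <- e:
			case <-stopCh:
				return // consumer is gone; release the goroutine
			}
		}
	}()
	return out
}

func main() {
	stop := make(chan struct{})
	defer close(stop)
	for e := range watchEvents([]Event{{"pod-a"}, {"pod-b"}}, stop) {
		fmt.Println(e.Name)
	}
}
```

The key design point is that every blocking send has an escape hatch: closing `stop` unblocks the goroutine immediately, so nothing outlives the caller.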
I can't say for sure; our hosting provider had some issues, so we had a bunch of services restart because of that. If everything stays up OK today, I'll report back later.
@emilevauge I am still seeing the issue, but these changes are a big improvement, so I think it would be good to merge them in. For me the issue is solved when I am running these changes plus the work I am doing to build the backend from endpoints. I will be able to open a PR for that as soon as this is merged...
@errm OK ;) Let's merge this one then /cc @vdemeester :) |
Force-pushed 307b120 to d92f61b
Signed-off-by: Emile Vauge <emile@vauge.com>
Force-pushed d92f61b to 0f23581
I tend to agree; it's so significantly better than it was that it's worth merging as is to keep things moving forward. For what it's worth, of the 4 instances I'm running, only one appears to be leaking at all, and garbage collection seems to be fighting to keep it under control, so the growth is pretty minimal.
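The heap numbers quoted in this thread can be observed from inside a Go process with nothing but the standard library. This is a minimal monitoring sketch, not part of the PR itself; the helper name `heapInUseMiB` is mine.

```go
package main

import (
	"fmt"
	"runtime"
)

// heapInUseMiB reports the heap memory currently in use, in MiB,
// the figure to graph over time when hunting a slow leak.
func heapInUseMiB() float64 {
	var m runtime.MemStats
	runtime.ReadMemStats(&m)
	return float64(m.HeapInuse) / (1 << 20)
}

func main() {
	before := heapInUseMiB()

	// Allocate ~32 MiB and touch it so it stays reachable and
	// shows up in HeapInuse.
	buf := make([]byte, 32<<20)
	for i := range buf {
		buf[i] = byte(i)
	}

	after := heapInUseMiB()
	fmt.Printf("heap before: %.1f MiB, after: %.1f MiB\n", before, after)
	_ = buf
}
```

In a long-running process one would log this value periodically; a curve that climbs without ever flattening between GC cycles is the signature described in the comments above.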
Sounds good to me (based on the comments). |
This PR fixes memory leaks in the Kubernetes backend.
/cc @nlf
Fixes #377