Running in container, memory never released #314
Comments
It seems reasonable to me that K8s is killing the pod, since the memory limit is quite low for a use case like image processing, even more so if you are benchmarking it with a dozen concurrent requests. You can try increasing the memory limit to 1GB - 2GB and see how it handles the memory pressure. Historically, the Go runtime puts extra pressure on resident memory in situations where dozens or hundreds of MB are potentially allocated on the heap, as imaginary does in order to process images, passing around potentially large buffers. This is also influenced by the … This is a related issue with some useful insights: #198.
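As a rough sketch of the two suggestions above (raising the memory limit and constraining glibc arenas via MALLOC_ARENA_MAX, per #198), a hypothetical Kubernetes Deployment excerpt might look like this; the container name, image tag and sizes are illustrative placeholders, not settings taken from this deployment:

```yaml
# Hypothetical Deployment excerpt: raise the memory limit and cap glibc
# malloc arenas, as suggested above and in #198.
spec:
  template:
    spec:
      containers:
        - name: imaginary
          image: h2non/imaginary:latest
          env:
            - name: MALLOC_ARENA_MAX   # limit glibc arenas to reduce RSS fragmentation
              value: "2"
          resources:
            requests:
              memory: "512Mi"          # placeholder request
            limits:
              memory: "2Gi"            # 1-2GB headroom for concurrent image processing
```

MALLOC_ARENA_MAX=2 is the value reported to help in #198 and later in this thread.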
Hi @h2non, the memory limit was set that low just to reproduce the OOMKill behaviour faster. I've already set the MALLOC_ARENA_MAX environment variable as suggested in the linked issue, and log rotation is already managed by my K8s cluster. Is there anything more I have to set?
Being in a K8s cluster, I'd like to scale imaginary pods horizontally, with a K8s autoscaler providing more pods as needed (and deallocating them once done), rather than allocating more RAM (I've seen up to 16GB in the other issue). However, I'll try your suggested 2GB setup, but if memory is never deallocated I expect this to make no difference :(
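For the horizontal-scaling approach mentioned above, a HorizontalPodAutoscaler is the usual mechanism; a minimal sketch, assuming a Deployment named imaginary and purely illustrative replica counts and thresholds:

```yaml
# Hypothetical HPA: add/remove imaginary pods based on memory utilisation
# instead of giving each pod a very large memory limit.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: imaginary
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: imaginary            # assumed Deployment name
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: 75   # placeholder threshold
```

Note that if each pod's RSS never shrinks, scale-down will rarely trigger on a memory metric, which matches the concern raised above.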
Please specify which versions of libvips and imaginary you are running.
I'm running the latest Docker image.
Setting MALLOC_ARENA_MAX=2 solved the issue for us. Memory usage for each pod is now stable at around 500MB.
I also have problems with memory. I have … Using JMeter I ran 2 concurrent requests for 3 minutes, using imaginary 1.1.3. Here are the Grafana CPU (I think it's about 4m avg, which is why it gradually increases and decreases) and memory graphs: 2 imaginary pods took about 3.5GB of memory just to serve the same image to 2 concurrent users!

Same 2-concurrent-user test for 3 minutes, but this time using imaginary 1.1.1: this time memory usage is about 500MB, 6 times smaller! But memory is never released.

Tested with 50 concurrent users for 3 minutes with imaginary 1.1.1: with 50 concurrent users (actually 10 per pod because of …)

Good news is that if I repeat the last test 3 times with some cool-down time in between, then overall memory usage does not increase; it maybe even decreases a little bit.
Hi there,
I'm currently running imaginary as a Kubernetes pod. Everything is fine except that the container's memory/cache is never deallocated.
Testing with Apache Benchmark causes memory to grow until the pod reaches its limit, becomes unresponsive, and gets killed by the Kubernetes probe.
Currently I'm running with the following parameters:
Is there any additional setting to have memory properly managed within a Docker container?
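For context, the failure mode described above (memory grows to the limit, the pod becomes unresponsive, the probe fails, the kubelet restarts it) corresponds to a container spec along these lines; this is a minimal sketch with placeholder numbers, assuming imaginary's default port 9000 and its /health endpoint, not the reporter's actual parameters:

```yaml
# Illustrative container spec: a low memory limit plus a liveness probe.
# When RSS approaches the limit the process stalls or is OOM-killed,
# the probe starts failing, and Kubernetes kills/restarts the pod.
containers:
  - name: imaginary
    image: h2non/imaginary:latest
    args: ["-p", "9000"]
    resources:
      limits:
        memory: "256Mi"        # deliberately low, as in the reproduction described above
    livenessProbe:
      httpGet:
        path: /health          # assumes imaginary's health endpoint
        port: 9000
      periodSeconds: 10
      failureThreshold: 3
```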