Running in container, memory never released #314

Open
aroundthecode opened this issue Jun 16, 2020 · 7 comments

@aroundthecode

Hi there,
I'm currently running imaginary as a Kubernetes pod. Everything is fine except that the container memory is never deallocated.

[screenshot: imaginary_ab]

Testing with Apache Benchmark causes memory to grow until the pod reaches its limit, becomes unresponsive, and gets killed by the Kubernetes probe.

[screenshot: imaginary]

Currently I'm running with the following parameters:

"containers": [
          {
            "name": "imaginary",
            "image": "h2non/imaginary",
            "args": [
              "-cors",
              "-enable-url-source",
              "-http-cache-ttl",
              "0",
              "-mrelease",
              "10"
            ],
            "env": [
              {
                "name": "MALLOC_ARENA_MAX",
                "value": "2"
              }
            ],
            "resources": {
              "limits": {
                "memory": "128Mi"
              },
              "requests": {
                "memory": "64Mi"
              }
            },

Is there any additional setting to have memory properly managed within a Docker container?

@h2non
Owner

h2non commented Jun 16, 2020

It seems reasonable to me that K8s is killing the pod, since the memory limit is quite low for a use case like image processing, even more so if you are benchmarking it with a dozen concurrent requests. You can try increasing the memory limit to 1GB - 2GB and see how it manages the memory pressure.

Historically, the Go runtime puts extra pressure on resident memory in situations where potentially dozens or hundreds of MB are allocated on the heap, as imaginary does in order to process images, passing around potentially large buffers. This is also influenced by the shared-memory design of the cgo bindings layer.

This is a related issue with some useful insights: #198.
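For reference, a minimal sketch of what the resources block could look like with a higher limit, in the same style as the spec quoted above (the 2Gi limit reflects the suggested range; the 512Mi request is only an illustrative assumption, not a recommendation from this thread):

"resources": {
  "requests": {
    "memory": "512Mi"
  },
  "limits": {
    "memory": "2Gi"
  }
}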

@h2non h2non added the question label Jun 16, 2020
@aroundthecode
Author

hi @h2non
thanks for your fast reply!

The memory limit was set that low just to reproduce the OOMKill behaviour faster.

I've already set the MALLOC_ARENA_MAX environment variable as suggested in the linked issue, and log rotation is already managed by my K8s cluster. Is there anything more I have to set?

Being in a K8s cluster, I'd like to scale imaginary pods horizontally, with a K8s autoscaler providing more pods when needed (and removing them once done), rather than allocating more RAM (I've seen up to 16GB mentioned in the other issue). I'll still try your suggested 2GB setup, but if memory is never deallocated I expect this to make no difference :(
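For what it's worth, a minimal sketch of a memory-based HorizontalPodAutoscaler along those lines, kept in JSON to match the spec above (the imaginary Deployment name, the 2/10 replica bounds, and the 80% target are all assumptions for illustration):

{
  "apiVersion": "autoscaling/v2beta2",
  "kind": "HorizontalPodAutoscaler",
  "metadata": {
    "name": "imaginary"
  },
  "spec": {
    "scaleTargetRef": {
      "apiVersion": "apps/v1",
      "kind": "Deployment",
      "name": "imaginary"
    },
    "minReplicas": 2,
    "maxReplicas": 10,
    "metrics": [
      {
        "type": "Resource",
        "resource": {
          "name": "memory",
          "target": {
            "type": "Utilization",
            "averageUtilization": 80
          }
        }
      }
    ]
  }
}

Note that memory-based autoscaling only helps if pods actually release memory between load peaks, which is exactly the behaviour in question here.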

@h2non
Owner

h2non commented Jun 18, 2020

Please specify which versions of libvips and imaginary you are running.

@aroundthecode
Author

I'm running the latest Docker image:

h2non/imaginary                                         latest              c579709005d3        12 days ago         226MB
/usr/local/bin/imaginary -v    
1.2.2
Name: vips
Description: Image processing library
Version: 8.9.2

@batiste
Contributor

batiste commented Jul 13, 2020

Hi! I think I am experiencing a similar issue using the latest released image version of imaginary (imaginary:1.2.2) running in Kubernetes (Google Cloud managed).

[screenshot: pod memory usage]

@batiste
Contributor

batiste commented Jul 27, 2020

Setting MALLOC_ARENA_MAX=2 solved the issue for us. The memory usage of each pod is now stable at around 500MB.

@petslane

petslane commented Dec 7, 2020

I also have problems with memory. I have MALLOC_ARENA_MAX set to 2 and I'm using the command arguments -concurrency 10 -mrelease 5, with 2 imaginary pods serving requests and autoscaling enabled.
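For context, the relevant container fragment for the setup described above would look roughly like this, in the same style as the spec earlier in this issue (only the flags and env var quoted in this comment are shown):

"args": [
  "-concurrency",
  "10",
  "-mrelease",
  "5"
],
"env": [
  {
    "name": "MALLOC_ARENA_MAX",
    "value": "2"
  }
]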


Using JMeter I ran 2 concurrent requests for 3 minutes against imaginary 1.1.3. Here are the Grafana CPU and memory graphs (I think the CPU is about a 4m average, which is why it gradually increases and decreases):

[Grafana CPU and memory graphs]

2 imaginary pods took about 3.5GB of memory just to serve the same image to 2 concurrent users!


The same 2-concurrent-user test for 3 minutes, but this time using imaginary 1.1.1:

[Grafana CPU and memory graphs]

This time memory usage is about 500MB, 6 times smaller! But memory is never released.


Tested with 50 concurrent users for 3 minutes with imaginary 1.1.1:

[Grafana CPU and memory graphs]

With 50 concurrent users (actually 10 per pod because of the -concurrency 10 argument), CPU usage increased 2 times, but memory usage is still less than 2 times compared to 1.1.3. Compared to the previous 1.1.1 test, memory usage increased about 5 times. But again, memory is never released.


The good news is that if I repeat the last test 3 times with some cool-down time in between, the overall memory usage does not increase; it maybe even decreases a little bit:

[Grafana memory graph]


And the last test, with 50 concurrent users and a 10m duration:
[Grafana CPU and memory graphs]
