
Causing a memory leak (and crash) on Google App Engine #5

Closed
sudhirj opened this issue Jan 31, 2014 · 11 comments


sudhirj commented Jan 31, 2014

I'm trying to use imaging in an App Engine app, and after resizing about 6 images the application shuts down because of excessive memory usage. This usually happens because of goroutines that don't exit, so that might be the case here. Is there a way to turn off parallel computation via a flag or option for environments like this? Alternatively, we could make sure that all spawned goroutines always exit cleanly.


sudhirj commented Jan 31, 2014

Currently fixed by running runtime.GC() at the end of every request - that sounds very inefficient, though.

disintegration (Owner) commented

@sudhirj
Hi,
There is a flag, but it's not public currently. Could you please clone the repository, edit the file parallel.go
(https://github.com/disintegration/imaging/blob/master/parallel.go#L9)
and set parallelizationEnabled to false, to see if that solves the problem?

If so, I'll try to find what causes the memory leak. I use sync.WaitGroup to ensure that all the goroutines have finished, but maybe there is a bug in my code.

Maybe I'll also make the parallelizationEnabled option public.
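For context, a simplified sketch of what such a parallel helper with a disable flag might look like. This is not the library's actual parallel.go, just an illustration of the sync.WaitGroup pattern described above:

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
)

// parallelizationEnabled mirrors the unexported flag mentioned above.
var parallelizationEnabled = true

// parallel splits [0, dataSize) into one contiguous part per CPU and
// runs fn on each part in its own goroutine, waiting for all of them.
func parallel(dataSize int, fn func(start, end int)) {
	if !parallelizationEnabled {
		fn(0, dataSize)
		return
	}
	numParts := runtime.NumCPU()
	if numParts > dataSize {
		numParts = dataSize
	}
	partSize := (dataSize + numParts - 1) / numParts

	var wg sync.WaitGroup
	for start := 0; start < dataSize; start += partSize {
		end := start + partSize
		if end > dataSize {
			end = dataSize
		}
		wg.Add(1)
		go func(start, end int) {
			defer wg.Done() // counted as finished even if fn panics
			fn(start, end)
		}(start, end)
	}
	wg.Wait() // no goroutine outlives this call
}

func main() {
	sum := make([]int, 8)
	parallel(len(sum), func(start, end int) {
		for i := start; i < end; i++ {
			sum[i] = i * 2
		}
	})
	fmt.Println(sum[1], sum[7]) // prints: 2 14
}
```

With this structure, wg.Wait() guarantees all workers have exited before parallel returns, so lingering goroutines should not be the source of the memory growth.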

@ghost ghost assigned disintegration Jan 31, 2014

sudhirj commented Jan 31, 2014

Found it and tried that a while ago, but it has no effect on App Engine. I did some basic testing on my dev machine (under continuous load), and I don't think there's actually a memory leak, per se.

I think the problem is rather that on GAE an application is terminated as soon as it uses about 120MB of memory, and that problem also goes away when forcing the GC after every request. It might just be that the algorithm (I'm using Lanczos) uses a lot of memory without actually leaking it. Trying to optimize for memory might help, but I don't know if that's a priority.

disintegration (Owner) commented

I don't think the resampling filter affects memory usage much, but I'll test it. Image size is what matters: a 10-megapixel image takes 40MB of memory (32 bits per pixel). A two-pass resize needs two additional allocations: first, a temporary image with the changed width; second, the resulting image.

I'm not sure how Go decides when to run the garbage collection. Maybe forcing runtime.GC is the only way for your application.

Anyway, I'll try to find ways to reduce memory usage by Resize function.


sudhirj commented Jan 31, 2014

So just for my reference (I'm not familiar with image resizing algorithms), holding and resizing a 10MB image with a two pass filter will require 30MB of RAM altogether? That would definitely explain the App Engine problem. It simply wouldn't have run the GC in time.

One possible way I can see would be to forcibly deallocate the intermediate objects once the operation is finished. I don't know if Go has an equivalent of destroy() or free(), though.

disintegration (Owner) commented

> So just for my reference (I'm not familiar with image resizing algorithms), holding and resizing a 10MB image with a two pass filter will require 30MB of RAM altogether?

Not quite right. For example, when resizing a 3000x2000 image to 150x100: in the first pass, a 150x2000 intermediate is allocated; in the second pass, a 150x100 image is allocated (the image returned from the function). The temporary 150x2000 will be freed as soon as the gc collects it.


sudhirj commented Jan 31, 2014

Ah, right. Got it.

> Temporary 150x2000 will be freed as soon as the gc collects them.

Forcing this collection manually before returning, instead of waiting for the GC, would automatically solve half the memory problem. I'm trying to see if there's a way to do that.

disintegration (Owner) commented

There are no such things as destroy or free in Go. I think runtime.GC() is the only way.


zak905 commented Oct 1, 2018

Just for future users who may find this useful: I also had the same issue with imaging.Blur, and I got better results by running runtime.GC(). The latency is still volatile, though, and depends on whether the server is handling other requests at the same time.

After doing some profiling, I saw that dst := image.NewNRGBA(image.Rect(0, 0, src.w, src.h)) at https://github.com/disintegration/imaging/blob/master/effects.go#L36 accounts for about 80% of the allocated memory (111MB of 138MB). Since there is not much to do besides runtime.GC(), I am thinking of trying gocv, a Go wrapper for OpenCV. It wraps the C implementation, which hopefully frees memory as soon as the processing is done.


sudhirj commented Oct 1, 2018

In the years since this discussion, the Go garbage collector has gotten so good that just calling runtime.GC() is cheap and quite viable. As time goes by, it should become practically as cheap as manual deallocation.


zak905 commented Oct 3, 2018

runtime.GC() was not enough in my case. For information purposes: I tried GaussianBlur from gocv, and it does better in terms of memory consumption, but the Docker image is now much bigger (more than 10x). I am still using other functionality like contrast and brightness without issues so far.
