Partially requesting huge files causes zotonic to eat up huge amounts of memory #319

Closed
hce opened this Issue · 4 comments

@hce
hce commented

I have a 900 MB file that is served through resource_file_readonly. Requesting the whole file works just fine, but doing a partial request causes Zotonic to eat up all available memory until it finally crashes and is restarted by heart. (Tested on the latest commit, 0a1ac1e.)
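
To be clear, by "partial request" I mean an ordinary HTTP Range request, e.g. something like the following (the path and byte range are just an example):

```
GET /media/attachment/bigfile.bin HTTP/1.1
Host: example.com
Range: bytes=0-1048575
```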

@mworrell
Owner

Normally, we send large files in chunks. This prevents the out-of-memory scenario you see.

I suspect there is a mechanism in Webmachine to handle partial requests when the resource doesn't support it itself. In that case Webmachine must load the complete output to slice the requested part.

We have to check the Webmachine code, though I think we will need to rewrite some parts of resource_file_readonly to make this work in an acceptable way.
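
For reference, the chunked path looks roughly like this. This is only a minimal sketch of a Webmachine stream body (the `{stream, {Data, Next}}` form), not the actual resource_file_readonly code; the chunk size is illustrative and `filename_from_context/1` is a made-up helper:

```erlang
-define(CHUNK_SIZE, 65536).

provide_content(ReqData, Context) ->
    Path = filename_from_context(Context),            %% hypothetical helper
    {ok, Fd} = file:open(Path, [read, raw, binary]),
    {{stream, read_chunk(Fd)}, ReqData, Context}.

read_chunk(Fd) ->
    case file:read(Fd, ?CHUNK_SIZE) of
        {ok, Data} ->
            %% hand back this chunk plus a continuation for the next one
            {Data, fun() -> read_chunk(Fd) end};
        eof ->
            ok = file:close(Fd),
            {<<>>, done}
    end.
```

With a body like this the whole file never sits in memory at once; the problem is that the partial-request handling apparently bypasses it.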

@mworrell mworrell was assigned
@hce
hce commented

It might even be worth considering using the sendfile syscall where available (http://steve.vinoski.net/blog/2009/01/05/sendfile-for-yaws/) to serve resource_file_readonly requests and further enhance performance. What do you think?

@mworrell
Owner

Yes, I have actually been thinking about that, especially now that file:sendfile/2 is part of R15B:

http://www.erlang.org/doc/man/file.html#sendfile-2

As we are using our own fork of Webmachine, it shouldn't be too hard to include.
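
Something along these lines; only a sketch, assuming we can get at the underlying gen_tcp socket of the request (wiring that up in webzmachine is the real work). Note that file:sendfile/5 takes an offset and byte count, which maps nicely onto Range requests:

```erlang
%% Sketch only; Socket is assumed to be the raw gen_tcp socket of the request.
send_whole_file(Socket, Path) ->
    %% send the complete file via the sendfile syscall
    file:sendfile(Path, Socket).

send_range(Socket, Path, Offset, Length) ->
    %% for partial requests: needs a raw file handle, an offset and a byte count
    {ok, Fd} = file:open(Path, [read, raw, binary]),
    try
        file:sendfile(Fd, Socket, Offset, Length, [])
    after
        ok = file:close(Fd)
    end.
```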

@mworrell
Owner

The file-sending behavior of webzmachine has been redone, so this should now be fixed.

@mworrell mworrell closed this