Partially requesting huge files causes zotonic to eat up huge amounts of memory #319

hce opened this Issue · 4 comments

2 participants

hce commented

I have a 900 MB file that is served through resource_file_readonly. Requesting the whole file works just fine. Doing a partial request causes zotonic to eat up all available memory, until it finally crashes and is restarted by heart. (Tested on latest commit (0a1ac1e))

mworrell commented

Normally, we send large files in chunks. This prevents the out-of-memory scenario you see.
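A minimal sketch of what "sending in chunks" means here (module, function, and chunk size are hypothetical, not Zotonic's actual implementation): read a fixed-size chunk at a time and write it to the socket, so the full 900 MB file is never held in memory at once.

```erlang
%% Sketch: stream a file to a socket in fixed-size chunks.
%% Names and chunk size are illustrative, not Zotonic internals.
-module(chunked_send_sketch).
-export([send_file/2]).

-define(CHUNK_SIZE, 65536).  %% 64 KiB per read

send_file(Socket, Path) ->
    {ok, Fd} = file:open(Path, [read, raw, binary]),
    try
        send_chunks(Socket, Fd)
    after
        file:close(Fd)
    end.

send_chunks(Socket, Fd) ->
    case file:read(Fd, ?CHUNK_SIZE) of
        {ok, Bin} ->
            ok = gen_tcp:send(Socket, Bin),
            send_chunks(Socket, Fd);
        eof ->
            ok
    end.
```

Memory use stays bounded by the chunk size regardless of file size, which is what breaks down when a range request forces the whole body to be materialized first.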

I suspect there is a mechanism in Webmachine to handle partial requests when the resource doesn't support them itself. In that case Webmachine must load the complete output into memory to slice out the requested part.

We have to check the Webmachine code, though I think we will need to rewrite some parts of resource_file_readonly to let this work in an acceptable way.

@mworrell mworrell was assigned
hce commented

It might even be worth considering using the sendfile syscall, where available, to serve resource_file_readonly requests and further enhance performance. What do you think?

mworrell commented

Yes, I actually have been thinking about it, especially now that file:sendfile/2 is part of R15B.
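For reference, a hedged sketch of how `file:sendfile` could serve a byte range (the function name `serve_range` and the error handling are assumptions, not webzmachine code): `file:sendfile/5`, available since R15B alongside `file:sendfile/2`, lets the kernel copy bytes from the file directly to the socket without pulling them through the Erlang heap.

```erlang
%% Sketch: serve a byte range with file:sendfile/5 (R15B+).
%% The kernel copies Offset..Offset+Length directly to the socket,
%% so the data never passes through Erlang process memory.
serve_range(Socket, Path, Offset, Length) ->
    {ok, Fd} = file:open(Path, [read, raw, binary]),
    try
        {ok, _BytesSent} = file:sendfile(Fd, Socket, Offset, Length, [])
    after
        file:close(Fd)
    end.
```

This would address both problems at once: partial requests no longer require buffering the full file, and whole-file serving avoids userspace copies entirely.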

As we are using our own fork of Webmachine it shouldn't be too hard to include.

mworrell commented

The sendfile behavior of webzmachine has been redone; this should now be fixed.

@mworrell mworrell closed this