OS: Arch Linux x86-64
Appmod that does that: http://pastebin.com/1K61xNAs
port = 8001
listen = 127.0.0.1
docroot = /home/foxbunny/code/gallery
appmods = <gallery, gallery>
dir_listings = true
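For context, the appmod named in the config above lives at the pastebin link; a minimal in-memory upload handler in that style might look roughly like this (a sketch only — the module name comes from the config, the function bodies are hypothetical, and it uses the standard `yaws_api:parse_multipart_post/1` API):

```erlang
%% Hypothetical sketch of a minimal upload appmod; the real code is at
%% the pastebin link above. Parses a multipart/form-data POST entirely
%% in memory (no temp_file), which is the case that triggers the crash.
-module(gallery).
-export([out/1]).

-include_lib("yaws/include/yaws_api.hrl").

out(Arg) ->
    case yaws_api:parse_multipart_post(Arg) of
        {result, Parts} ->
            %% All parts are now in memory as lists of bytes.
            save_parts(Parts),
            {html, "upload ok"};
        {cont, Cont, _PartsSoFar} ->
            %% More data is coming; ask Yaws to feed the next chunk.
            {get_more, Cont, undefined};
        {error, Reason} ->
            {html, io_lib:format("upload failed: ~p", [Reason])}
    end.

save_parts(_Parts) ->
    %% Placeholder: write the file data out here.
    ok.
```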
I've tried uploading a 12MB file and Yaws crashed with:
eheap_alloc: Cannot allocate 934157120 bytes of memory (of type "heap").
I still haven't gotten around to testing the smallest file size that causes this. Tested on both my desktop (2G RAM + 1G swap) and laptop (1.5G RAM + 1G swap).
Also tested the same with Yaws 1.88.
Also, with temp_file, there is no problem.
Obviously it's not very smart to skip the temp file when large uploads are expected, but requiring over 1 GB of RAM for a 12 MB upload still seems excessive to me.
Thanks for providing all the details -- I've duplicated it on the latest github bits as well.
I'll investigate why it's trying to use so much memory.
Turns out the multipart/form-data handling code is quite old — almost 10 years old, in fact. It converts all the posted data into lists, keeps multiple copies of those lists, and works through them byte by byte. Together, all this adds up to significant memory usage.
I didn't know what a web form was back then. :D
:-) Yes, I was surprised to find the code had not really been touched since then.
Improvements will be committed shortly.
I've pushed changes to address this problem. Uploads that used to cause Yaws to run out of memory on my laptop now work, so memory usage is better, but it's still too high. Unfortunately, backwards compatibility makes this hard to fix fully, since the results still have to be converted to lists. With my next commit I hope to add a new option to the multipart handling functions that tells Yaws to leave the data as binaries; apps willing to use the new option should then be able to handle large uploads.
Added new binary option for handling uploaded file data as binaries rather than lists, though for backward compatibility the list form is still the default. Given this addition together with the previous fix, I'm closing this issue.
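For anyone hitting this, opting in to the new behavior would presumably look something along these lines (a sketch, assuming the option is passed as `binary` in the options list of `yaws_api:parse_multipart_post/2`; `handle_parts/1` is a hypothetical callback):

```erlang
%% Sketch: request uploaded file data as binaries instead of byte lists.
%% The binary option is the new addition described above; omitting it
%% keeps the backward-compatible list form.
out(Arg) ->
    case yaws_api:parse_multipart_post(Arg, [binary]) of
        {result, Parts} ->
            %% Part bodies now arrive as binaries, avoiding the
            %% per-byte memory overhead of Erlang lists.
            handle_parts(Parts);
        {cont, Cont, _PartsSoFar} ->
            {get_more, Cont, undefined};
        {error, Reason} ->
            {html, io_lib:format("upload failed: ~p", [Reason])}
    end.
```

Binaries are a much better fit here: a 12 MB upload stored as a binary is about 12 MB, whereas as a list of bytes each byte costs two machine words.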