Platform-as-a-service offerings such as Nodejitsu either forbid or limit disk writes. With the current implementation I was unable to upload large amounts of text (giant generated CSV files) to S3, as the application would throw errors.
I've added a noDisk option which when set to true keeps the data for each part in-memory instead of writing to a temporary file. It defaults to false.
I also made a trivial change to include the status code in the error thrown when the response code is not 200, as this was helpful for debugging.
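The status-code change might look something like the following sketch (a hypothetical `checkResponse` helper, not the module's real code): surfacing the code in both the message and an `err.statusCode` property tells callers *why* the upload failed, e.g. a 403 versus a 500.

```javascript
// Illustrative sketch: attach the HTTP status code to the error instead of
// throwing a generic "upload failed" with no context.
function checkResponse(res, callback) {
  if (res.statusCode !== 200) {
    const err = new Error('Upload failed with status code ' + res.statusCode);
    err.statusCode = res.statusCode; // also exposed for programmatic handling
    return callback(err);
  }
  callback(null, res);
}
```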
provide status code when upload fails--helpful for debugging
ability to optionally keep parts in-memory instead of writing to temp files. Useful for PaaS offerings that don't let you write much (or anything) to disk
@Zugwalt I have a different solution which might be relevant:
use different batching module that allows concurrent uploading as data is coming in
This would solve the lingering ENOENT errors that pop up
delete part data once uploaded to be sure it's out of memory
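The commit above can be sketched roughly as follows (the `parts` bookkeeping object and helper name are assumptions for illustration): dropping the reference to a part's buffer as soon as its upload completes keeps memory bounded to the parts still in flight.

```javascript
// Hypothetical sketch: release a part's buffer once its upload succeeds,
// so the Buffer becomes eligible for garbage collection.
function onPartUploaded(parts, partNumber) {
  delete parts[partNumber].data; // drop the only reference to the Buffer
  parts[partNumber].uploaded = true;
}
```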
Hi @Zugwalt - you make a good case for buffering parts in memory instead of on disk. My initial version actually did this, but I made a deliberate design decision to switch to files because I didn't want to have to add logic along the lines of what @shahriman proposed at that stage in order to prevent people from accidentally crashing their servers.
I intend to refactor your commit a little in the future to use an in-memory stream (so that the file and memory streams are easily interchangeable), and to take @shahriman's changes into account. But seeing as I'm a little strapped for time at the moment, I'll just pull yours in for now and refactor later on.
Merge pull request from @Zugwalt - fixes #6
Awesome Nathan! Thanks!