Multipart Uploads #19
In theory a stream is not so much different. Probably just some function that returns data until eof, and the lib can send it over to AWS. Just have to check whether we need to have the file size up front. Do you know the size of your data before uploading?
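The "function that returns data until eof" idea above is a pull-based reader. The libraries in this thread are Erlang, but the shape is language-agnostic; here is a minimal Python sketch, where all names (`make_reader`, `upload_stream`, `send_part`) are illustrative and not part of any library's API:

```python
# Sketch of the pull-based stream described above: the library repeatedly
# calls a user-supplied function that returns the next chunk of data, or
# eof (here: None) when the stream is exhausted. Hypothetical names.

def make_reader(buffers):
    """Wrap an iterable of in-memory buffers as a pull function."""
    it = iter(buffers)
    def read_next():
        return next(it, None)  # None signals eof
    return read_next

def upload_stream(read_next, send_part):
    """Drain the reader, handing each chunk to a transport callback."""
    total = 0
    while (chunk := read_next()) is not None:
        send_part(chunk)
        total += len(chunk)
    return total  # the total size is only known after the fact
```

Note that the total size falls out at the end, which is exactly why the content-length question below matters.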
For my application the file size is given by the client (browser), so yes. However, I wasn't planning on trusting this number, and I can also imagine use cases where I might not know it ahead of time. For some further context, I'm following cowboy's multipart docs. My current implementation accumulates the received stream into a list and then sends that binary over with put, but I would like to send each buffer coming from the cowboy request as it arrives.
I kind of expect that the S3 protocol does allow a PUT without content-length, checking... For our WebDAV and FTP versions of this library I am sure that it is ok. UPDATE: I did a little digging, and it seems that if we use transfer-encoding: chunked then we can stream files to S3 using HTTP/1.1. Note: use
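For reference, the chunked framing mentioned above is defined by HTTP/1.1 (RFC 7230 §4.1): each chunk is its size in hex, CRLF, the data, CRLF, and a zero-length chunk terminates the body, which is what lets a client omit Content-Length. A small Python sketch of the wire format (illustrative only; an HTTP client library would do this framing for you):

```python
# HTTP/1.1 chunked transfer encoding framing (RFC 7230 section 4.1).
# Each chunk: "<size-in-hex>\r\n<data>\r\n"; a zero-size chunk ends the body.

def encode_chunk(data: bytes) -> bytes:
    """Frame one chunk of body data."""
    return b"%x\r\n%s\r\n" % (len(data), data)

def chunked_body(chunks) -> bytes:
    """Frame a whole body from an iterable of chunks, plus the terminator."""
    return b"".join(encode_chunk(c) for c in chunks) + b"0\r\n\r\n"
```

Because the terminator is sent last, the total size never needs to be known up front.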
Thanks for looking into this! I'm open to trying this out. Could you give an example callback for chunkify? I'm assuming it takes the accumulator as input, but I'm not sure what the output would look like.
The only feature I'm missing in this library is streaming uploads. I was thinking I could try implementing it myself until I found this: https://github.com/byronpc/erlaws3
Would you be open to the idea of adding the open_stream code from erlaws3 into s3filez? I'm imagining it would be more of a port than a copy and paste.
I can use both libraries, but it would be nice to have a unified s3 config for a project.
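One detail any streaming multipart implementation has to handle: S3's multipart upload API requires every part except the last to be at least 5 MiB, while buffers arriving from a cowboy request can be any size, so the uploader must re-buffer. A Python sketch of that regrouping step (the 5 MiB bound is S3's real limit; the generator itself is illustrative, not erlaws3's or s3filez's actual code):

```python
# Regroup arbitrarily sized incoming buffers into S3-upload-ready parts.
# S3 requires all parts except the last to be >= 5 MiB.

MIN_PART = 5 * 1024 * 1024  # S3 minimum part size (all but the last part)

def into_parts(buffers, min_part=MIN_PART):
    """Yield parts of at least min_part bytes; the final part may be short."""
    acc = b""
    for buf in buffers:
        acc += buf
        if len(acc) >= min_part:
            yield acc
            acc = b""
    if acc:  # flush the final, possibly undersized, part
        yield acc
```

Each yielded part would then go out as one UploadPart call, with CompleteMultipartUpload sent after the stream ends.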