
Multipart Uploads #19

Open
fancycade opened this issue Oct 12, 2022 · 4 comments
Comments

@fancycade

The only missing feature I have with this library is streaming uploads. I was thinking I could try implementing it myself until I found this: https://github.com/byronpc/erlaws3

Would you be open to the idea of adding the open_stream code from erlaws3 into s3filez? I'm imagining it would be more of a port than a copy and paste.

I can use both libraries, but it would be nice to have a unified s3 config for a project.

@mworrell
Owner

In theory a stream is not so different from the {filename, ...} we can use now.

Probably just some function that returns data until eof, and the lib can send it over to AWS.

Just have to check if we need to have the file size up front. Do you know the size of your data before uploading?

@fancycade
Author

> Just have to check if we need to have the file size up front. Do you know the size of your data before uploading?

For my application the file size is given by the client (browser), so yes. However, I wasn't planning on trusting this number. I can also imagine use cases where I might not know this number ahead of time.

For some further context, I'm following cowboy's multipart docs. My current implementation accumulates the received stream into a list and then sends that binary over with put, but I would like to send over each buffer coming from the cowboy request.
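For illustration, the accumulate-then-put approach looks roughly like this. It's a sketch using cowboy's `cowboy_req:read_part/1` and `cowboy_req:read_part_body/1`; the `handle_upload/1` name, the `Config`/`Url` variables, and the exact shape of the `s3filez:put/3` payload are assumptions, not taken from either library's docs:

```erlang
%% Sketch: accumulate the whole multipart body, then do one put.
%% handle_upload/1 is a hypothetical cowboy handler helper; Config and Url
%% are assumed to be in scope.
handle_upload(Req0, Config, Url) ->
    {ok, _Headers, Req1} = cowboy_req:read_part(Req0),
    {Body, Req2} = read_whole_body(Req1, <<>>),
    %% The entire file sits in memory here before the single upload call.
    ok = s3filez:put(Config, Url, {data, Body}),
    Req2.

%% Drain the part body into one binary accumulator.
read_whole_body(Req0, Acc) ->
    case cowboy_req:read_part_body(Req0) of
        {more, Data, Req} ->
            read_whole_body(Req, <<Acc/binary, Data/binary>>);
        {ok, Data, Req} ->
            {<<Acc/binary, Data/binary>>, Req}
    end.
```

The goal of this issue is to replace that accumulation with forwarding each `{more, Data, Req}` buffer to S3 as it arrives.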

@mworrell
Owner

mworrell commented Oct 17, 2022

I kind of expect that the S3 protocol allows a PUT without content-length, checking...

For our WebDAV and FTP versions of this library I am sure that it is ok.

UPDATE

I did a little digging, and it seems that if we use transfer-encoding chunked then we can stream files to S3 over HTTP/1.1.

Note: use {chunkify, function/1, Acc} for the body.

@fancycade
Author

Thanks for looking into this!

I'm open to trying this out.

Could you give an example callback for chunkify? I'm assuming it takes the accumulator as input, but I'm not sure what the output would look like.
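Guessing at the contract from OTP httpc's `{chunkify, ProcessBody, Acc}` body format (which I assume is what s3filez passes through): the fun receives the accumulator and returns `eof` when done, or `{ok, Data, NewAcc}` to emit the next chunk. A sketch under that assumption, streaming from a file handle:

```erlang
%% Sketch of a chunkify callback, assuming the httpc-style contract:
%%   fun(Acc) -> eof | {ok, iolist(), NewAcc}
%% Here the accumulator is an open file descriptor.
chunk_from_file(Fd) ->
    case file:read(Fd, 64 * 1024) of
        {ok, Data} ->
            %% Emit this 64 KiB chunk; keep the same Fd as the accumulator.
            {ok, Data, Fd};
        eof ->
            ok = file:close(Fd),
            eof
    end.

%% Hypothetical usage (Config and Url assumed):
%%   {ok, Fd} = file:open("big.bin", [read, binary]),
%%   s3filez:put(Config, Url, {chunkify, fun chunk_from_file/1, Fd}).
```

For the cowboy case the accumulator would presumably carry the `Req` instead of a file handle, with `cowboy_req:read_part_body/1` producing each chunk.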
