
File size #30

Closed
maticzav opened this issue Dec 18, 2017 · 5 comments

@maticzav
Hey,

is there a way to get the length of the stream? (size of the uploaded file)

😄

@jaydenseric
Owner

Because we avoid trusting what the client says, and due to the nature of streaming uploads, we can't tell how big the file is until it has finished streaming in. It is up to you to either meter the stream by piping it through a library, or to simply inspect the file once it has finished streaming to wherever you are storing it.

Keep in mind that there is a maxFileSize setting, which defaults to infinity. If you just want to enforce a maximum file size for uploads, that is an easy way to do it; the stream will automatically cut off once the limit is reached. There is an open issue to improve the error handling for this situation.

@dizlexik

It actually would be nice to have access to the file size that the client reports, even though it is possible for them to lie about it. In the vast majority of cases it will be accurate, and it could be used in the maxFileSize validation to short-circuit the process and avoid needlessly pulling in maxFileSize bytes of data before erroring out. This would be desired behavior even if the client misreports the size: if they lie and say the file is bigger than it really is, we would still want to throw an error and not allow it. And if they lie and underreport, the upload will still error out once the uploaded bytes reach maxFileSize.

@loremaps

@jaydenseric @mike-marcacci with #81 now merged, uploads go into temp files...correct? So this should theoretically allow us to get the actual size of each upload.
In our project, we need to "proxy" the request to another micro-service (multiple uploads and some data). To do that we need to know the size of each upload, otherwise the request is rejected by the Hapi server.
What we do right now is create buffers from the streams, which, if I understand correctly, duplicates the work done in #81.
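For context, the buffering workaround described above might look like this minimal sketch (`streamToBuffer` is a hypothetical helper, not part of this library): collect the whole stream into one Buffer so its `byteLength` is known before forwarding.

```javascript
// A hedged sketch of the buffering workaround: drain a readable stream
// into a single Buffer. The size is only known once the stream ends,
// and the entire file is held in memory meanwhile.
function streamToBuffer(stream) {
  return new Promise((resolve, reject) => {
    const chunks = [];
    stream.on("data", (chunk) => chunks.push(chunk));
    stream.on("error", reject);
    stream.on("end", () => resolve(Buffer.concat(chunks)));
  });
}
```

The obvious trade-off is memory: every upload is fully buffered, which defeats the point of streaming for large files.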

@jaydenseric
Owner

@loremaps the temp files are an implementation detail of the file streams provided to resolvers. The API is still a stream, so you still don't know how big the final file will be until the stream ends.

@mike-marcacci
Collaborator

@jaydenseric is correct that it’s just an implementation detail, and in the future we may do something like buffer to memory up to a certain size, then fall back to the file system. However, if we end up going with #92 as written (it’s still a WIP), you would have access to the writable stream (the capacitor property): you could listen for its finish event and use its bytesWritten property to get the size. I don’t think we want to document the capacitor property as part of our public API just yet, so you would need to pin your version more strictly if you chose to use that strategy, but it would let you do what you want very efficiently.
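That strategy might look like the following sketch. It assumes the #92 design as described above: the capacitor behaves like an fs.WriteStream (a finish event and a bytesWritten property), and fileSizeFromCapacitor is a hypothetical helper name, not part of any public API.

```javascript
// A hedged sketch, assuming #92 lands as written: resolve the upload's
// total byte size once the internal capacitor stream finishes writing.
// The capacitor property is NOT documented public API; pin your version.
function fileSizeFromCapacitor(capacitor) {
  return new Promise((resolve, reject) => {
    capacitor.on("finish", () => resolve(capacitor.bytesWritten));
    capacitor.on("error", reject);
  });
}
```

Because the promise only settles on finish, this gives the true received size without buffering the file in memory yourself.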
