Stream source in browser? #59
The library has not been designed to directly support your use case but, I believe, one should be able to achieve what you want to do. You may take a look at browserify and the …
Thanks for your answer. You are right, resuming a stream is a problem because of the lack of …
Seems there is no way to implement something like:

```js
let blob = this._source.slice(start, end);
if (blob instanceof Promise) {
  blob.then(data => xhr.send(data)).catch(err => console.error(err));
} else {
  xhr.send(blob);
}
```

Using this with a fixed …
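The dispatch logic in the snippet above can be isolated into a small helper so the rest of the upload code stays unchanged. A minimal sketch; `sendBody` is a hypothetical name, not part of tus-js-client:

```javascript
// Hypothetical helper: send a body that may be either a Blob/ArrayBuffer
// or a Promise resolving to one. Always returns a Promise so errors
// from the asynchronous path propagate to the caller.
function sendBody(xhr, body) {
  if (body instanceof Promise) {
    return body.then(data => xhr.send(data));
  }
  xhr.send(body);
  return Promise.resolve();
}
```

Wrapping the send call this way keeps the library code untouched, but it still assumes the server accepts each chunk in full.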
I think we are talking about two different things here. An "async blob" is not the same as a streaming source. For me, an "async blob" is just a regular accumulation of binary data which will be available in the future, but once it is available, the entire Blob will be ready to be consumed. On the other hand, a streaming source will provide you with chunks of data whenever they are available, meaning you may get a chunk now but the next one will only be available after a few moments. Your original question referred to the latter, but your last two comments refer to the former. Would you be so kind as to be more concrete about what you are looking for?
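The distinction can be illustrated in code: an "async blob" is one Promise resolving once with all the data, while a streaming source yields chunks over time. A sketch with hypothetical names, using an async generator to stand in for a streaming source:

```javascript
// "Async blob": a single Promise; when it resolves, the whole payload is there.
function makeAsyncBlob() {
  return new Promise(resolve =>
    setTimeout(() => resolve(new Uint8Array([1, 2, 3, 4])), 10));
}

// Streaming source: chunks become available one after another.
async function* makeStream() {
  for (const chunk of [[1, 2], [3, 4]]) {
    await new Promise(r => setTimeout(r, 10)); // simulate network/recording latency
    yield new Uint8Array(chunk);
  }
}

async function demo() {
  const blob = await makeAsyncBlob();    // entire payload, once
  const chunks = [];
  for await (const c of makeStream()) {  // payload piece by piece
    chunks.push(c);
  }
  return { blobLength: blob.length, chunkCount: chunks.length };
}
```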
I'm speaking about a stream: the data is not (or should not be) available all at once. My last comment with the snippet shows how I could chunk-stream data with tus, because it fires a request for each chunk which gets resolved asynchronously through the Promise. The trick with …
Thank you for your clarification. While your code example will probably work in most cases, I believe it may fail if the server accepts only a part of the chunk and the library tries to slice your chunk in order to upload the remaining part. It is possible to get this working properly on your end, but I believe this will be a tough issue to tackle. Instead, it is already possible to upload one chunk at a time, where each chunk is calculated in an asynchronous fashion. This approach is not intuitive and it took me some time to get it right, but I hope the following example will guide you a bit: https://jsbin.com/bajeyelate/edit?js,console,output. Personally, I prefer this approach for a few reasons.
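The one-chunk-at-a-time idea can be sketched as a buffering source: data pushed in asynchronously is accumulated, and a `slice(start, end)` request resolves only once the requested byte range is fully buffered. `ChunkSource` is a hypothetical illustration, not the API used in the linked example:

```javascript
// Hypothetical buffering source: accumulates pushed chunks and serves
// byte-exact slices once the requested range is fully available.
class ChunkSource {
  constructor() {
    this.bytes = [];    // flat array of buffered bytes
    this.waiters = [];  // slice requests waiting for more data
  }

  push(chunk) {         // chunk: Uint8Array (e.g. from a recorder)
    this.bytes.push(...chunk);
    this.waiters = this.waiters.filter(w => {
      if (this.bytes.length >= w.end) {
        w.resolve(new Uint8Array(this.bytes.slice(w.start, w.end)));
        return false;   // request satisfied, drop it
      }
      return true;      // keep waiting for more data
    });
  }

  slice(start, end) {   // resolves when the range [start, end) is buffered
    if (this.bytes.length >= end) {
      return Promise.resolve(new Uint8Array(this.bytes.slice(start, end)));
    }
    return new Promise(resolve => this.waiters.push({ start, end, resolve }));
  }
}
```

Because each slice resolves with exactly the requested bytes, a retried or re-sliced range returns consistent data, which sidesteps the partial-acceptance problem described above.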
Thank you!! I don't run into the problematic case with the Promise solution. At least in my case, tus-js-client requests the specific slice boundaries as it would do on regular (FileReader) Blobs, and I can resolve exactly the requested piece. Many thanks for all your stuff; I'm quite happy with the monkey-patched xhr.send in this case, so I'll close this.
It's great to hear that this is working for your specific case. The situation I warned about does not apply to you as it is basically the opposite of what you are doing:
This cannot be achieved in every situation; e.g. when you are encoding a video, it is not easy to calculate the chunk of an encoded video at a specific offset. Also, having to patch a library is not the most convenient approach, but that is up to you :) I'm pleased to hear that I could help you!
Of course, it would be nice if you adopted my patch :) And you're completely right, the slice-transform trick only works for transformations which preserve the byte count.
@Acconut will your example work if the "uploadSize" is not known in advance? Is it safe to just set it to a very large number, or might that result in data never properly being stored on the server? I'm trying to stream microphone data to a tus server in chunks as it is recorded. |
The tus protocol does indeed allow setting the length at a later moment (this is called deferring the length). However, tus-js-client currently does not implement this behaviour because at the moment no server offers this functionality.
You could do it, but you need to watch out for: a) the server may have a limit on the size of created uploads (so if you create a 1 TB upload but are only going to use 10 GB, the server may still reject the original upload creation), and …
Thanks! I could probably live with (a), but can you expand on what additional difficulties there might be? |
Some server and client implementations expect an upload to be completed at some point. For example, tusd allows a routine to be executed when an upload is completed, which is used to notify other applications, but this would obviously not work in your case. The same applies to clients; e.g. tus-js-client would not emit the upload-complete event. Furthermore, on both sides resources may only be freed when an upload is finished or terminated.
@anonimousse12345 I am sorry that it's not easily possible to use tus-js-client for your application, because tus-js-client (and also tusd) don't support the principle of deferred lengths yet. If you would like to work on these features, I am more than happy to assist you :)
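For readers arriving later: newer releases of tus-js-client did add deferred-length support via the `uploadLengthDeferred` option and accept a `ReadableStream` reader as input. A sketch assuming a modern tus-js-client; the endpoint is the public demo server and `micStreamToReadable` is a hypothetical helper, not a library function:

```javascript
// Sketch for a modern tus-js-client (assumed, not the version discussed above):
// upload a stream of unknown length by deferring the length.
const options = {
  endpoint: "https://tusd.tusdemo.net/files/", // demo server; replace with yours
  uploadLengthDeferred: true, // total size is sent only when the stream ends
  chunkSize: 512 * 1024,      // a finite chunkSize is required for reader input
  onSuccess: () => console.log("upload finished"),
};

// In a real page (micStreamToReadable is hypothetical):
//   const reader = micStreamToReadable(micStream).getReader();
//   new tus.Upload(reader, options).start();
```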
Is there a way to support uploading from a readable stream in the browser?
I have a use case where I need to pipe my stream through some processors.