
Stream source in browser? #59
Closed
psi-4ward opened this issue Jan 4, 2017 · 14 comments
@psi-4ward (Contributor)

Is there a way to support uploading from a readable stream in the browser?
I have a use case where I need to pipe my stream through some processors.

@Acconut (Member) commented Jan 5, 2017

The library has not been designed to directly support your use case, but I believe one should be able to achieve what you want. You may take a look at browserify and the --no-browser-field flag. However, this will remove some functionality, such as resuming uploads after a tab is closed. I'm not sure how much this advice helps you, but some insight into your current setup would be helpful for me.

@psi-4ward (Contributor, Author)

Thanks for your answer. You are right, resuming a stream is a problem because of the lack of slice().
In my case I need to upload something like an async Blob.
As far as I can see, xhr.send() consumes the (File)Blob using a FileReader. The question now is: how do I implement a Blob whose data I can produce asynchronously?

@psi-4ward (Contributor, Author)

It seems there is no way to implement something like a FileWriter, so I patched the slice call to support a Promise:

    // If the source's slice() returns a Promise, wait for the chunk
    // before sending it; otherwise send the Blob directly as before.
    let blob = this._source.slice(start, end);
    if (blob instanceof Promise) {
      blob.then(data => xhr.send(data)).catch(err => console.error(err));
    } else {
      xhr.send(blob);
    }

Combined with a fixed chunkSize setting, this makes it possible to generate the data asynchronously.
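
A source satisfying this patched call could look roughly like the sketch below (AsyncSource and produceRange are made-up names for illustration; the total size still has to be known up front so the upload can be created):

    // Hypothetical source whose slice() returns a Promise, matching the
    // patched xhr.send() above. produceRange(start, end) stands in for
    // whatever async processor generates the bytes for that byte range.
    function AsyncSource(size, produceRange) {
      this.size = size; // total upload size, known in advance
      this.slice = function (start, end) {
        // Resolve to a Blob covering [start, end)
        return produceRange(start, end).then(bytes => new Blob([bytes]));
      };
    }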

@Acconut (Member) commented Jan 5, 2017

In my case I need to upload something like an async Blob.

I think we are talking about two different things here. An "async blob" is not the same as a streaming source. To me, an "async blob" is just a regular accumulation of binary data which becomes available at some point in the future, but once it is available, the entire Blob is ready to be consumed. A streaming source, on the other hand, provides chunks of data as they become available, meaning you may get one chunk now while the next only arrives a few moments later. Your original question referred to the latter, but your last two comments to the former. Would you be so kind as to be more concrete about what you are looking for?

@psi-4ward (Contributor, Author)

I'm speaking about a stream; the data is not (or should not be) available all at once.
In my case I have to modify all of the uploaded data, and it is too big to do in memory,
so I cannot compute the modified Blob before starting the upload.

My last comment with the snippet shows how I can chunk-stream data with tus, because it fires a request for each chunk, which gets resolved asynchronously through the Promise.

The trick with chunkSize and the Promise works well in my case.

@Acconut (Member) commented Jan 6, 2017

Thank you for the clarification. While your code example will probably work in most cases, I believe it may fail if the server accepts only part of a chunk and the library then tries to slice your chunk in order to upload the remaining part. It is possible to get this working properly on your end, but I believe it will be a tough issue to tackle. Instead, it is already possible to upload one chunk at a time, each calculated in an asynchronous fashion. This approach is not intuitive and it took me some time to get it right, but I hope the following example guides you a bit: https://jsbin.com/bajeyelate/edit?js,console,output. Personally, I prefer this approach for a few reasons.
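
Roughly, the pattern works like this (a sketch, not the jsbin verbatim; produceNextChunk() and the endpoint are placeholders, I am assuming the Upload, abort(), uploadUrl and uploadSize APIs of tus-js-client, and all produced data is kept in memory here for simplicity):

    // Upload one asynchronously produced chunk at a time: abort after each
    // chunk completes, wait for the next chunk, then resume the same upload.
    var buffered = new Blob([]); // everything produced so far
    var uploadUrl = null;        // remembered between rounds

    function uploadNextChunk(totalSize) {
      produceNextChunk().then(function (chunk) {
        buffered = new Blob([buffered, chunk]);
        var upload = new tus.Upload(buffered, {
          endpoint: "https://tus.example.org/files/", // placeholder
          uploadUrl: uploadUrl,  // resume the same upload on later rounds
          uploadSize: totalSize, // must still be known in advance
          chunkSize: chunk.size,
          onChunkComplete: function () {
            uploadUrl = upload.url;
            if (buffered.size < totalSize) {
              upload.abort(); // stop here until the next chunk is ready;
              uploadNextChunk(totalSize); // the last chunk falls through to onSuccess
            }
          },
          onSuccess: function () { console.log("upload finished"); },
          onError: function (err) { console.error(err); }
        });
        upload.start();
      });
    }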

@psi-4ward (Contributor, Author)

Thank you!! upload.abort() on chunkComplete is interesting...

I can't see the problematic case with the Promise solution. At least in my case, tus-js-client requests the specific slice boundaries just as it would on regular (FileReader) Blobs, and I can resolve exactly the requested piece.

Many thanks for all your work. I'm quite happy with the monkey-patched xhr.send in this case, so I'll close this.

@Acconut (Member) commented Jan 6, 2017

I can't see the problematic case with the Promise solution. At least in my case, tus-js-client requests the specific slice boundaries just as it would on regular (FileReader) Blobs, and I can resolve exactly the requested piece.

It's great to hear that this is working for your specific case. The situation I warned about does not apply to you, as it is basically the opposite of what you are doing:

At least in my case, tus-js-client requests the specific slice boundaries just as it would on regular (FileReader) Blobs, and I can resolve exactly the requested piece.

This cannot be achieved in every situation; e.g. when you are encoding a video, it is not easy to calculate the chunk of the encoded output at a specific offset.

Also, having to patch a library is not the most convenient approach, but that is up to you :) I'm pleased to hear that I could help!

@psi-4ward (Contributor, Author)

Of course, it would be nice if you adopted my patch :)

And you're completely right, the slice transform trick only works for transformations which keep the byte count.

@anonimousse12345 commented Feb 24, 2017

@Acconut will your example work if the "uploadSize" is not known in advance? Is it safe to just set it to a very large number, or might that result in data never properly being stored on the server?

I'm trying to stream microphone data to a tus server in chunks as it is recorded.

@Acconut (Member) commented Feb 24, 2017

will your example work if the "uploadSize" is not known in advance?

The tus protocol does indeed allow setting the length at a later moment (this is called deferring the length). However, tus-js-client does not currently implement this behaviour, because no server offers the functionality at the moment.
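
For reference, the deferred-length flow defined by the protocol's creation extension looks roughly like this when issued by hand (a sketch using plain fetch(); the endpoint and the placeholder variables are made up, and tus-js-client exposes none of this yet):

    // Create the upload without a length, per the tus creation extension.
    var data = new Blob(["..."]); // placeholder: the bytes to upload

    fetch("https://tus.example.org/files/", {
      method: "POST",
      headers: {
        "Tus-Resumable": "1.0.0",
        "Upload-Defer-Length": "1" // instead of Upload-Length
      }
    }).then(function (res) {
      var uploadUrl = res.headers.get("Location");
      // PATCH chunks as usual; once the total size becomes known,
      // include Upload-Length in that PATCH request:
      return fetch(uploadUrl, {
        method: "PATCH",
        headers: {
          "Tus-Resumable": "1.0.0",
          "Upload-Offset": "0",
          "Upload-Length": String(data.size), // now known
          "Content-Type": "application/offset+octet-stream"
        },
        body: data
      });
    });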

Is it safe to just set it to a very large number, or might that result in data never properly being stored on the server?

You could do that, but you need to watch out for two things:

a) the server may have a limit on the size of created uploads (so if you create a 1TB upload but are only going to use 10GB, the server may reject the upload creation in the first place), and
b) the upload will never be considered completed, which may bring additional difficulties with it.

@anonimousse12345
Thanks! I could probably live with (a), but can you expand on what additional difficulties there might be?

@Acconut (Member) commented Feb 27, 2017

Some server and client implementations expect an upload to be completed at some point. For example, tusd allows a routine to be executed when an upload is completed, which is used to notify other applications, but this would obviously never happen in your case. The same applies to clients; e.g. tus-js-client would never emit the upload-complete event. Furthermore, on both sides, resources may only be freed once an upload is finished or terminated.

@Acconut (Member) commented Mar 1, 2017

@anonimousse12345 I am sorry that it's not easily possible to use tus-js-client for your application, because tus-js-client (and also tusd) don't support the principle of deferred lengths yet. If you would like to work on these features, I am more than happy to assist you :)
