
Structured cloning of streams (response/request/fetch) #313

Closed

annevk opened this issue Jun 11, 2014 · 12 comments

Comments

@annevk
Member

annevk commented Jun 11, 2014

When you put a response or request in storage, it needs to be structured cloned. However, we don't want the stream to be depleted in that case, so that

storage.add(response);
event.respondWith(response);

keeps working. It seems we need to figure out what we want out of domenic/cloning-and-transfering#1 before we can implement and ship this.
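(For context, the depletion concern above is what the later-standardized Response.clone() addresses; a minimal sketch assuming a service worker fetch handler and the Cache API roughly as they eventually shipped, with an illustrative cache name and body comments labeling the intent:)

self.addEventListener('fetch', event => {
  event.respondWith(
    fetch(event.request).then(response => {
      // Clone before the body is consumed: the clone and the original
      // tee the same underlying stream, so each can be read once.
      const copy = response.clone();
      event.waitUntil(
        caches.open('v1').then(cache => cache.put(event.request, copy))
      );
      return response; // still readable by the page
    })
  );
});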

@annevk annevk added the fetch label Jun 11, 2014
@annevk
Member Author

annevk commented Jun 11, 2014

Tried to assign this to @domenic but that did not work.

@jakearchibald
Contributor

Need to think about:

fetch(request).then(function(response) {
  response.body.to('json').then(function(data) {
    if (!data.err) {
      myCache.put(request, response);
    }
  });
});

…too. Would that fail because the stream was already depleted?

@annevk
Member Author

annevk commented Jun 11, 2014

Yes that would fail. That is why @domenic suggested body.readToEnd("json") to make that more apparent.
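(The one-shot nature of the body is what makes it fail; in the Fetch API as it eventually shipped this is observable via bodyUsed. A minimal sketch, with an illustrative URL:)

async function readOnce() {
  const res = await fetch('/api/data'); // illustrative URL
  const data = await res.json();        // consumes the body stream
  console.log(res.bodyUsed);            // true: the body cannot be read again
  // A subsequent cache.put(...) with this response would now be rejected,
  // since the response body has already been used.
  return data;
}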

@jakearchibald
Contributor

I don't think that would make it more apparent. Happy with to.

Hah, you beat me to the .body. edit.

@domenic
Contributor

domenic commented Jun 11, 2014

@sicking's breakdown of the three use cases in domenic/cloning-and-transfering#1 (comment) speaks to me.

For example, when cloning to disk (e.g. adding to cache), the most useful behavior seems to me to be to tee the stream, then read it to the end, and only after the read is completely done (could be never), serialize it to the disk as a snapshot of the contents.

Whereas, when cloning (transferring?) to a worker, you don't really want to disturb the state, or wait for it. You just want the worker to have access to it and the ability to grab any data that comes out of it.

The essential problem, with both streams and promises, is that someone else is controlling the "actual data" of the object, and doing so asynchronously. They're not just static objects that you can snapshot; they are intimately tied to asynchronous processes initiated and controlled by their creator, that may still be ongoing.
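(A minimal sketch of the tee idea using the Streams API as it later shipped, assuming response is a Response whose body has not yet been read; persistToDisk is a hypothetical stand-in for the cache's storage step:)

// response.body is a ReadableStream; tee() splits it into two branches
// that each receive the same chunks.
const [forConsumer, forCache] = response.body.tee();

// One branch is handed back to the consumer immediately...
const consumerCopy = new Response(forConsumer, { headers: response.headers });

// ...while the other is read to the end and only then persisted
// as a snapshot, as described above.
new Response(forCache).arrayBuffer().then(buffer => persistToDisk(buffer));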

@wanderview
Member

For example, when cloning to disk (e.g. adding to cache), the most useful behavior seems to me to be to tee the stream, then read it to the end, and only after the read is completely done (could be never), serialize it to the disk as a snapshot of the contents.

It seems that on memory-constrained platforms, like mobile, we would probably want to be able to stream straight to disk. This could be to a temporary file, however, which is then only moved to its final storage location once complete.

Also, if code calls match() again on the cache to retrieve the response before it completes, will we need to return the same original promise in order for them to retrieve the stream? Or create a new tee? It would seem we should allow a new reader like this to receive the response as it streams in as well, right?

Perhaps that's a cache detail, but it seems related to how we clone the response stream.

@domenic
Contributor

domenic commented Jun 11, 2014

Also, if code calls match() again on the cache to retrieve the response before it completes, will we need to return the same original promise in order for them to retrieve the stream? Or create a new tee? It would seem we should allow a new reader like this to receive the response as it streams in as well, right?

My thought was that the cache.add call would not complete (i.e. the promise it returns would not fulfill) until the stream is completely written to the disk.
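(A minimal sketch of that ordering, assuming the Cache API roughly as it later shipped; the cache name and URL are illustrative:)

async function storeThenRead() {
  const cache = await caches.open('v1');
  // add() fetches '/data.json' and resolves only once the response
  // body has been completely written into the cache.
  await cache.add('/data.json');
  // Any match() that resolves after that point sees a complete
  // snapshot rather than a partially written stream.
  const stored = await cache.match('/data.json');
  return stored && stored.json();
}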

@wanderview
Member

Ok. So the secondary, late match() would also not complete until the stream is completely written to disk. I think I understand. Thanks!

@annevk
Member Author

annevk commented Jun 24, 2014

@gavinp

gavinp commented Jul 7, 2014

In issue #361 there's some interesting discussion of how a late commit to cache.add() can lead to service workers issuing duplicate requests that erase each other in the cache.

@KenjiBaheux
Collaborator

This issue appears to be a spec issue with no implementation impact on MVP beyond what's being called out in 361/362. Defining storage in terms of structured-clone simplifies specs and adds new functionality like sending Request/Response objects via postMessage or storing in IDB.

@annevk
Member Author

annevk commented Oct 23, 2014

Let's close this. For this we need structured cloning of promises/streams. Once we have that, I'm sure we'll have another look at this and other objects representing streams.

@annevk annevk closed this as completed Oct 23, 2014
annevk added a commit to whatwg/fetch that referenced this issue Aug 10, 2016
w3c/ServiceWorker#313 was closed long ago and it hasn't really come up since.