Prefetch and double-key caching #82

Open
yoavweiss opened this Issue Aug 29, 2018 · 7 comments


yoavweiss commented Aug 29, 2018

Moving a private discussion with @kinu and @igrigorik to a public forum

#78 raised questions regarding which origin a navigation prefetch should be tied to in terms of service workers.

Similar questions also arise when thinking about prefetch and double-key caching.
Let's say host A is prefetching a linked document from host B.

If we were to use A as the secondary cache key for the document, then when the user navigates to B, the prefetched resource won't be used; another copy would be downloaded instead, resulting in a slower experience and sadness.

So, it probably makes sense to consider B the double-key origin for the prefetched document, when double-keying is applied.

The plot thickens when talking about prefetching subresources. If they are same-origin with the document that will use them, then we can consider caching them similarly to documents, using their origin as the secondary key. But if they are cross-origin, we'd need to explicitly state which document/origin they are prefetched for. Not sure that's worth the complexity, though.
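The lookup mismatch described above can be sketched with a toy double-keyed cache (all names here are hypothetical illustrations; a real HTTP cache keys entries on the pair of top-level origin and resource URL):

```typescript
// Toy model of a double-keyed HTTP cache: entries are keyed by the
// pair (top-level origin, resource URL). Names are illustrative only.
class DoubleKeyedCache {
  private entries = new Map<string, string>();

  put(topLevelOrigin: string, url: string, body: string): void {
    this.entries.set(`${topLevelOrigin} ${url}`, body);
  }

  get(topLevelOrigin: string, url: string): string | undefined {
    return this.entries.get(`${topLevelOrigin} ${url}`);
  }
}

const cache = new DoubleKeyedCache();

// Host A prefetches a document from host B. If the prefetch is keyed
// under A (the referring origin)...
cache.put("https://a.example", "https://b.example/doc", "<html>…</html>");

// ...then when the user actually navigates to B, the lookup uses B as
// the top-level origin, misses, and a second download is needed.
const hit = cache.get("https://b.example", "https://b.example/doc");
// hit is undefined: the prefetched copy is not found
```

Keying the prefetched document under B instead, as suggested above, would make the navigation lookup hit.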

Thoughts?

/cc @wanderview @cdumez @youennf

youennf commented Sep 29, 2018

Prefetch makes the most sense for navigation loads, so it might be best to focus on that specific scenario.
Something like the following might work with double-key caching:

  1. Prefetched resources are loaded with: credentials=omit, referrerPolicy=no-referrer, redirect=manual
  2. Prefetch loads bypass service workers.
  3. Prefetch loads are optional: the UA may skip them, e.g. in low-power mode or when the network cache already has an entry.
  4. Prefetched resources are stored in a non-partitioned, memory-based cache; entries are cleared after some limited time.
  5. Prefetched resources can only match top-level document navigations.
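As a rough sketch, the restricted load in point 1 corresponds to the following request options (a hypothetical illustration: in reality the UA issues this load internally, and `fetch()` is shown only to name the equivalent knobs):

```typescript
// Hypothetical sketch of the load settings in point 1.
const prefetchInit = {
  credentials: "omit" as const, // no cookies or HTTP auth are attached
  referrerPolicy: "no-referrer" as const, // the target learns nothing about the referrer
  redirect: "manual" as const, // redirects are surfaced, not followed silently
};

// e.g. fetch("https://b.example/doc", prefetchInit)
```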
kinu commented Oct 9, 2018

Thanks @youennf, I think this is a pretty good/clear proposal to start with. Hoping that we can discuss more at TPAC but giving some quick thoughts here too:

  1. Prefetched resources are loaded with: credentials=omit, referrerPolicy=no-referrer, redirect=manual

To clarify, do we even want to avoid going with credentials=same-origin?

  2. Prefetch loads bypass service workers.

Have been thinking about this for a while, but I think this makes a lot of sense, at least to start with. (One interesting option @wanderview mentioned off-thread is to skip service workers for prefetch but use the prefetch as NavigationPreload for the service worker when the real navigation occurs. I actually like this idea, but given that NavigationPreload is not yet widely supported, we can put off considering it further.)

  3. Prefetch loads are optional: the UA may skip them, e.g. in low-power mode or when the network cache already has an entry.

Agreed, and I believe this is currently spec'ed.

  4. Prefetched resources are stored in a non-partitioned, memory-based cache; entries are cleared after some limited time.
  5. Prefetched resources can only match top-level document navigations.

Sounds sensible to me.

One related question is whether the spec should help make prefetches for top-level navigations distinguishable from others (so that UAs can make better decisions). One way is to use as=document as a signal (though it can't tell whether the prefetch is for a top-level frame or a subframe, and it's proposed to be deprecated).

youennf commented Oct 9, 2018

  1. Prefetched resources are loaded with: credentials=omit, referrerPolicy=no-referrer, redirect=manual

To clarify, do we even want to avoid going with credentials=same-origin?

Agreed we should tackle this.
I restricted it this way for simplicity, and since this is the biggest issue right now.
Same-origin prefetches do not require all these protections; we could decide to special-case them, for instance.
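For instance, that special-casing could hinge on a simple origin check (a hypothetical sketch, not spec text; the function name is invented):

```typescript
// Hypothetical sketch: same-origin prefetches could keep credentials,
// while cross-origin ones get the restrictive defaults listed above.
function prefetchCredentialsMode(
  referrerOrigin: string,
  targetUrl: string
): "same-origin" | "omit" {
  return new URL(targetUrl).origin === referrerOrigin ? "same-origin" : "omit";
}
```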

Also, in the case of prefetch, it is not clear how it interacts with the Fetch spec: what its browsing context is, whether it is attached to a browsing context at all, and whether it should be cancelled or kept alive when that context goes away...


yoavweiss commented Oct 12, 2018

  2. Prefetch loads bypass service workers.

Have been thinking about this for a while, but I think this makes a lot of sense, at least to start with. (One interesting option @wanderview mentioned off-thread is to skip service workers for prefetch but use the prefetch as NavigationPreload for the service worker when the real navigation occurs. I actually like this idea, but given that NavigationPreload is not yet widely supported, we can put off considering it further.)

I'm concerned that this will trigger double downloads in scenarios where the SW is, e.g., modifying navigation requests.

At the same time, this seems necessary for privacy protection - otherwise the destination SW can leak the fact that the prefetch happened.

Also, in the case of prefetch, it is not clear how it interacts with the Fetch spec: what its browsing context is, whether it is attached to a browsing context at all, and whether it should be cancelled or kept alive when that context goes away...

Agree we need to better specify how prefetch relates to Fetch, how the prefetched resources are cached, etc.


igrigorik commented Oct 18, 2018

👍 to the above.

As a brief aside, I'd actually propose we pull prefetch out of RH into a standalone spec doc, or spec it directly in Fetch. WDYT?

@yoavweiss yoavweiss added the Prefetch label Oct 21, 2018

yoavweiss commented Oct 23, 2018

Specifying a processing model that ties directly into HTML's <link> processing model (similar to what we ended up doing with preload) seems like the best approach to me. I think Fetch already has all the primitives we'd need for this. I'll sketch something up.

yoavweiss commented Oct 23, 2018

I think Fetch already has all the primitives we'd need for this

That's actually not true. We need to introduce the concept of a "speculative fetch" and the concept of a "prefetch cache" that would not be partitioned.
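A minimal sketch of what such a non-partitioned prefetch cache might look like, combining youenn's points 4 and 5 above (all names hypothetical; this is an illustration, not proposed spec text): keyed by URL alone, with time-limited entries that only match top-level navigations:

```typescript
// Hypothetical sketch of a non-partitioned prefetch cache: keyed by
// URL alone (no top-level-origin partition), with time-limited entries
// that only match top-level document navigations.
interface PrefetchEntry {
  body: string;
  expires: number; // expiry timestamp in ms
}

class PrefetchCache {
  private entries = new Map<string, PrefetchEntry>();

  constructor(private ttlMs: number = 5 * 60 * 1000) {}

  store(url: string, body: string, now: number = Date.now()): void {
    this.entries.set(url, { body, expires: now + this.ttlMs });
  }

  // Matched only for top-level navigations; an entry is consumed on use.
  takeForNavigation(
    url: string,
    isTopLevelNavigation: boolean,
    now: number = Date.now()
  ): string | undefined {
    if (!isTopLevelNavigation) return undefined; // point 5
    const entry = this.entries.get(url);
    if (!entry || entry.expires <= now) {
      this.entries.delete(url); // point 4: expired entries are cleared
      return undefined;
    }
    this.entries.delete(url); // consume the entry
    return entry.body;
  }
}
```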
