Remove foreign fetch #1188
Comments
What is double-keying, and why does it disable foreign fetch?
Double-keying refers to the system some browsers use to separate data set by a cross-origin iframe from data set by the top frame, something like this: https://bugzilla.mozilla.org/show_bug.cgi?id=565965
Thanks @mattto, but how does this affect foreign fetch?
One of the use cases we had was font caching, where fonts.google.com would have its own service worker that handled its own caching strategies. With double keying, when example.com uses the fonts.google.com foreign worker, its storage and execution are keyed to the (example.com, fonts.google.com) pair. This results in the same fonts being stored multiple times, once for each combination.
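For readers who never saw the removed feature, here is a rough sketch of what a fonts.google.com-style foreign service worker looked like during the Chrome origin trial. The `registerForeignFetch` / `foreignfetch` shapes are recalled from the old proposal rather than any current spec, and the scopes and cache name are made up for illustration.

```js
// Hedged sketch of the removed foreign fetch API, as it appeared in the origin trial.
self.addEventListener('install', event => {
  event.registerForeignFetch({
    scopes: ['/css', '/s/'],   // hypothetical paths this worker serves
    origins: ['*']             // allow any requesting origin
  });
});

self.addEventListener('foreignfetch', event => {
  event.respondWith(
    caches.open('fonts').then(async cache => {
      const cached = await cache.match(event.request);
      const response = cached || await fetch(event.request);
      if (!cached) await cache.put(event.request, response.clone());
      // Returning an origin opts the response in to being visible
      // to the page that made the cross-origin request.
      return { response, origin: event.origin };
    })
  );
});
```

The double-keying concern above is that this single `fonts` cache would instead be partitioned per embedding top-level site, so the same font bytes get stored once per site that uses them.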
@jakearchibald makes sense, thanks!
A foreign fetch use case: kinda polyfilling something like cache digests https://twitter.com/mjackson/status/901090486739378177. Double-keying would somewhat get in the way here, although you'd be able to send details of the double-keyed storage back to the CDN.
Fixes part of w3c/ServiceWorker#1188.
Is there a potential alternative to foreign fetch? Third parties like ad providers, analytics, and CDNs seem to have use cases, and I was waiting for foreign fetch to avoid/separate handling fetches for these third-party requests.
@Jxck I haven't heard a proposal that meets the use cases while preserving privacy.
+1 to removing foreign fetch from the spec. Maybe less clear what to do about `Link` headers and `<link>` elements for installing service workers? Either remove those completely as well, or limit processing to top-level (document/worker) loads?
Please also ensure tests are updated and browser bugs get filed. See whatwg/fetch#596 (comment) for details.
One example use case I was working towards: I wanted to build an audioscrobbling service. It would let any media player register that the user was viewing something. Without foreign fetch, this is impossible to do in an offline case. I was also working on a library, 0hub, to fulfil some of my early hopes for navigator-connect, which is enabling discovery. Rather than having to know about my audioscrobbling service, I was hoping to make a service where other services could register. My scrobbler could register itself, as could other scrobblers, and then anyone who wanted to post scrobbles could query for any scrobbling services and push to all of them. Later down the road I intended to implement a feed reader around this premise.

This is truly one of the saddest things I have ever heard for the web. Offline will be savagely diminished, and not really the web at all, if we can only work offline with ourselves. The web has to have some kind of functional, interconnected offline capability. It has to.

I totally would not expect radical tech like this to have fast uptake. It needs half a decade of people playing around with it, learning about it, mainstreaming it, and building libraries. We barely have service workers. Please, let new trials begin. Soon. This is incredibly, deeply saddening to hear.
Thanks for posting my tweet here, @jakearchibald 😅 Just wanted to chime in and say that I think the possibility of using foreign fetch to improve caching behavior for a CDN like unpkg.com is really appealing, especially with the advent of web modules. FF would make it possible to build better support for a module-level cache.
I would like to also echo this, although I'm not too concerned about losing foreign fetch for the time being. The ability to install a service worker via a Link header opened up a lot of very interesting possibilities for delivering dynamic client-side caching logic via CDNs that act as a proxy for the first-party domain, without having to compromise security or mutate HTML document responses. The above-mentioned cache-digest polyfill is one such example.
Foreign Fetch is being removed from the Service Worker spec. (w3c/ServiceWorker#1188 (comment))
TBR=rsesek
Bug: None
Change-Id: If84db57f7d62d065e389f97bbc100ae5d5e6f84b
Reviewed-on: https://chromium-review.googlesource.com/669740
Reviewed-by: Chris Palmer <palmer@chromium.org>
Commit-Queue: Chris Palmer <palmer@chromium.org>
Cr-Commit-Position: refs/heads/master@{#502510}
This can be closed, as per #1207, I believe. But I would very much like a clearer path on what challenges would need to be addressed to re-open it, and to hear thoughts on what can be done to help advance this hugely important capability, which greatly facilitates, and is necessary for, a useful offline web.
You'd need to come up with a way of adding them without making tracking worse.
Does anyone know the status/direction of foreign fetch? It still seems incredibly useful, and I'd love a chance to build something with it. Are there plans for future Origin Trials in Chrome? Or is this feature entirely deprecated, with no plans for it to be implemented any longer?
Chrome has no plans currently to reimplement foreign fetch.
@jozanza what are you wanting to do with it?
@jakearchibald unless I'm misunderstanding how it works, foreign fetch seems like a huge boon to WebRTC. I'd want to use it to cache offers/answers for RTCPeerConnection signaling. And once connected, it could also be used to scalably relay media streams without a CDN.
@jozanza Are you speaking as the person who'd own the RTCPeerConnection signaling server, or the person who'd run the site using the RTCPeerConnection signaling server?
@jakearchibald I started writing a reply and was having a hard time describing what I was thinking clearly, so I wrote some pseudocode instead. It shows more or less what I was hoping would be possible:

```js
async function sendMediaToPeer({ config, from, to }) {
  // Create a peer connection
  const pc = new RTCPeerConnection(config);
  // Get the user's video/audio stream
  const stream = await navigator.mediaDevices.getUserMedia({
    video: true,
    audio: true
  });

  // Do all the signaling with a foreign fetch service worker 🤞
  pc.onnegotiationneeded = async () => {
    // Create the offer and apply it locally
    const offer = await pc.createOffer();
    await pc.setLocalDescription(offer);
    // Gather all ICE candidates for simplified (non-trickle) signaling
    while (pc.iceGatheringState !== "complete") {
      await new Promise(f => setTimeout(f, 100));
    }
    // Send the offer; the request gets intercepted by the service worker,
    // which can store the offer with the Cache API
    const res = await fetch(`${API_ROOT}/${from}/offers`, {
      method: "POST",
      headers: { "content-type": "application/json" },
      // Use localDescription so the gathered candidates are included
      body: JSON.stringify({ to, offer: pc.localDescription })
    });
    if (!res.ok) throw new Error("Could not create offer");
    // The offer is now cached in the service worker, so we can just poll
    // for an answer from the intended peer
    // (they would use the service worker to post their answer)
    while (true) {
      const answerRes = await fetch(`${API_ROOT}/${to}/answers`);
      const answer = await answerRes.json();
      if (answer) {
        // Aaaand we're connected! :)
        await pc.setRemoteDescription(answer);
        break;
      }
      // Wait a bit before polling again
      await new Promise(f => setTimeout(f, 1000));
    }
  };

  // Adding the tracks (addTrack rather than the deprecated addStream)
  // is what triggers onnegotiationneeded above
  for (const track of stream.getTracks()) pc.addTrack(track, stream);
}
```

tl;dr It'd be pretty amazing if all of the signaling between peers could be done in a serverless manner by relying on a common foreign fetch service worker storing offers/answers with "a single, authoritative cache instance". And the signaling code could obviously be even cleaner if the foreign fetch service worker also supported …
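For context, here is a sketch of what the other half of that idea might have looked like: the signaling origin's foreign fetch worker storing posted offers/answers in the Cache API and answering polls. It uses the removed API shape from the origin trial, so treat it as hypothetical; the endpoint layout mirrors the `${API_ROOT}/${id}/offers` and `/answers` URLs above.

```js
// Hedged sketch: signaling worker on the (hypothetical) API_ROOT origin.
self.addEventListener('foreignfetch', event => {
  event.respondWith(
    handleSignaling(event.request).then(response => ({
      response,
      origin: event.origin   // make the response visible to the caller's origin
    }))
  );
});

async function handleSignaling(request) {
  const cache = await caches.open('signaling');
  const url = new URL(request.url);
  if (request.method === 'POST') {
    // Store the posted offer/answer under its URL; a real design would
    // key by sender and recipient rather than by URL alone.
    await cache.put(url.href, new Response(await request.text(), {
      headers: { 'content-type': 'application/json' }
    }));
    return new Response(null, { status: 201 });
  }
  // Poll: return the stored description, or JSON null if nothing is there yet.
  const stored = await cache.match(url.href);
  return stored || new Response('null', {
    headers: { 'content-type': 'application/json' }
  });
}
```

Worth noting that with double-keyed storage, two peers embedding this worker from different top-level sites would not actually share the cache, which is exactly the limitation discussed earlier in this thread.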
Does this mean that if my site is on example.com and consumes api.example.com, the API requests cannot be intercepted by fetch due to cross-domain limits?
They can be intercepted by example.com's service worker, but not by api.example.com's.
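To illustrate, a minimal sketch using the hostnames from the question: the first-party worker on example.com can handle those cross-origin API requests in its own `fetch` handler; it just has to be served by example.com rather than by the API origin.

```js
// Service worker registered by example.com, intercepting its own requests
// to api.example.com. Cache name is arbitrary.
self.addEventListener('fetch', event => {
  const url = new URL(event.request.url);
  if (url.hostname === 'api.example.com' && event.request.method === 'GET') {
    event.respondWith(
      caches.open('api').then(async cache => {
        const cached = await cache.match(event.request);
        if (cached) return cached;
        const response = await fetch(event.request);
        // Only cache successful (CORS-readable) responses.
        if (response.ok) await cache.put(event.request, response.clone());
        return response;
      })
    );
  }
});
```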
Very sad to hear that this has been dropped. We, the IPFS Project, were super excited about this feature -- ipfs-shipyard/ipfs-service-worker-demos#5 --

Making the base storage content-addressed rather than key-value would enable the browser to avoid false cache misses.

Foreign Fetch would enable an IPFS node to run in a Service Worker and become the ideal content-addressed CDN. You can try a taste of this by visiting https://js.ipfs.io. Once enabled, a js-ipfs node is spawned in a Service Worker, and any request to js.ipfs.io/ipfs/SomeHash is routed through the js-ipfs node itself. With foreign fetch, we would be able to load webpage assets from js.ipfs.io: if the Service Worker was installed, the browser could cache them or, even better, serve them locally to other browsers; if the Service Worker was not installed, the request would hit one of the IPFS Gateways. I believe this to be a fantastic use case for foreign fetch, as it would enable web assets to be loaded through a DWeb protocol that verifies their integrity and can serve the same fetched assets to nearby peers. Would love to have your opinion and to know if there is still time to make the case.
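A rough sketch of the js.ipfs.io setup described above, written as a plain same-origin service worker (which is what remains possible without foreign fetch). The bundle URL and the `Ipfs.create()` / async-iterable `ipfs.cat()` API shape are assumptions based on js-ipfs builds of that era and may not match current releases.

```js
importScripts('https://cdn.jsdelivr.net/npm/ipfs/dist/index.min.js'); // assumed bundle location

// Spawn one js-ipfs node for the lifetime of the worker.
const nodePromise = self.Ipfs.create();

self.addEventListener('fetch', event => {
  const url = new URL(event.request.url);
  if (!url.pathname.startsWith('/ipfs/')) return;
  event.respondWith((async () => {
    const ipfs = await nodePromise;
    const chunks = [];
    // Content addressing means the returned bytes are verified against the hash.
    for await (const chunk of ipfs.cat(url.pathname)) chunks.push(chunk);
    return new Response(new Blob(chunks));
  })());
});
```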
In case it helps this issue be considered for re-opening: there are cases where one wants a web app to work on a standalone basis, with all code inline and no additional files required (i.e. no separate service worker JavaScript file). Currently a service worker cannot be installed unless a separate file is used (unless I'm mistaken and the code contained in that script could be added inline to the .html file, which is what I am trying to do). https://stackoverflow.com/questions/47163325/register-inline-service-worker-in-web-app?rq=1 P.S. I was experimenting with also getting the JSON manifest inline. The application I am using is browser based but works the same whether online or offline, so it should meet the definition of a progressive web app. However, not being able to add the service worker without creating additional files is causing certain tests to fail when running a Lighthouse audit in Chrome.
Seems like this would have been a great way to give web apps the ability to fetch content from mirrored sources if the primary source is down or blocked. The service worker should be able to work its way down a list of mirror domains, perhaps falling back to something like an IPFS request when all mirrors fail. Perhaps there should be an exception, allowing this to be done for assets which provide a content hash or are otherwise uniquely named, so as to not require further keying. If it's a security concern, then require the asset name to match the content hash and enforce a check before updating the cache, IPFS-style. This would enable dramatically more robust apps which might take advantage of P2P networks to fetch resources locally when available.
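Something in that direction is already possible inside a first-party worker. A hedged sketch of the mirror-plus-hash idea; the mirror list and naming scheme are hypothetical.

```js
// Try each mirror in turn and, for content-addressed assets, verify the body
// against the expected SHA-256 before using it.
const MIRRORS = ['https://cdn-a.example', 'https://cdn-b.example'];

async function fetchWithMirrors(path, expectedSha256Hex) {
  for (const origin of MIRRORS) {
    try {
      const response = await fetch(origin + path, { mode: 'cors' });
      if (!response.ok) continue;
      const body = await response.clone().arrayBuffer();
      const digest = await crypto.subtle.digest('SHA-256', body);
      const hex = [...new Uint8Array(digest)]
        .map(b => b.toString(16).padStart(2, '0')).join('');
      if (hex === expectedSha256Hex) return response;  // integrity verified
    } catch (e) {
      // Network failure or blocked mirror: fall through to the next one.
    }
  }
  throw new Error('All mirrors failed or returned mismatched content');
}
```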
I don't see how that could work now that browsers bucket storage & cache per origin. |
Storage security does limit a lot of use cases, but I continue to believe very strongly that there's a ton of use cases for foreign fetch that make sense even with per-origin sharding. For example, if I want to make an offline-capable recently-listened/"scrobbler" service that your/any website can post currently-playing music to, what are my options? I can use something custom and fancy like Comlink and its implicit internal protocols to message an iframe. But I'd much rather, when I load my music player, give it a URL it can use to post tracks into. It can do whatever login flow is needed, and from then on the music player app would be able to use foreign fetch to post new tracks or get a list of its own recently-listened tracks from my recently-listened service. The SW can store and forward this data.

The architectures enabled by foreign fetch are so much better than the alternatives, and make the web really offline-capable. I hope beyond hope we can stop saying that per-origin caches make foreign fetch pointless. Yes, that does obstruct some use cases, but for many, re-downloading the data and re-logging-in is not a problem at all, and the architecture is 100x "more web", more HTTP-centric, versus having to get fancy. Developers know HTTP. Please, let's give them the capability to use HTTP across origins via service workers.
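A sketch of how that store-and-forward scrobbler might have looked on the scrobbler's own origin under the removed API. Hedged: the `foreignfetch` shape is from the old proposal, and the `/scrobbles` endpoint, cache name, backend URL, and sync tag are all made up.

```js
// Accept POSTed tracks from any player site and queue them while offline.
self.addEventListener('foreignfetch', event => {
  if (new URL(event.request.url).pathname !== '/scrobbles') return;
  event.respondWith((async () => {
    const cache = await caches.open('scrobble-queue');
    // Queue each posted track under a unique key so nothing is overwritten.
    const key = '/queued/' + Date.now() + '-' + Math.random().toString(36).slice(2);
    await cache.put(key, new Response(await event.request.text()));
    return { response: new Response(null, { status: 202 }), origin: event.origin };
  })());
});

self.addEventListener('sync', event => {
  if (event.tag !== 'flush-scrobbles') return;
  // Replay queued scrobbles to the backend once connectivity returns.
  event.waitUntil((async () => {
    const cache = await caches.open('scrobble-queue');
    for (const request of await cache.keys()) {
      const body = await (await cache.match(request)).text();
      await fetch('/api/scrobbles', { method: 'POST', body });
      await cache.delete(request);
    }
  })());
});
```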
Foreign fetch could be used in development to simulate a server running within the browser. For a runtime like Deno, the service worker and server-side code would be almost the same. The locally running code could be debugged using browser tools. Generally speaking, I would imagine there could be many applications where this could be useful. You could view it as extending edge or 'originless' computing right into the browser.
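As a small illustration of the "server in the browser" idea, here is a minimal same-origin sketch; the route table and handler are hypothetical, but the same handler functions could in principle be shared with a server runtime.

```js
// Route /api/* requests to in-worker handler code during development.
const routes = {
  '/api/hello': async request => {
    const name = new URL(request.url).searchParams.get('name') || 'world';
    return new Response(JSON.stringify({ greeting: `hello, ${name}` }), {
      headers: { 'content-type': 'application/json' }
    });
  }
};

self.addEventListener('fetch', event => {
  const { pathname } = new URL(event.request.url);
  if (routes[pathname]) event.respondWith(routes[pathname](event.request));
});
```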
Need this to be put back into the spec in some fashion; the sooner the better. I'm trying to build a single sign-on (SSO) solution that relies upon a service worker at an SSO address; other services can query the SSO URL and receive a response from the service worker as to the user account in the browser. This offline functionality is required to give a browser-based solution the same capabilities as a native application. Aside from my SSO project, I can see limitless potential in offline, browser-local APIs powered by service workers. There are numerous examples in previous comments; what is needed from me or from them to get this back into the limelight?
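For what it's worth, a compact sketch of that SSO responder under the removed API; the `/whoami` endpoint and the way the session is stored are hypothetical, and the `foreignfetch` event shape is recalled from the old origin trial.

```js
// Answer cross-origin "who is signed in?" queries from a locally stored session.
self.addEventListener('foreignfetch', event => {
  if (new URL(event.request.url).pathname !== '/whoami') return;
  event.respondWith((async () => {
    const cache = await caches.open('sso');
    const session = await cache.match('/session');
    const body = session ? await session.text() : JSON.stringify({ user: null });
    return {
      response: new Response(body, { headers: { 'content-type': 'application/json' } }),
      // Expose the response to the origin that asked.
      origin: event.origin
    };
  })());
});
```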
I want to know how I can access images from my server under MV3; I can't get access to my images on the local network.
Discussed in #1173.
Due to problems with double-keying, unclear trial results, and unclear use-cases, we're going to remove foreign fetch from the spec (and fetch spec).
We can reexamine use-cases later and look to reintroduce it in another form once we have better data.