
Preventing server-forced updates #822

Open
mappum opened this issue Jan 21, 2016 · 24 comments

@mappum

mappum commented Jan 21, 2016

I'd like to propose a change to the ServiceWorker spec that allows for applications which cannot be forced by the server to install updates.

Background

Native applications have a significant security advantage over web applications: they control their own update lifecycles. New code is not loaded by default, so if there is no auto-update functionality built in, new versions are only adopted at the user's will. By contrast, web applications load new, untrusted code from a server each time the user navigates to them. This increases the attack surface for malicious code to be deployed by whoever has control of the server (hackers, political actors, rogue service administrators, etc.).

The Application Cache API allowed this: servers could set the AppCache manifest max-age far in the future, which would prevent the browser from contacting the server. However, servers configured to cache aggressively would cache these manifests and accidentally brick apps, leaving developers unable to deploy fixes. Since this problem was common, the ServiceWorker spec caps the effective max-age at 24 hours, after which an update check is guaranteed to happen.

Use Cases

Server-forced updates need to be prevented in the following cases:

  • Handling private keys or other sensitive data
  • Apps that rely on anonymity or encryption, such as TOR or secure messaging
  • ServiceWorkers which verify integrity and authenticity of updates loaded from an untrusted CDN

For an example of code that uses this ability of the Application Cache API, see hyperboot (written by @substack, the author of browserify).

Solutions

In the ServiceWorker spec, preventing forced updates must only happen when explicitly requested, so as not to cause the accidental bricking tragedies seen with AppCache.

Possible methods for adopting this feature:

  • Removing the 24-hour cap for the proposed Service-Worker-Max-Age header (Introduce Service-Worker-Max-Age header #721). Servers likely won't set this header by default.
  • An additional Service-Worker-No-Forced-Updates header, that should only be used to specifically opt in to this feature, which will remove the 24-hour cap.

These changes still allow for applications to trigger their own updates by unregistering their ServiceWorker and registering the new version. It may also be beneficial to add a method which triggers a reload of the registered ServiceWorker and does the standard byte-for-byte comparison to see if there is an update.

Thank You

Thanks for considering this proposal. I believe this small change would make ServiceWorkers much more powerful, and bring the web a huge step closer to parity with native applications.

@mappum

mappum commented Jan 21, 2016

Side-note: it was a great idea by you guys to develop this spec on GitHub. 👍 This allows for a very open discussion. (I have never made a proposal like this to any sort of standards body before.)

@jakearchibald

I can't quite get my head around the use cases.

Handling private keys or other sensitive data
Apps that rely on anonymity or encryption, such as TOR or secure messaging

How does the current update model prevent this?

ServiceWorkers which verify integrity and authenticity of updates loaded from an untrusted CDN

SW scripts are same-origin for security reasons. When you're adding things to the cache you can already verify integrity, although CSP is a better mechanism for this.

@mappum

mappum commented Jan 26, 2016

I can't quite get my head around the use cases.

Imagine we have created a PGP app to encrypt messages, and we serve it at https://mypgpapp.com. It generates private keys and stores them in IndexedDB, then lets the user encrypt messages with those keys so they can copy their encrypted messages to send via email.

So far, our security is pretty good. Since we used HTTPS, we can be reasonably sure the user won't load a backdoored script from some attacker's server. However, what happens if our server gets compromised? If an attacker had access to it, they could deploy an update to the registered SW script which fetches code that will upload the private key somewhere. Now it's game over for any user who visits the app and has their SW script caching expire.

How does the current update model prevent this?

By this example I meant that when the app is being hosted on servers owned by some other party, the host who actually runs the machines could deploy backdoored code. But if a SW prevented forced updating, it could be built to check that the updated version is cryptographically signed by the author before accepting updates. Then, the author can sign the resources offline, and never has to give the private key to the host to have the users verify authenticity.

CSP doesn't solve this since the malicious scripts are coming from a whitelisted domain.

(I suppose I didn't need to mention integrity, authenticity was the important part).

@jakearchibald jakearchibald added this to the Version 2 milestone Jan 29, 2016
@delapuente

delapuente commented Jun 7, 2016

Related to, if not a dup of, #761.
The use case of giving the user the ability to optionally stop an update is enough for me. Consider: my SW could offer the choice when only the patch number changes, but force the update when the minor or major number increases.
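The patch-vs-minor/major policy sketched in this comment could look something like the following pure helper (a sketch; the version format and policy are assumptions for illustration):

```javascript
// Sketch: let the SW silently defer patch-level updates, but always
// accept updates that bump the minor or major version.
function mustForceUpdate(currentVersion, newVersion) {
  const [curMajor, curMinor] = currentVersion.split('.').map(Number);
  const [newMajor, newMinor] = newVersion.split('.').map(Number);
  return newMajor !== curMajor || newMinor !== curMinor;
}

console.log(mustForceUpdate('1.2.3', '1.2.4')); // false: patch bump, optional
console.log(mustForceUpdate('1.2.3', '1.3.0')); // true: minor bump, forced
```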

@asutherland

I resonate with this use-case (I worked on the Firefox OS mail app which similarly wanted what amounted to signed-offline app packages), but I think giving the service-worker the ability to defeat upgrades of itself is the wrong way to handle it. (And I think this issue is roughly equivalent to 761, although this issue more concretely describes motivating use-cases.)

The security model of https is trust in an origin, authenticated by the certificate authority (CA) infrastructure, with compromises being handled via OCSP (stapling) and/or short-lived certificates. Using a trust-on-first-use service worker that can deny upgrades is a clever way to attempt to embed an offline signature based security/trust model inside the existing online signature trust model. Some people have even come up with an exceedingly clever approach also combining HPKP and AppCache, see https://www.reddit.com/r/encryption/comments/4027ci/how_2_spacex_alums_are_using_encryption_for_good/cyywmc8

The major risk of a prevent-update feature, as covered by Jake, in #761 is allowing an evil/attacker-controlled service worker to extend the duration of its nefarious residency, potentially permanently. And also as Jake suggests, it seems better to make the validation part of a standard already concerned with validating contents, namely CSP which already has precedent with its SRI-like "valid hash" mechanism https://w3c.github.io/webappsec-csp/2/#source-list-valid-hashes. This also allows the browser to alert the user, helping avoid spoofable UI and related confusion/fatigue.

Additionally:

  • There are potential interactions such as shift+reload bypassing service-workers, among others. It's a lot to ask a complex spec like service-workers to adopt a fundamentally divergent use-case that in many cases is indistinguishable from an attacker's dream and a developer-who-makes-typos' nightmare.
  • Preventing updates is a shallow security model on its own. The origin is being defended exclusively by the service worker intercepting all requests and maintaining itself intact. If an attacker does gain control of the origin and can cause the service worker to fail to respondWith or is bypassed via some other bug or new feature (like a bypass header), they have access to all of your app's local storage at the origin. What you really want is your own distinct origin with its own storage bucket that is affirmatively guarded by verifying a signature before letting any code/document run in the origin. Gecko has internal support for such things hung off an existing origin using OriginAttributes, or just if a separate scheme and thereby origin is created like "signed-package://KEY", etc. This would have been surfaced by the now-defunct Firefox OS "new security model" at https://wiki.mozilla.org/FirefoxOS/New_security_model#Verifying_signatures_-_bug_1153422 and is otherwise not something web content can currently avail itself of.

The big wrinkle is that what is really needed is a cross-browser effort with this specific use-case in mind, because it really is its own big-picture idea. I believe there are many people who care about the use-case, but I suspect most browser engines are focusing their efforts just on getting service workers and progressive web apps going. The best hope for this use-case right now is browser extensions/add-ons. All browsers seem to be converging on WebExtensions, and at least some browsers (ex: Firefox) allow easy disabling of extension auto-updates. This is important since the trust-model looks like it depends on the extension marketplaces' authentication mechanisms and them not being compromised themselves, which is strictly weaker than an air-gapped private key. (NB: Pre-WebExtensions Firefox extensions can, however, be cryptographically signed.) This is clearly nowhere near as good as a packaged-apps model that does not require installation, like Firefox OS was shooting for, but it seems to be the most realistic cross-browser solution.

@twiss

twiss commented Nov 4, 2017

[Sorry for the 1.5 year late reply :) coming here from #1208, which is similar to this issue but instead of preventing the update, the old Service Worker warns the user about the update.]

@asutherland and others have brought up valid concerns about this approach, e.g., what if the user force-refreshes the web app, what if they open it in an incognito window, what about when they open it for the first time, what about on a new device. I agree with all of those, in fact, I proposed a different solution built on Certificate Transparency a while back that would solve those things, and if browsers implemented that, I would be very happy too :)

However, a solution built on Service Workers has the big advantages that

  1. It requires very little extra work on the part of browser developers
  2. It potentially requires less work on the part of web app developers, as well. For example, instead of requiring them to sign their code with a public key, a SW solution can just require that updates be pushed to GitHub (or it can require both). A solution in the browser is unlikely to be flexible enough to allow that. And since many open source web apps already do that, it can be relatively more of a "set and forget" solution.

And I think it would solve the biggest part of this problem, especially because any attempt to send the user unsigned code runs the risk of detection by the SW, unless the server can somehow detect that the SW is being bypassed. That might be possible in the case of force-refresh (Cache-Control headers), though. Maybe we can use the SW to always send force-refresh-like Cache-Control headers?
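The "always send force-refresh-like Cache-Control headers" idea could be sketched like this; the header value and the SW wiring are assumptions, and whether a force-refresh actually respects SW-supplied headers is exactly the open question above:

```javascript
// Sketch: rewrite response headers so the HTTP cache never reuses the
// app shell without revalidation (standard HTTP cache semantics).
function withNoStore(headers) {
  const h = new Headers(headers);
  h.set('Cache-Control', 'no-store, max-age=0');
  return h;
}

// In the service worker (browser context) this would be applied to every
// response the SW hands back:
//
// self.addEventListener('fetch', (event) => {
//   event.respondWith((async () => {
//     const res = await fetch(event.request);
//     return new Response(res.body, {
//       status: res.status,
//       headers: withNoStore(res.headers),
//     });
//   })());
// });
```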

And while some have indeed used browser extensions to solve this problem (Signal, blockchain.info and mega.nz come to mind, although the first two of those are Chrome apps, which are getting phased out; I myself made an old-style Firefox addon), it would be preferable if all users were protected, and extensions don't even protect against most of the loopholes ("what if they open it in an incognito window, what about when they open it for the first time, what about on a new device").

@MisterTicot

Hi,

I want to secure my app by storing each asset's SHA-256 sum on a blockchain, then only allowing an update when the online files match those hashes. This way, releases have to be signed by the author/auditors before an update happens. This is an innovative service worker/blockchain use case that could bring a new level of security to web applications, because an attacker would have to gain access to both the server and the private keys to perform a mass attack.

However, this is impossible to do as long as service worker updates are forced. Does anyone know of an alternative way to achieve this, or a feature proposal that could open up this possibility?

I understand the issue of an attacker being able to leverage the service worker's self-control to durably implant malicious code. I'm not sure how this could be mitigated.

@MisterTicot

I've been thinking a bit about it and I believe we can solve this one properly.

The core idea is that the installed service worker would be able to check the newly available version before installation, and would have the power to prevent it if certain conditions are not met (essentially, if the new version is judged insecure). This should be an exceptional event, especially since securing service worker updates would make the attack pointless. I'm basing this claim on the use cases proposed for this feature so far.

On the other hand, we don't want an attacker to install a corrupted service worker that wouldn't allow any update, durably implanting malicious code. This attack could happen at scale on websites which didn't implement an integrity check on service worker updates. It could also happen on a supposedly secured website on a per-user basis, meaning by altering someone's browser files through physical access or by exploiting a flaw in the operating system. This last attack doesn't scale, but it has the viciousness of breaking a system in which the user has the highest confidence, which could lead to the biggest damage (think banking/cryptocurrency apps).

A natural solution respecting those two requirements would be to warn the user when an update has been blocked. This highlights that an exceptional situation is happening and that something may need to be done about it. The active service worker would have to provide a message explaining why it blocked the update, and the user could inform themselves on social media/forums about what's happening. An option would be provided to update anyway.

The banner could use the same UI as the one proposing PWA installation. I would advise a sober banner, but with the message saying the update has been blocked in red. I would avoid something like what we've seen in the past, where everything becomes orange and red and conveys more fear than knowledge to non-technical people. But some attention-getter is needed, as we want to be sure the user notices the exceptional situation.

I would make the banner non-intrusive and let navigation continue. If closed, a new pop-up would appear on the next normal cache update if the situation is still abnormal (so every 24 hours, or more often depending on cache headers). Another strategy would be to block navigation entirely. This would prevent corrupted code from running at all in some cases (a locally corrupted service worker). On the other hand, it could create an incentive to allow the update no matter what, just so the website can be used. I think this aspect needs further thought.

Technically, I would expect this feature to be available from the service worker environment. There could be an event, so we can use self.addEventListener('updateAvailable', ...). waitUntil would be available to allow for further fetching/computation. The 'updateAvailable' event would be triggered again at the normal cache interval, allowing an update that was first blocked to go through once it has been verified as valid at a later time.

The function blocking the update could be more widely available as navigator.serviceWorker.rejectUpdate(message). The hard-coded reaction of the browser, namely showing a banner, would prevent any abusive or hackish use of this functionality.

I have no technical knowledge of how service workers are currently implemented, so this may not really fit. I'm basing this mostly on what I would find convenient and logical to use as a website developer. It would be nice if someone could confirm that this actually makes sense implementation-wise.

I'd like to ask anybody interested in this feature to participate and challenge the design I'm proposing; better to think it through carefully before going ahead. I'd also like to hear from implementers whether they would consider this solution acceptable and implementable. I think we have a nice opportunity to push the usefulness of this new web technology further, so I hope we can go ahead and add this to the spec.

@MisterTicot

It's been three months since I proposed a fix for this, and I got no answer. Is there anything I can do so we can move on with this? Should I go ahead and make contact somewhere else? Or is it already settled?

@asutherland

This issue is on the upcoming ServiceWorkers F2F agenda for discussion in October, although I can't promise any resolution.

Note that my previous comments about the security model of the web still stand; in particular, the Clear-Site-Data header, since implemented by Chrome and Firefox, has a "kill switch" which will wipe out all storage for an origin, including ServiceWorkers. That's an intentional feature that a SW would never be able to inhibit, even if normal updates could be blocked or delayed.

There is a new experimental option that seems better suited to your use-case. There's a very experimental Mozilla project https://github.com/mozilla/libdweb that enables WebExtensions to implement custom protocols like IPFS (see the blog post).

In regards to your proposal, I understand the core of your suggestion to be that a banner would be presented if the potentially-malicious SW blocks an update and that the potentially-malicious SW is able to provide some of the text to be displayed in the banner. The user would then reach a decision informed by searching the web and asking other people on social media what they should do.

The issue with prompting the user in cases like this is that the user is frequently unable to make an informed decision about what is going on. Especially if the attacker can supply scary text like "There is a virus in the update, don't update or your computer will be permanently damaged!" or text that leverages update fatigue like "The update will take 10 minutes to install, are you sure you want to update?" or unique strings that the attacker can use to game any search the user would make so that the guidance they find on the internet is from the attacker. This isn't really solving the problem, it's making the problem the user's problem.

@MisterTicot

Thank you for the answer and feeding the search.

My proposal indeed has nothing to do with bypassing the clear-browser-cache functionality.

One of my premises was that in the legitimate case, the banner wouldn't show for long, as a server takeover is likely to be fixed within hours, or at worst within days.

On the other hand, a malicious SW would continue to pop a warning forever until updated, incentivising the user to do something about it. Non-technical users are likely to accept the update after a few days and wipe the malicious SW just to get rid of the banner. I'm not saying that's great, just that it leverages blind behavior.

Now, there may be other ways to leverage this difference in update-denial timespan, like setting a reasonable time limit on rejection of 24 or 48 hours.

The problem I see in these cases is that domain owners may know users have a malicious SW but remain powerless to do anything about it.


Another option I can see (if it's acceptable for browsers) is having the SW rejection option be enabled for a website through a DNS TXT field.

In this scenario, the malicious-SW squatting attack against a website that hadn't enabled SW update rejection could only work by taking over the DNS, and only for as long as control over the DNS lasted, since the legitimate owner would be able to switch off the rejection option, triggering SW renewal.

The TXT field would either be nonexistent (option not enabled) or a number representing for how many minutes an update rejection remains valid, perhaps with a reasonable maximum limit.

@mappum

mappum commented Oct 22, 2018

Just thought I should chime in as I was the one who opened this issue:

I now understand the rationale in preventing updates. I previously thought the 24-hour limit was to prevent the accidental bricking of apps by well-meaning server admins, but really it's about preventing an attacker from intentionally bricking the app forever (so my proposed solution of using response headers doesn't really help). I no longer think it makes sense to be able to fully prevent updates in the Service Worker API.

BTW, I have some ideas about how to accomplish what I want with a Subresource Integrity attribute for iframes, hopefully that will be implemented some day.

@jespertheend

jespertheend commented Jan 18, 2020

I can't quite get my head around the use cases.

The exact use case @mappum was describing seems to be what is being discussed over here. Although I can imagine that not only E2EE apps can benefit from this: almost any downloaded application uses code signing to verify updates nowadays.
Service Workers require HTTPS by default, which I guess is the reason why code-signed updates aren't really a thing on the web, and perhaps they're not really necessary. But I believe some applications could still benefit from this.

Perhaps being able to prevent server-forced updates is too generic. Maybe what we need here is something more specific to code-signed updates, so that only Service Workers that make use of code signing may prevent server-forced updates.
I'm not really sure how this would work, though, because that way a malicious script could still block a Service Worker from getting updated by introducing code signing for the first time.

@hlandau

hlandau commented May 18, 2023

Indeed, I previously proposed a solution for this problem, namely simply adding SRI support to service workers: w3c/webappsec-subresource-integrity#66

Created an issue in the correct repository for discussion: #1680

@valpackett

SRI cannot help this use case as SRI just uses hashes, not asymmetric signatures. SRI is only useful for cross-origin security.


I would like to implement signature verification in a service worker in my current project where, unlike in a typical web app, the client code is user-controlled: the user would upload a signed app update to the server, and the service worker would check the signature on the files coming from the server, allowing a trust-on-first-use model with regard to the server… if there were a "paranoia mode" for service workers that disabled all the escape hatches, making sure that the service worker could strip Clear-Site-Data and that update requests for the service worker would always go through it and never bypass it (and shift+reload would, I guess, ideally display a prompt about breaking the security? Or just get turned into a special message for the worker?).

Please, please, please give us the choice.

For the vast majority of web apps the current way of working, which would indeed "prevent an attacker from intentionally bricking the app forever", is the appropriate one.

For our paranoid E2EE apps with unconventional update models, we would like to opt in to full control of the update process with no escape hatches.

@hlandau

hlandau commented Jun 28, 2023

SRI cannot help this use case as SRI just uses hashes, not asymmetric signatures. SRI is only useful for cross-origin security.

Not true, as SRI can be used to specify an immutable bootloader which implements the asymmetric signature verification.

@valpackett

Ah, that's clever. But either way, whether triggered by SRI or by passing some mode: 'paranoid' flag to register(), what's needed is a way to opt out of escape hatches.

@valpackett

Oh, one key thing I've just realized: to not allow attackers to "permanently screw up" a compromised site that was using the normal mode (or no SWs at all), opting in to the secure mode should require at least a permission prompt, and probably be reflected in the browser UI (such as a "lock with refresh arrows" icon in place of the normal lock icon).

@JamesTheAwesomeDude

JamesTheAwesomeDude commented Oct 4, 2023

While this doesn't solve the problem of preventing Service Worker updates, here's a rough sketch of an idea that might (with user co-operation) at least allow cobbling together a ToFU level of security to detect unauthorized updates, vet authorized updates, and fail-safe if an unvetted update occurs, on existing browsers:

  • Call the actual "sensitive application data" (e.g. PGP private key) CS0.
  • Each and every service worker is installed with a unique per-installation random secret, distributed as a "hard-coded constant" in the Service Worker's source code, wiped immediately from memory after generation (since at this time we're just aiming at ToFU security, say the server hasn't been compromised yet); call this CS1. This will fill a similar role to the AEM machine validation secret, serving as living proof that the service worker is the same as it was when it was installed.
    • The endpoint that distributes the service workers must GENERALLY serve an HTTP 4XX response, except during active installation (which is indicated by an HTTPS-enabled cookie which is set by JS immediately before register() is called, and which is always unset by any non-4XX response from that endpoint)
  • During "app setup" (Service Worker installation time), randomly generate a brief "challenge" passphrase and either generate or make the user generate a "client encryption" passphrase; call these P1 and PE.
  • (Obviously, the service worker also does all the other things needed to keep a web application securely/immutably pinned.)
  • Add Enc(CS1, P1) to LocalStorage. You need CS1 to reconstitute P1 from LocalStorage.
  • Add Enc(PE, Enc(CS1, CS0)) to LocalStorage. You need PE and CS1 to reconstitute CS0 from LocalStorage.
    • For the love of god, please use at least half the available RAM for the KDF of the outer encryption layer here.
  • Tell the user that they MUST NOT EVER disclose PE without verifying P1 first. Remind them that any request for PE which is not accompanied by P1 is certainly from an adversary.
  • Each time the application "starts up" (the page is visited), display P1 (maybe masked behind a clickwall with a reminder about shoulder-surfers) and get PE input from the user; use that to decrypt CS0 into memory.
    • While the KDF is running, display a reminder to always validate P1 before entering PE
  • When an application update is needed, the current version of the application follows this process:
    1. Fetch (using some API other than the service worker update URL) an alleged upcoming update of the service worker*, which embeds pins for the alleged new version of the application
    2. Fetch and vet a hash of the alleged new version of the service worker*. "Vetting" could include anything from checking it against a digital signature from the developer's offline signing keys, or multi-person geographically distributed wrench-resistant signatures, to checking multiple canaries, to getting the user to actually affirm its value.
      • *Technically, this will just be a hash of "the service worker but with a dummy value for CS1".
    3. Once the new update's hash has been approved, execute this critical path subroutine:
      1. Assert that the current service worker has been installed with updateViaCache to "all"
      2. Remind the user again that if they are prompted for PE again soon, they MUST NOT disclose it unless P1 validates again
      3. Fetch a copy (from a different URL than the Service Worker's installation path) of the new service worker's source code
      4. Compare the hash of the new service worker's source code to the hash the user just approved
      5. If the hash compares unequal to the hash which the user had vetted, ABORT
      6. Modify the source code, replacing the embedded CS1 [some dummy value] with the current value of CS1
      7. Do an offline-update of the SW, inserting the "modified" service worker that comprises validated source code plus device-specific value of CS1
      8. Restart the application (e.g. reload the page)

This should maybe be safe, since while an adversary can overwrite the existing service worker by simply serving a 200 OK response that would blow away CS1 (cf. an Evil Maid failing by blowing away TPM-protected secrets when factory-resetting the PC), I don't think there's any way for them to actually get at the source code of the currently running service worker.

The update mechanism is hella janky, though, and I'm less sure of it (especially since the "offline update" of a service worker isn't proven yet). The initial installation sort of has to include a leap of faith (that's ToFU), but I had hoped to allow the application to reduce the surface area the user has to think about to nothing more than matching an on-screen hash to an otherwise-known-good value.


Obviously, this is a LOT of engineering work, and an additional UX burden of some song-and-dance. But, of course, at least some UX burden is absolutely necessary due to this crux. Certainly something more "batteries-included" for web applications to keep sensitive data safe from adversarial takeover at an unknown-but-surely-upcoming future date would be nice.

@twiss

twiss commented Oct 12, 2023

Hi all 👋 FYI, I proposed an alternative solution to the underlying goal (of facilitating web apps that don't trust the server) at the WICG (working title: Source Code Transparency). I also presented on it at the WebAppSec WG meeting at TPAC (minutes), and it seemed like there was interest from the browsers there.

Instead of trying to prevent updates, the proposal here is to make all updates transparent and publicly auditable by security researchers, to make it detectable if any malicious code gets deployed by a web app's server. While this doesn't prevent malicious code from being deployed, it strongly discourages servers from ever doing so (due to the risk of reputational damage). The security model here is similar to Certificate Transparency, which has been very successful at detecting and preventing malicious certificates from being issued. And contrary to the proposals here, it wouldn't be TOFU, but protect users from the first time they open the web app (if the browser implements source code transparency, obviously).

Even though I also previously commented in favor of the proposal in this issue, my impression is that browsers are quite resistant to preventing updates entirely, and would actually be more open to a solution dedicated to the underlying problem, rather than something "hacked" on top of Service Workers, even if it's more work to implement.

For the full proposal, please see the explainer. If you have any comments or suggestions, please open an issue or discussion on the repo. If you support the proposal, please leave a 👍 on the WICG proposal. Thanks!

@jayaddison

I'm interested in this functionality too, to protect against disruption of the ServiceWorker within RecipeRadar, a web-hosted Progressive Web Application.

I've reviewed and broadly like @twiss's proposal, and I have a competing proposal that is compatible with delivery over both HTTP and HTTPS (my understanding is that SCT requires TLS).

My competing proposal is that site operators should publish a value in the W3C SRI format to DNS, containing the expected hash of the content body served from the root path of the webserver. I admit that this single-path limitation is somewhat constraining.

Although I've asked the dnsext mailing list whether an additional DNS record type would be suitable for this, I think any kind of standardisation is a long way off, and perhaps unlikely. However, for the sake of demonstration, I can illustrate a snippet of the current contents of DNS TXT records for reciperadar.com, where I've deployed the app's current expected hash:

;; ANSWER SECTION:
reciperadar.com.	3600	IN	TXT	"B=sha512-p69NIfGgc1xBwGBO+91ELEttAujx0vc4iCHJi+GnFLp1fdQGv8YXc1OXxky38vTwqzrG80FYySIvubNOPSGt4A=="

(Note that when I deploy updated code for RecipeRadar, this entry temporarily contains two hashes: one for the cached/stale app, allowing browsers relying on web caches to continue using the stale app until it expires, and one for the current/fresh app. The W3C SRI spec foresaw this requirement for subresources hosted by CDNs, so it supports multiple values at a given hash strength level - see example 7 here)

No web client currently supports this in practice, as far as I'm aware. However, the idea would be that if the integrity check for the root resource fails, then we should be careful about trusting or loading any of the referenced subresources (including the ServiceWorker script), even if they carry SRI hashes (in other words: the root resource hash appears faulty, so all bets are off on the subresource hashes).
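(Editorial aside: to make the proposed check concrete, here is a minimal sketch of the client-side verification, assuming the `B=` TXT record format and multi-hash behavior shown in the example above; the helper names are hypothetical.)

```python
import base64
import hashlib

def sri_sha512(body: bytes) -> str:
    """Compute a W3C-SRI-style sha512 token for a response body."""
    digest = hashlib.sha512(body).digest()
    return "sha512-" + base64.b64encode(digest).decode("ascii")

def root_hash_matches(body: bytes, txt_record: str) -> bool:
    """Check the root document's body against a 'B=' DNS TXT record.

    During a deployment window the record may carry several
    space-separated hashes (stale and fresh app versions), so any
    single match is accepted -- mirroring SRI's support for multiple
    values at a given hash strength.
    """
    if not txt_record.startswith("B="):
        return False
    allowed = txt_record[2:].split()
    return sri_sha512(body) in allowed
```

If this check fails, the client would decline to trust the referenced subresources, including the ServiceWorker script, per the "all bets are off" rule above.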

@jayaddison
Copy link

I've reviewed and broadly like @twiss's proposal, and I have a competing proposal that is compatible with delivery over both HTTP and HTTPS (my understanding is that SCT requires TLS).

Sometimes I write comments too hastily. To clarify: I've informally reviewed SCT -- I'm not a member of any standards bodies, only a keen technologist -- and my proposal is to some extent competing, but there's no mutual exclusion between them (that is to say: both could be deployed in parallel).

@valpackett
Copy link

deploy [SRI] to DNS

This might be really useful for concerns about a CDN becoming malicious while the legitimate operator still controls the DNS, but doesn't do anything for the "don't trust the operator" use case.

similar to Certificate Transparency

Actually sounds compelling!… with the caveat that not every app is public and wants to be transparent, I guess.


Now that I think about it, what if we could have ServiceWorker/page-controlled updates under a special "installed web app" concept? So instead of imposing the controlled update model onto regular https:// origins, there would be an action to "install" an app that would move it to a special app:// origin that is basically Cache-only. (For "add to home screen" this could happen implicitly; otherwise, with a permission prompt for that in particular.)

@jayaddison
Copy link

This might be really useful for concerns about a CDN becoming malicious while the legitimate operator still controls the DNS, but doesn't do anything for the "don't trust the operator" use case.

Thanks @valpackett - yep, that's exactly the kind of scenario that a DNS webintegrity checksum would be intended to guard against; and correct, the mechanism does not protect against an untrusted operator.

(If an application is free-and/or-open-source and reproducibly buildable, then continuous inspection and confirmation of the published integrity hashes may be possible, but that'd be an independent process. Less-transparent sites could continue to offer content integrity.)

I don't feel knowledgeable enough about either ServiceWorkers or web origins to comment on the app:// origin suggestion, but to (try to) show some awareness: my understanding is that HTTPS is preferred for ServiceWorkers, so my proposal's attempt to design an approach that is backwards-compatible to HTTP could be out-of-context / off-topic here.
