
SSL/TLS Only? #199

Closed
slightlyoff opened this issue Mar 20, 2014 · 43 comments

Comments

@slightlyoff
Contributor

There's strong feeling on the Chrome team that (with a localhost exception) Service Workers should be SSL-only.

What say we?

@tgvashworth

What are the arguments for/against as you see them? My view is that it would be a barrier to adoption due to cost and complexity, although that's only off the top of my head.

@jakearchibald
Contributor

Only if there's very good reason. It will hurt adoption.

@alecf
Contributor

alecf commented Mar 20, 2014

Think of the very basic use case of a coffee shop that man-in-the-middles all non-encrypted sites you visit while in the shop, and uses ServiceWorkers to inject ads (or 1x1-pixel trackers, or whatever) into your content... forever.

Even after you leave the coffee shop, the SW is potentially still installed for all sites you visited, and you as a user don't even know if you're "infected" or not.

Yes, we're supposed to re-fetch the SW script every 24 hours, but you can think of lots of workarounds for that. So we need to think through those workarounds and figure out a way for the service worker to behave sanely for a legitimate script while still closing this hole.

One example: because we're also trying to be resilient against bad network connections and captive portals (so that our app behaves sanely if it tries to update on a bad network), I assume the user agent should keep a SW script cached past 24 hours if the script URL resolves to an HTML page during update (i.e. if you're on a captive portal and all pages are redirected). So now the attacker can choose a URL that it has pre-determined will always return an HTML page, and we never update the SW.
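
To make the threat concrete, here's a rough sketch of the kind of worker such a network could inject (the cache name, tracker URL, and injection point are all made up):

```js
// evil-sw.js - hypothetical worker registered by a hostile network
self.addEventListener('install', (event) => {
  // Cache a poisoned copy of the landing page while still on the hostile network.
  event.waitUntil(caches.open('evil-v1').then((cache) => cache.add('/')));
});

self.addEventListener('fetch', (event) => {
  // Answer every navigation from the poisoned cache, long after the user
  // has left the coffee shop, and keep injecting a tracking pixel.
  if (event.request.mode === 'navigate') {
    event.respondWith(
      caches.match('/').then((cached) =>
        cached ? injectTracker(cached) : fetch(event.request)
      )
    );
  }
});

async function injectTracker(response) {
  const html = await response.text();
  return new Response(
    html.replace('</body>', '<img src="//attacker.example/1x1.gif"></body>'),
    { headers: { 'Content-Type': 'text/html' } }
  );
}
```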

@jyasskin
Member

It's the persistence that @alecf mentions that convinced me we need to require HTTPS. Today, if a coffeeshop owner injects ads (or bitcoin computation) into your HTTP pages, those ads go away when you leave the coffeeshop. With SWs, those ads could follow you forever (Even shift-refresh won't help if the real site doesn't register a SW of its own).

We could imagine auto-unregistering SWs more aggressively in order to limit the damage a malicious captive portal can do, but that's likely to cause problems in other cases. The simplest fix is to require secure transport for the original registration.

Service Workers can help reduce the cost of SSL too: since SWs can cache resources, they'll reduce the number of requests to the server, and the lower cost should make it a bit easier for site owners to switch to SSL.
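
As a rough sketch of that cost argument, a cache-first worker like the one below only pays for the first (TLS) fetch of each static asset; repeat visits are served locally (the cache name and asset list are made up):

```js
// sw.js - cache-first handling for static assets (illustrative only)
const ASSETS = ['/styles/app.css', '/scripts/app.js', '/images/logo.png'];

self.addEventListener('install', (event) => {
  event.waitUntil(caches.open('static-v1').then((cache) => cache.addAll(ASSETS)));
});

self.addEventListener('fetch', (event) => {
  // Repeat requests never reach the origin, so the per-request cost of TLS
  // is only paid once per asset.
  event.respondWith(
    caches.match(event.request).then((cached) => cached || fetch(event.request))
  );
});
```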

@michael-nordman
Collaborator

For the offline case where persistence is absolutely required, SSL only makes sense. We'll need to make some exceptions to help developers at development time (localhost, maybe same subnet).

But cases where persistence isn't really needed seem possible [this isn't really about offline]. Would it make sense to have a transient ServiceWorker minus the caching (or with equally transient caching), where transient means something like: it goes away upon leaving the site and the SW script gets re-validated on each new navigation? Would that be safe for http?


@jyasskin
Member

@michael-nordman I think we should examine "cases where persistence isn't needed" one-by-one. The cases I know of are things like push messages, geofencing, and notifications, where there's an even stronger argument to make them HTTPS-only: we and the user need to decide whether to allow these capabilities based on the identity of the site, and that identity is only trustworthy over HTTPS.

Do you have an example of an event that you think it's sensible to deliver to a SW loaded via HTTP? Is the value of that event worth complicating the SW spec/implementation?

@annevk
Member

annevk commented Mar 21, 2014

I don't follow the scenario. I visit Starbucks. Someone injects a SW. 24h later my browser asks for an updated SW; the server returns nothing or a SW of its own. As long as we deal with "nothing" appropriately I do not see the problem.

I can totally imagine wanting to browse something like http://krijnhoetmer.nl/irc-logs/ offline and although the trend will be TLS-only, I do not think features should be gated on it.

@jakearchibald
Contributor

For comparison:

With HTTP caching

  • User visits http://example.com in an evil cafe
  • Evil cafe serves evil index page, script & images, reads storage etc etc
  • User later visits http://example.com at home
  • User gets the evil index page, because it and its resources have a large max-age
  • User hits refresh
  • User gets real http://example.com

With AppCache

  • As above, but it survives refreshes if the manifest has a long max-age
  • Can also capture 404s and show an evil page

With ServiceWorker

  • As AppCache, but the takeover only lasts 24 hours
  • Can capture any request on the origin and return something evil

From an onfetch perspective, I don't think we've created a new problem. Maybe other SW features, such as push, should require https?

@jakearchibald
Contributor

Actually, the SW attack won't last only 24hrs, as the url for the evil SW will 404, and currently that means we retain the current SW. Should we change that?

In appcache, 404 is like unregister, should we adopt that?

@piranna

piranna commented Mar 21, 2014

Actually, the SW attack won't last only 24hrs, as the url for the evil SW will 404, and currently that means we retain the current SW. Should we change that?

In appcache, 404 is like unregister, should we adopt that?

I think so. When offline it should remain, but on a 404 the domain exists and the SW is no longer available, so it should unregister until the next time the user goes to that domain.

@alecf
Contributor

alecf commented Mar 21, 2014

Yeah, implementers are going to have to be very careful here about aggressively unregistering SW scripts - imagine that an app you thought was usable offline suddenly uninstalls itself (removing SW scripts, etc.) because you drove by a Starbucks with a captive portal that 404s all URLs until you hit "agree" on their terms of service.

I don't think 404 is a strong enough signal to justify unregistering the script... but again, this issue is solved with https: if the https connection is invalid (rather than a successful http connection with a 404 response), then we know we simply couldn't reach the other end, which is possibly a proxy for whether or not the network connection is "valid".

@devd

devd commented Mar 21, 2014

Minor issue: The current spec says that ServiceWorkers need to be same-origin to the document. SSL-only would mean that we can't have ServiceWorkers for HTTP apps.

I believe you will need to rewrite it to this weird rule: "ServiceWorkers are same-origin, except that the main document's origin, when upgraded to https, also counts as same-origin."

But is even that enough? A site may have HTTP apps on "app.com" but support SSL only over "secure.app.com".

Another example: GitHub hosting (foobar.github.io) does not support https. There doesn't seem to be a way to support same-origin + SSL-only Service Workers there: I can find a place to host my ServiceWorker code over SSL, but it can't be same-origin with the GitHub-hosted content.

I will also point out that using bad networks to achieve persistent XSS is not a new threat; see a blog post, Artur Jancic's work (presentation and video), or an academic paper. Of course, with ServiceWorkers things become a lot worse.

Still, maybe it is better to stick to same-origin restrictions and let sites switch to SSL/HSTS if they are worried?

@alecf
Contributor

alecf commented Mar 21, 2014

Interesting point about persistent XSS, but I think we can do better with new APIs, and this is a perfect example.

The problem isn't that a site opts into using a ServiceWorker over insecure HTTP; this attack lets any man-in-the-middle inject into any web page that the user visits, whether or not that site already uses ServiceWorkers, and lets that injection stay permanent. I think this is somewhat different from previous threats in that most previous persistent threats still relied on weaknesses in a specific site's use of, say, localStorage. That is, gmail could have some injection attack via localStorage, but that only affects gmail if gmail had some weakness.

This affects all sites that the user visits on an insecure network, whether they are already subject to XSS attacks or not, and whether or not they use service workers.

@jyasskin
Member

IIUC, browsers have a lot of discretion about when they accept a Service Worker registration, and when they time that SW out of their cache. So even if the spec doesn't require SWs to be served over TLS, each browser can pick policies about how long to save an insecure SW depending on their security team's preferences, possibly as far as deciding that each page load starts completely fresh.

@devd, I'll ask some Chrome security folks if they're happy with just a requirement that the main SW script itself is secure, rather than the page telling the browser to load it. I believe they want to use attractive new features like SWs as a lever to get more sites onto TLS, but they may be able to point out actual attacks with that weaker requirement too.

@jyasskin
Member

Oh, wait, @devd, if the page requesting the SW isn't secure, and the SW is https but on an attacker-controlled domain, you haven't gained anything at all. Yes, the whole app will need to be https. github.io should upgrade.

@devd

devd commented Mar 21, 2014

@alecf See @jakearchibald's comments above. With caching, an attacker can also achieve a long-lived XSS. I apologize; jake's comment is what I should have cited, as the links I pointed to are not that directly related. How about we extend the SW design to say "hard refresh clears the SW and fetches it again"? This would bring the vulnerability down to what can be done with existing caching.

@jyasskin Well, one design that could work is "same-origin except for the protocol which must be https for the SW"
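
A sketch of that rule as a registration-time check a user agent might run (the function name and the strict port comparison are my own; nothing here is spec text):

```js
// Hypothetical validation for "same-origin except the SW script must be https".
function mayRegister(documentUrl, scriptUrl) {
  const doc = new URL(documentUrl);
  const script = new URL(scriptUrl);
  return (
    script.protocol === 'https:' &&      // the worker itself must be secure
    script.hostname === doc.hostname &&  // same host as the registering document
    script.port === doc.port             // keep the rest of the origin intact
  );
}

// http://app.com/ registering https://app.com/sw.js would pass;
// https://secure.app.com/sw.js for a page on app.com would not.
```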

@adrifelt

It's unfortunate that github.io doesn't offer HTTPS. devd@, have you filed a bug with them and asked why they don't?

[Disregard this portion of the comment, I was confused about the topic under discussion. But leaving for posterity: When the user asks to "keep" a site, the user is making that trust decision based on the identity of the site. HTTPS for both the serving page and service worker makes sure the user gets what she expects from who she expects.]

@devd

devd commented Mar 21, 2014

I am not sure what you mean by "user asks to keep a site." Is there a UI dialog for installing SWs? Did I miss it in the spec?

I agree that "HTTPS ensures that the user gets what she expects". But it seems to me that HTTP is still popular; GitHub is just a currently popular example. See Anne's example, http://krijnhoetmer.nl/irc-logs/ - it is perfectly reasonable to want to browse that offline.

My point was: limiting SW to https means we are limiting the use of SW to HTTPS documents. Are we sure we want that?

@devd

devd commented Mar 22, 2014

btw, I was wrong: GitHub supports HTTPS for *.github.io. I should have tested before commenting. I apologize! But the last question in my comment above still stands, imho.

@annevk
Member

annevk commented Mar 23, 2014

404 seems sufficient to unregister to me. Captive portals could be problematic, but only if they are not detected first by the network stack; then again, having fewer guarantees on HTTP seems okay.

@piranna

piranna commented Mar 23, 2014

404 seems sufficient to unregister to me.

+1

@jakearchibald
Contributor

@alecf @annevk:

if you imagine that an app which you thought was usable offline suddenly uninstalls itself (removing SW scripts, etc) because you drove by a starbucks with a captive portal, which 404's all urls

Don't "friendly" captive portals do a cross-domain redirect instead of 404ing? Cross-domain redirects won't be treated as unregister; they'll be an update failure.

@jyasskin
Member

@jakearchibald and I have put together a document at https://docs.google.com/document/d/1KWa2TIAtwkaAyFkV9tR6A_6VuoOx5MXGr8UDpF2RwBE/edit?usp=sharing that explains the attack and proposes a set of restrictions on HTTP service workers to mitigate it. I'm planning to send this to our security team to see if they can poke holes in it, but I'd like to get you folks' thoughts/fixes first. You should all be able to comment (please sign in or sign your comments though), and I'll grant edit permissions as I find your email addresses.

@yoavweiss

I support the move to TLS-only, given the lack of other mitigation methods for these attacks.

Also - I find the info in @jakearchibald's comment regarding HTTP cache attacks disturbing. It's probably off topic, but I think we should do something to protect users from that form of attack as well. While moving to TLS-only is not an option IMO, cache revalidation after switching networks might be a proper mitigation. Such revalidation won't be able to rely on ETag/Last-Modified headers, since they can easily be faked as well. Maybe this can be combined with the Subresource Integrity spec in some way. I'll take that up with the WebAppSec people.

@ylafon
Member

ylafon commented Mar 25, 2014

@jyasskin it describes only one kind of attack, the "in the clear" captive portal. Some captive portals also redirect SSL traffic (through DNS) and usually present self-signed certificates; this also needs to be addressed. And I am not even talking about injection done on https traffic by entities with access to trusted CAs.
That said, addressing the malicious portal (or traffic hijacking in the clear - no need to be a portal for that) is worthy enough to support the move to TLS-only.

@annevk
Member

annevk commented Mar 25, 2014

Again, the attack would last at most 24 hours and would only work for sites that cannot be trusted anyway. I'm not convinced.

@jakearchibald
Contributor

@annevk we check for updates on navigation and cap max-age at 24 hours. However, if the update check ends with an HTTP 500, a network failure, a cross-origin redirect, or a parse error, we keep the old version. This means an attacker can keep the infected version around longer.

I don't want to go HTTPS-only unless we really have to, see https://docs.google.com/document/d/1KWa2TIAtwkaAyFkV9tR6A_6VuoOx5MXGr8UDpF2RwBE/edit for what we'd need to do to mitigate these attacks.

(note: appcache is currently open to most of these attacks)
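
Roughly, the update-check outcomes being debated look like this (a sketch of user-agent behaviour, not spec text; the registration methods are hypothetical, and whether 404 belongs in the "unregister" bucket is exactly the open question):

```js
// Sketch of the update-check outcomes under discussion (not normative).
function handleUpdateCheck(result, registration) {
  if (result.networkError || result.crossOriginRedirect ||
      result.status >= 500 || result.parseError) {
    // "Couldn't reach a healthy origin": keep the current worker.
    return registration.keepCurrentWorker();
  }
  if (result.status === 404 || result.status === 410) {
    // The contested case: AppCache-style "gone means unregister".
    return registration.unregister();
  }
  if (result.status === 200) {
    return result.bytesChanged
      ? registration.installNewWorker(result.script)
      : registration.keepCurrentWorker();
  }
  // Anything else: be conservative and keep what we have.
  return registration.keepCurrentWorker();
}
```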

@jakearchibald
Contributor

Another idea (also commented in the doc):

For HTTP, could we require the ServiceWorker script (and imports) match what a trusted server sees at that url?

E.g., Chrome fetches the SW script, then asks a Google server for a hash of what it sees at that URL. If the hashes do not match, no install.

This means you couldn't internationalise/personalise SW. If you want to do that, go HTTPS.
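
A rough sketch of how that check could work inside the browser (the verification endpoint, its response format, and the helper names are entirely made up):

```js
// Hypothetical UA-side check: only install an http:// worker if a trusted
// server sees byte-identical script at the same URL.
async function verifyAgainstTrustedServer(scriptUrl, scriptBytes) {
  const digest = await crypto.subtle.digest('SHA-256', scriptBytes);
  const localHash = bufferToHex(digest);

  // Made-up endpoint: what does the trusted crawler see at scriptUrl?
  const resp = await fetch(
    'https://sw-verify.example.com/hash?url=' + encodeURIComponent(scriptUrl)
  );
  const { hash: remoteHash } = await resp.json();

  // A mismatch (e.g. a personalised or localised SW) means: refuse to install.
  return localHash === remoteHash;
}

function bufferToHex(buffer) {
  return Array.from(new Uint8Array(buffer))
    .map((b) => b.toString(16).padStart(2, '0'))
    .join('');
}
```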

@annevk
Member

annevk commented Mar 25, 2014

That sounds like overkill to me. Again, plain HTTP should only be considered for trivial sites. Anyone who builds something complicated that needs to be trusted cannot use plain HTTP anyway.

@jakearchibald
Contributor

I'm worried about sites like the BBC & Guardian where the user trusts the content.

(but yes, these sites are already vulnerable to appcache)

@mathiasbynens
Contributor

I'm worried about sites like the BBC & Guardian where the user trusts the content.

Anything served over HTTP is vulnerable to MitM attacks, and can't be trusted by any user. If the BBC / The Guardian value the security of their users, they should really start to use HTTPS (regardless of whether appcache/SW comes into play).

(but yes, these sites are already vulnerable to appcache)

They’re vulnerable to MitM attacks in general.

I agree with @annevk.

@jyasskin
Member

@mathiasbynens One reason the security folks want to make new platform features HTTPS-only is that the BBC/Guardian demonstrably don't care about the security of their users in this way, but they're likely to care about how fast their pages load. By tying faster page loads to secure connections, even artificially, we can improve users' security.

Trivial sites that don't need Service Workers could stay on HTTP, but non-trivial sites would need to move to HTTPS, as @annevk suggested.

However, I also sympathize with the objections to tying things together artificially like that, which is why we put together the document about how to mitigate the more concrete problems with SWs over HTTP.

@hsivonen
Member

hsivonen commented Apr 1, 2014

Only if there's very good reason. It will hurt adoption.

It could help TLS adoption, though, if new Web features were https-only. I can see the point that it's not cool to hold other features hostage in order to drive https adoption, but when a feature needs special mitigations in order not to be a disaster without https, I think it's fair to make the feature https-only. Also, it seems implausible that sites whose developers would be competent enough to deploy ServiceWorkers wouldn't be competent enough to deploy https.

HTTP cache attacks and AppCache attacks are not a good reason to introduce more attack surface of similar nature.

@piranna

piranna commented Apr 1, 2014

What about localhost? Or the file: scheme? Should these not be allowed to use ServiceWorkers because they don't have TLS? I see that as a step backwards...

@davidben

davidben commented Apr 4, 2014

Depending on how a ServiceWorker's caches work (I wasn't able to find details, if they've even been nailed down yet), I can imagine another potential issue with http and captive portals. Say we already have a SW installed for example.com and open it again while under a perfectly well-behaved captive portal. We load example.com, get the shell of the app from cache, never touch the network, the app works offline, all is well.

Then something needs to hit the network and the SW decides to save it in some cache. What happens when the captive portal hijacks that request? If the captive portal is well-behaved, hopefully it sends sufficient Cache-Control headers to avoid poisoning the cache, but if the SW's cache doesn't use HTTP caching semantics, the hijacked response might stick.
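
One way a worker could defend itself is to be picky about what it persists at runtime; a sketch only (the cache name and the heuristics are illustrative):

```js
// sw.js - only persist runtime responses that look like what we asked for.
self.addEventListener('fetch', (event) => {
  event.respondWith(
    caches.match(event.request).then((cached) => {
      if (cached) return cached;
      return fetch(event.request).then((response) => {
        const looksSane =
          response.ok &&                 // not a portal's redirect/511 page
          response.type === 'basic' &&   // same-origin, not redirected elsewhere
          !htmlWhereAssetExpected(event.request, response);
        if (looksSane) {
          const copy = response.clone();
          caches.open('runtime-v1').then((cache) => cache.put(event.request, copy));
        }
        return response;
      });
    })
  );
});

function htmlWhereAssetExpected(request, response) {
  // Captive portals often answer every URL with an HTML login page.
  const type = response.headers.get('Content-Type') || '';
  return request.destination !== 'document' && type.includes('text/html');
}
```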

@slightlyoff
Contributor Author

hey all,

There's been a ton of good debate in this thread. The face-to-face meeting at Mozilla today included Patrick McManus (who is helping to design HTTP 2.0).

The design points appear to be:

  • HTTP-available-with-remediations (unregister on 404, etc.)
  • HTTPS-only

The arguments for SSL include:

  • It's better for users -- and we can do good by encouraging HTTPS adoption
  • Existing "playground" services (e.g. github.io) now work with HTTPS
  • HTTPS is quickly coming to much more of the web
  • Devtools can loosen the restriction for development (file://, localhost, etc.)

The provisional decision is to make Service Workers HTTPS-only. We'll see how the beta period goes. If we have to walk it back, we'll need to add many mitigations to the HTTP mode. At some level, this is the smallest intervention needed to get good security and developer usability.
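
Under that rule, a page-side registration guard would look something like this (the localhost allowance mirrors the dev-time exception mentioned above):

```js
// Register only where the provisional HTTPS-only policy would allow it.
const isLocal = ['localhost', '127.0.0.1', '[::1]'].includes(location.hostname);

if ('serviceWorker' in navigator &&
    (location.protocol === 'https:' || isLocal)) {
  navigator.serviceWorker.register('/sw.js').catch((err) => {
    console.warn('ServiceWorker registration refused:', err);
  });
}
```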

@creationix

This provisional decision still includes the localhost exception, right? What about custom domains that resolve to localhost? For example, I put mytestdomain.com in my /etc/hosts file to point to 127.0.0.1 to test the virtual hosting part of a local server.

@jakearchibald
Contributor

Yes, DevTools are encouraged to allow disabling of the HTTPS-only restriction in a non-permanent way.

@matthew-andrews
Contributor

Sorry to jump on this thread rather late, and what I have to say probably doesn't affect the outcome but...
@jakearchibald said this:-

In appcache, 404 is like unregister, should we adopt that?

I just checked this - and it doesn't seem to be true - it seems like if you return a 404 for a manifest then AppCache aborts the update process and sticks with the set of files it already has.

(I agree that this should be the behaviour though - especially if https isn't enforced for SW - that it would be sensible for a 404 to trigger the unregistration of both ServiceWorker and AppCache - but other similar status codes, eg. 5xx or other 4xx probably shouldn't cause this to happen)

Also, because there is (as far as I know) no way to unregister an AppCache right now, the upgrade path from AppCache to ServiceWorker is really difficult. Whilst there isn't a huge amount of adoption of AppCache, there is some. Have I missed something, or is it worth considering adding something to the spec to explicitly disable AppCache if a ServiceWorker is installed? (Or, better, just changing AppCache's spec so that it gets automatically uninstalled if its manifest returns a 404 - or is that something we can't do/shouldn't discuss here?)

@jakearchibald
Contributor

I just checked this - and it doesn't seem to be true - it seems like if you return a 404 for a manifest then AppCache aborts the update process and sticks with the set of files it already has.

Really? Does it continue to use the cache on refresh? This isn't what the spec says (http://www.whatwg.org/specs/web-apps/current-work/multipage/offline.html#downloading-or-updating-an-application-cache - see step 5)

@matthew-andrews
Contributor

Oops. You're right. Test case was flawed :)

@slightlyoff
Contributor Author

Clobbering on 404 is just a bad idea. It massively complicates multi-data-center rollouts and/or demands session/DC pinning (which sucks).

The interop scenario is still relatively undecided. I think we should open a new issue on it. My current thinking is that it should be exclusive, with SW "winning".


@yoavweiss

I've outlined a possible mitigation for the "MITMed SW" scenario on WebAppSec.
