CORS and RFC1918 #143

Closed
annevk opened this issue Mar 12, 2019 · 16 comments · Fixed by #480
Labels: position: positive · venue: W3C CG (Specifications in W3C Community Groups, e.g., WICG, Privacy CG) · venue: WHATWG (Specifications in a WHATWG Workstream)

Comments

@annevk (Contributor) commented Mar 12, 2019

Request for Mozilla Position on an Emerging Web Specification

Other information

This might be on the critical path toward enabling high-resolution timers so it'd be good to know how much buy-in this has.

@annevk added the venue: IETF, venue: W3C CG, and venue: WHATWG labels on Mar 12, 2019
@martinthomson (Member) commented
https://bugzilla.mozilla.org/show_bug.cgi?id=1481298 has some background on this. I don't have time to do the digging right now, but there are some concerns there.

If this is part of spectre mitigations, maybe a summary explanation would help. That thread (like the bugs) is long.

@tomrittervg commented Mar 15, 2019

We would like to re-enable High Resolution Timers and Shared Array Buffer. Doing so presents a risk of a Spectre attack - either a Variant 1 attack that we are missing a mitigation for (we have no known holes, but nonetheless) or a different variant or attack for which someone creates a practical application to the web platform.

To mitigate the risk from what information is loaded into a Content Process (and is thus vulnerable to Spectre), we would like to require the web page to opt into a stricter mode in order to re-enable High Res Timers.

It would require websites to signal their intention with an opt-in header. The header would enforce:

  1. All resource loads (JavaScript, CSS, images, etc.) must EITHER disable all third-party credentials (cookie and HTTP auth entries) OR require CORS (we'd prefer the latter)
  2. IF we don't require CORS all the time, probably require all resource loads to be over HTTPS
  3. IF we don't require CORS all the time, we would consider requiring a CORS preflight for any load from a private IP address (either 127.0.0.1, RFC1918, or their IPv6 analogs); see the sketch after this list
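
A minimal sketch of what (3) could mean for a service on a private address, assuming a Node-style HTTP server; the public origin, the RFC1918 address, and the port are placeholders, and the exact preflight header names for private-network requests were still being worked out at this point:

```ts
import { createServer } from "node:http";

const ALLOWED_ORIGIN = "https://app.example.com"; // placeholder public origin

// Hypothetical device server on an RFC1918 address. Under (3), a page
// that has opted into the strict mode could only reach it if the
// server answers a CORS preflight for the load.
createServer((req, res) => {
  if (req.method === "OPTIONS") {
    // Answer the preflight, explicitly naming the allowed public origin.
    res.writeHead(204, {
      "Access-Control-Allow-Origin": ALLOWED_ORIGIN,
      "Access-Control-Allow-Methods": "GET",
    });
    res.end();
    return;
  }
  // The resource itself still needs the CORS response header.
  res.writeHead(200, {
    "Access-Control-Allow-Origin": ALLOWED_ORIGIN,
    "Content-Type": "application/json",
  });
  res.end(JSON.stringify({ status: "ok" }));
}).listen(8080, "192.168.0.10"); // placeholder RFC1918 address
```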

We'd like to require CORS all the time, but that may not match what other browsers want to do.

(1) is intended to keep web resources that are meant to be private, private. However, there is also the category of resources that are private but rely solely on IP-based authentication. (2) is targeted at those: there seems to be a general opinion that most such resources don't use HTTPS. However, that isn't the direction we'd like to take the web: we want to encourage HTTPS adoption, not make it so that enabling HTTPS makes you less secure.

Hence (3) - requiring the stronger CORS preflight for any private resource. This would not protect resources behind IP-based authentication that live on a public IP address.


The concern in 1481298 is that we may be legitimizing bad practice. It's bad practice to rely on IP-based authentication for anything, even local resources and even local network resources. While I tend to agree with this, I'm not certain we can meaningfully turn this ship around. Especially since, even if a local intranet site does require authentication properly, its unauthenticated login page or resources can be an information disclosure for the user, identifying their place of employment, locally running software, etc.

The problem in 354493 is, as far as I can tell, that it blocked all loads from private addresses by default. Requiring CORS would initially cause the same sort of breakage; but at least for SAB re-enablement, we only intend to require a CORS preflight when the site opts in.


So I'm supportive of this draft, but we'll have to figure out the breakage aspect of it before deployment. And I do think it's a good idea for SAB re-enablement.

@martinthomson (Member) commented

That analysis seems reasonable to me. If the goal of Spectre mitigations is to create processes that have absolutely zero secrets in them, even accidental ones, then we would need this, but I would say that option 1 is necessary. Networks that use IP authentication do use public address space. I worked for many years at such an institution.

On that basis, we should preflight absolutely everything when entering a mode that enables SharedArrayBuffer or removes fuzzing on timers. That's 1, not 3.

2 isn't hugely useful in this context, except as you say to cut out the most insecure stuff. I support it anyway.

We also need to ensure that the timing signal from other processes is sufficiently obscured. We can't have hostile processes equipped with high resolution timers attacking other processes using legitimate channels. That might mean obscuring timing of events from those processes by adding delays. That means that the site makes a trade-off when it enables this option: preferring its own performance and responsiveness at the cost of external communications latency.
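
As a purely illustrative sketch of "obscuring timing of events from those processes" (the resolution value is made up, not anything specified):

```ts
// Clamp a cross-process event timestamp to a coarse resolution and add
// bounded random jitter, so a context that opted into high-resolution
// timers cannot use event delivery times as a precise external clock.
// The 0.1 ms resolution is an arbitrary example value.
function coarsenTimestamp(tMs: number, resolutionMs = 0.1): number {
  const clamped = Math.floor(tMs / resolutionMs) * resolutionMs;
  const jitter = Math.random() * resolutionMs; // bounded noise
  return clamped + jitter; // stays within one resolution step of tMs
}

// e.g. coarsenTimestamp(1234.56789) yields some value in [1234.5, 1234.6)
```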

(We can continue this discussion separately, I just wanted to point out that this is only a tiny part of the overall strategy.)

@tomrittervg commented
We can continue this discussion separately

Where is best to do so?

@annevk (Contributor, Author) commented Apr 9, 2019

Requiring CORS is indeed safer for the case you mention. (Note that this doesn't mean preflights; preflights only happen in certain cases. In "CORS and RFC1918", preflights exist to protect against XSRF, not to protect the response.) However, it doesn't seem like CORS as the only solution has enough cross-browser buy-in, which is why we're considering this alternative (perhaps simplified to blocking requests to private and local IP addresses outright, rather than having a mechanism to make them work). We might also want to put out an advisory of some kind that relying on IP-based authentication is not good.
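
To illustrate the distinction (this is just standard CORS behavior; the URLs and custom header are placeholders):

```ts
async function demo(): Promise<void> {
  // A "simple" CORS request (GET with only safelisted headers): the
  // browser sends it directly, with no preflight. The response is
  // exposed to the page only if the server opts in via
  // Access-Control-Allow-Origin.
  await fetch("https://intranet.example/status", { mode: "cors" });

  // A non-simple request (here, a custom header): the browser first
  // sends an OPTIONS preflight. The preflight guards against
  // XSRF-style side effects; it does not protect the response.
  await fetch("https://intranet.example/update", {
    method: "POST",
    mode: "cors",
    headers: { "X-Custom": "1" }, // this header triggers the preflight
  });
}
```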

@JLLeitschuh commented Apr 29, 2019

The PortSwigger blog published some interesting research that is relevant to RFC1918. Hopefully, any implementation of this RFC would help prevent this sort of attack.

Exposing Intranets with reliable Browser-based Port scanning

Proof of concept link

@JLLeitschuh commented
Here's another example:

Remote Code Execution on most Dell computers

Dell was running a localhost server on port 8884.

@annevk (Contributor, Author) commented Jun 25, 2019

Update: this is no longer relevant as a path toward high-resolution timers. It still seems like a good idea to have something like this as defense-in-depth, e.g., against the attacks in the links mentioned above. However, deploying this without opt-in might not be web-compatible. And given that Chrome has not moved forward with this recently, maybe they realized as much as well?

@JLLeitschuh commented Aug 13, 2019

For documentation purposes:

Here's another example of this same abuse of a localhost server that could have been prevented by this RFC:

Zoom Zero Day: 4+ Million Webcams & maybe an RCE? Just get them to visit your website!

There actually was an RCE in that webserver:
https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-13567

@adamroach removed the venue: IETF label on Nov 16, 2019
@JLLeitschuh commented Mar 3, 2020

Here's another case, from 2016, that would have been really bad if exploited: a vulnerability in all JetBrains IDEs that enabled remote code execution.

https://blog.saynotolinux.com/blog/2016/08/15/jetbrains-ide-remote-code-execution-and-local-file-disclosure-vulnerability-analysis/

Edit: wrong link

@dbaron (Contributor) commented Jun 1, 2020

However, deploying this without opt-in might not be web-compatible.

Who would do the opting in?

And what sorts of compatibility problems would you expect? Mainly ones involving localhost?

This does seem like a behavior that would be desirable if it were compatible.

@annevk (Contributor, Author) commented Jun 2, 2020

Who would do the opting in?

An idea we had for COEP was that, instead of requiring a Cross-Origin-Resource-Policy header on responses, we would not send credentials with any request and would require CORS and RFC1918 to be used as well for affected URLs.
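
For context, a sketch of the COEP/CORP interaction referred to above (the header values shown are the real ones; the credential-less alternative described here was only an idea at this point):

```ts
import { createServer } from "node:http";

// A document opts into cross-origin isolation by sending:
//   Cross-Origin-Embedder-Policy: require-corp
// Every subresource it embeds must then either be loaded via CORS or
// answer with an explicit Cross-Origin-Resource-Policy header, e.g.:
createServer((req, res) => {
  res.writeHead(200, {
    "Cross-Origin-Resource-Policy": "same-site", // or "cross-origin"
    "Content-Type": "image/png",
  });
  res.end(); // placeholder: image bytes would go here
}).listen(3000);
```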

And what sorts of compatibility problems would you expect?

Yeah, localhost will definitely hit issues. I suspect non-localhost will too, given IoT, but I'm less sure.

I think it's desirable as well, especially for localhost, though preventing access to the others might also be good for defense-in-depth and fingerprinting reduction.

@JLLeitschuh commented Jun 2, 2020

There's been a recent discussion about how sites like eBay are doing local port scans on users' computers as a security measure. They aren't asking users for permission to do this.

@annevk (Contributor, Author) commented Jan 27, 2021

I think this is something the web should have and I welcome Chrome's experiment here. I suggest we mark this as worth prototyping for these reasons:

  1. It makes it harder to attack naïve local networks, which seems like a big plus given that the security story in the Internet of Things leaves much to be desired, as well as the localhost issues highlighted above.
  2. It makes it harder to pull off Spectre attacks against resources in local networks. https://github.com/annevk/orb will help with this as well, but this will help with resources that the filter has to let through because they might be CSS (I wish I was making this up), such as those without a Content-Type header.

This could go even further and better protect local-network-to-local-network requests, but I have some hope we'll be able to get there eventually.

@letitz commented Jan 27, 2021

Thanks Anne for the update. If this is something worth prototyping, would Mozilla also be open to giving the green light to web-platform-tests/rfcs#72, which would allow testing the spec correctly?

@annevk (Contributor, Author) commented Jan 27, 2021

@letitz to be clear, I'd like to await feedback from peers before formalizing Mozilla's position via a pull request. @ddragana mentioned she would try to get back to you on the bug you filed regarding that, hopefully this week.
