
CfC to publish as an FPWD. #342

Closed
mikewest opened this issue Jun 2, 2021 · 28 comments


@mikewest
Member

mikewest commented Jun 2, 2021

Given the state of the spec, the test suite, and the shipping implementation in Chromium-based browsers, it seems reasonable to publish the document as an FPWD.

This issue will serve as a record of the group's decision, one way or another. I've pre-populated this issue with both a 👍 and a 👎 to make it trivial to collect a signal from folks out there in the world. If you register discontent with the publication, please add a comment as well so we know what we can address to remove the concern (and if you really disapprove, Formal Objections are accepted through the usual venues).

Thanks!

@shhnjk
Member

shhnjk commented Jun 2, 2021

What is FPWD? 😋

@marcoscaceres
Member

First public working draft.

@annevk
Member

annevk commented Jun 16, 2021

Mozilla is opposed to publishing this. We'd rather not give the impression this is a Working Group deliverable without multiple implementers being interested in pursuing it. The Working Group is also not chartered to work on this.

As we stated before, we are not pursuing it due to its inherent complexity making it difficult to adopt by the long tail of websites.

@mikewest
Member Author

Hey Anne, thanks for your feedback! Should we consider this a formal objection from Mozilla, or simply a negative response? In either event, I have two thoughts:

  1. The group already adopted Trusted Types in 2019 after a brief thread on public-webappsec@ with no objections, and a discussion at TPAC where folks seemed to agree that the problem this mechanism was trying to address was a worthwhile one for us to tackle. Given that history, the transition to FPWD is not an expansion in the group's scope, or the set of deliverables the group has been considering. You correctly note that the group's charter doesn't currently list it as a deliverable (though the proposed update does, which I assume y'all have been reviewing?), but it seems quite clear to me that it falls in the group's "attack surface reduction" scope, and explicit mention has not been a requirement for publication in the past (see https://www.w3.org/TR/post-spectre-webdev/ most recently).

  2. I appreciate your concerns about complexity. I don't agree that this mechanism is overly complicated, but I understand that opinions can differ here. For this discussion, let's assume that you're right, and that TT simply cannot be deployed by the long tail, despite our best wishes. Even in that case, the fat head is responsible for protecting a substantial amount of user data, and tools which those entities can use will have a direct effect on an outsized portion of a user's travels through the web. Is complexity an absolute argument against such mechanisms? It seems to me that it's only one of several considerations we should weigh. It also seems to me that it would be better to work on reducing the mechanism's complexity together as a group, if that is indeed a blocking concern.

@annevk
Member

annevk commented Jun 17, 2021

This came up during our internal review of the charter. We don't want Working Groups to publish things until they have the support of multiple implementers as that dilutes the value of standards. Please consider it a Formal Objection.

I do agree it's unfortunate we didn't make note of this in 2019 and going forward we'll do a better job of reviewing these CfCs.

@shhnjk
Member

shhnjk commented Jun 18, 2021

@annevk, where is the best place to understand and discuss the details of Mozilla's objection? The Standards Position thread is closed, and your last comment was somewhere between "worth prototyping" and "non-harmful".

@cwilso
Collaborator

cwilso commented Jun 21, 2021

Indeed, multiple implementations have never been a bar for FPWD or for beginning a standards effort in a WG - in fact, I would point out that having multiple independent implementations is not defined as a hard requirement in the W3C Process document, either. But you should clearly write up this concern and objection (and what it would take to resolve it), as this should be escalated.

@marcoscaceres
Member

My unsolicited .000000000002BTC (will be worth a lot one day!):

The really great thing about the WebAppSec WG is that it does have active multi-implementer engagement - and that lends this WG continued credibility and respect. As such, I implore us to strive for multi-implementer support before an FPWD, with no objections.

@cwilso is right that traditionally we've not set such a high bar for publishing an FPWD - but I think it's worth striving for. Hopefully others agree.

@mikewest
Member Author

mikewest commented Jul 7, 2021

I'd invite folks to skim through https://docs.google.com/document/d/1m91JZWKAGOR3jQoicMVE9Ydcq79gM2BetcRIBemrex8/edit?usp=sharing, which @koto put together as a summary of Trusted Type usage in the wild, including deployment details at Google, and pointers to concrete steps folks outside of Google are taking to protect their users as well.

My (biased) take on that report is that there's pretty clear agreement on the underlying problem, and more than sufficient interest in the mechanism described in this document to bring it into a working group for iteration and polish.

I'd also appreciate engagement on the more concrete questions above regarding the complexity claims. If that's the core reason that Mozilla isn't willing to express "interest" in the mechanism, perhaps we could resolve it and avoid the deeper question of implementer interest vs developer interest.

@samuelweiler
Member

@annevk, thank you for raising this concern. @koto, thank you for the lovely summary of TT experience, both at Google and elsewhere. (I also appreciate the lengthy discussion over at mozilla/standards-positions#20 )

While I'm sympathetic to the complexity argument - and I want tools that benefit the long tail - I'm not hearing arguments that Trusted Types is broken or dangerous. If those exist, it would be helpful to hear them.

Over in mozilla/standards-positions#20 (comment) @martinthomson writes:

I too am concerned at the complexity of this. Part of that is the perl -T thing that haunts me. A bigger part of that derives from the desire to have custom sanitization routines. Maybe that is unavoidable, but there are probably ways in which this could be simplified. How far do we get with an in-browser sanitizer that essentially only prevents script execution? If we were able to neuter the current entry points and provide unsafe variants with sanitizer hooks, would that be a better model?

What progress has been made on simpler tools for solving DOM XSS?

Also, @mikewest and @koto, any chance of bringing people from the other orgs who have been using TT into the WG to directly discuss their experience?

@shhnjk
Member

shhnjk commented Jul 13, 2021

Also, @mikewest and @koto, any chance of bringing people from the other orgs who have been using TT into the WG to directly discuss their experience?

I'm happy to chat about my experience of using Trusted Types in Microsoft Edge 😊 Many things are already described in this blog post though.

@mikewest
Member Author

Also, @mikewest and @koto, any chance of bringing people from the other orgs who have been using TT into the WG to directly discuss their experience?

  1. @shhnjk is right there, and would likely be happy to chat about Microsoft's experience. :)
  2. @dveditz and I will set next week's agenda tonight; I think it's reasonable to include this topic, and invite folks who are using it in the wild to participate.

Also, I ran across https://auth0.com/blog/securing-spa-with-trusted-types/ today as another example of developers poking at this in a way that seems pretty successful. It's a good read.

@OR13

OR13 commented Oct 4, 2021

Note that Trusted Types does a bit more than stopping the malicious payload from executing. It refuses to assign any text-based data to a dangerous sink, even when that data is harmless. In this case, the use of Trusted Types breaks legitimate application functionality.

Sounds like a security improvement to me.... Browser APIs that take strings are to XSS as vulnerable C APIs that take pointers are to Buffer overflows... A gateway to arbitrary code execution.
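To make the analogy concrete, here's a minimal sketch of the string-sink hazard (all names and the payload are illustrative, not from any library): a raw string carries no signal about whether it is safe to treat as HTML, so an injected payload flows into markup unchanged unless someone remembers to escape it.

```javascript
// Untrusted input containing a classic XSS payload.
const userInput = '<img src=x onerror=alert(1)>';

// Unsafe: string concatenation passes the payload through verbatim.
// Any API that accepts this string as HTML will run the handler.
const unsafe = `<div class="comment">${userInput}</div>`;

// Safe: escaping first turns the markup into inert text.
const escapeHTML = (s) =>
  s.replace(/&/g, '&amp;').replace(/</g, '&lt;').replace(/>/g, '&gt;');
const safe = `<div class="comment">${escapeHTML(userInput)}</div>`;

console.log(unsafe.includes('<img')); // true: executable if assigned to a sink
console.log(safe.includes('<img'));   // false: payload neutralized
```

Trusted Types moves that "remember to escape" burden out of every call site and into the type system, which is exactly the C-pointers-vs-Rust comparison above.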

Surprised to see Mozilla object to this given how Rust has helped mitigate these same problems in systems programming.

What is the alternative, leave injection attacks at 3rd in the OWASP Top 10?

https://owasp.org/www-project-top-ten/

@mozfreddyb
Collaborator

Hi,

This was discussed at great length in our August working group meeting. I'll paraphrase the Mozilla position:
So far, only Google Chrome intends to implement this. We at Mozilla are generally against the proliferation of technical specs that do not have wider implementation interest. From what I heard at that meeting, this might not be sufficient to prevent this specification from entering the WG charter (I think we're still waiting for an official response here).

In terms of XSS prevention, we at Mozilla currently focus on the Sanitizer API, which will offer a string-based API that is intended to be safe by default while still catering to developers' needs. It's already available for testing in Firefox and Chrome. @mikewest has made a nice demo page at https://sanitizer-api.dev/.

It also looks like you have not been involved in this working group much, so I assume you might not have seen all the conversations about this topic. I strongly suggest you join the group's communication channels to ensure you're in the loop :)

@OR13

OR13 commented Oct 5, 2021

I just joined; it looks like the first calls aren't for a couple of weeks, and obviously the charter is still being considered...

Thanks for the link to an alternative.

Seems like the Sanitizer API focuses on continuing to allow strings by trying to clean them up first.

Trusted Types breaks existing code by disallowing strings... From a security perspective, sanitizing is less appealing to me than type checking / breaking unsafe APIs, but both have their uses.

https://cheatsheetseries.owasp.org/cheatsheets/Cross_Site_Scripting_Prevention_Cheat_Sheet.html

If the objection is really about advocating for one CG draft over another, it would be awesome to have a side-by-side comparison... the meeting notes are a bit hard to distill into a list of pros and cons... seems like the main difference would be the breaking changes associated with TT?

@dveditz
Member

dveditz commented Oct 7, 2021

What is the alternative, leave injection attacks at 3rd in OWASP 10 ?

Content-Security-Policy already does a fantastic job of preventing injection XSS, but only if you use it in its strict form, which breaks existing content. Only a relatively small fraction of existing sites use CSP at all, and of those only a small fraction use it in strict form. On the other hand, it's been adopted by some very large popular sites, so when you look at page views the numbers look better (but strict forms are still a minority):
45% of page views use CSP
24% of page views have "reasonable restrictions" (not sure what "reasonable" means)
6.5% have "better than reasonable" CSP

Some folks look at those numbers (especially the site numbers, not given here) and consider CSP overly complex and hard to use, and some look at the billions of protected page views represented by 24% and consider it a success.

CSP does less well at protecting against DOM-based XSS, which is where Trusted Types comes in. Folks tend to see TT either as worth doing or as overly complex and not worth it, depending on how they view CSP.

If the objection is really about advocating for one CG draft over another,

It's not. The Sanitizer API is complementary to Trusted Types. TT requires the page authors to create (or import) their own sanitizer. If they don't need to worry about special framework footguns, the built-in Sanitizer API could fill that function.

@OR13

OR13 commented Oct 7, 2021

If they don't need to worry about special framework footguns, the built-in Sanitizer API could fill that function.

so it's a default sanitization strategy that can be turned off, or used with TT if you are super paranoid / use modern web frameworks (Angular, React, Vue)?

So the objection to TT is maybe:

"mozilla does not think it's worth securing web frameworks, only vanilla JS APIs"?

Doesn't that boil down to "how much of dom based XSS can we mitigate with Sanitizer API by itself vs Sanitizer + TT" ?

Or "Sanitizer API by itself is enough / offers enough protection" ?

@mikewest
Member Author

mikewest commented Oct 8, 2021

(I think we're still waiting for an official response here)

@wseltzer, @samuelweiler, and/or @sideshowbarker might be able to provide insight into the current state of resolving the charter objections.

(not sure what "reasonable" means)

These somewhat opaque metrics are spelled out in https://chromium.googlesource.com/chromium/src/+/refs/heads/main/docs/security/web-mitigation-metrics.md#content-security-policy. TL;DR: "Reasonable" => Strict CSP. "Better than Reasonable" => Strict CSP without 'strict-dynamic'.
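For concreteness, here is a hedged sketch (annotated, not literal header syntax) of what those two buckets could look like as response headers, following the commonly published Strict CSP pattern; `{RANDOM}` is a placeholder for a fresh per-response nonce:

```
"Reasonable" (Strict CSP, with 'strict-dynamic'):
Content-Security-Policy: script-src 'nonce-{RANDOM}' 'strict-dynamic'; object-src 'none'; base-uri 'none'

"Better than reasonable" (Strict CSP without 'strict-dynamic'):
Content-Security-Policy: script-src 'nonce-{RANDOM}'; object-src 'none'; base-uri 'none'
```

The second form is stricter because 'strict-dynamic' lets nonced scripts transitively load further scripts; dropping it means every script must carry the nonce itself.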

@OR13

OR13 commented Oct 11, 2021

AFAIK, the charter objections are now handled by the council, which is AB + TAG... and they are responsible for handling disputes where the director would have weighed in previously.

Of particular interest for this charter is:

Should mozilla or google AB / TAG members recuse themselves?

@dveditz
Member

dveditz commented Oct 12, 2021

So the objection to TT is maybe:
"mozilla does not think it's worth securing web frameworks, only vanilla JS APIs"?

That's not even remotely close to what I said.

@OR13

OR13 commented Oct 14, 2021

@dveditz you said:

The Sanitizer API is complementary to Trusted Types. TT requires the page authors to create (or import) their own sanitizer. If they don't need to worry about special framework footguns, the built-in Sanitizer API could fill that function.

I probably misinterpreted special framework footguns, can you elaborate?

@mozfreddyb
Collaborator

I can elaborate: the Sanitizer is only scrubbing markup to ensure that the HTML output is not XSSy in terms of what the HTML spec knows will execute scripts (event handlers, script elements, etc.). If the site-specific framework allows other markup to cause scripting (so-called Script Gadgets), then the built-in Sanitizer will not be able to fill that function by default. However, a specific Sanitizer config will be able to do so. (See also the section on Script Gadgets in the spec.)

@OR13

OR13 commented Oct 18, 2021

So basically, at some point the Sanitizer API sorta becomes like Trusted Types in that it explodes when handling unsafe input wrt HTML / XSS... but it can be extended to other languages, whereas Trusted Types is more bound to HTML and DOM APIs...

Both can be used to force developers to upgrade dangerous code paths or the app won't work any more... right?

@koto
Member

koto commented Oct 18, 2021

Sanitizer API [...] can be extended to other languages

Not sure what you mean. The Sanitizer API is tightly coupled with the DOM (for example, it adds a function to the DOM's Element); I don't think it can function outside of browsers and JS. Its behavior can be emulated in any environment, just like Trusted Types or the DOM (in fact, TT already has an emulation working on Node.js).

Both can be used to force developers to upgrade dangerous code paths or the app won't work any more... right?

I actually think this is the crucial distinction between the two APIs. Trusted Types can impose certain restrictions (like, say, CSP) and prevent XSS in that way - by forcing you to rewrite all of your code in a safe way. While they let you shoot yourself in the foot (e.g. by not sanitizing the values when producing the types), you know exactly where the bullet is fired from.

The Sanitizer API, on the other hand, is a tool that you can use to remove JS payloads from HTML, but developers are left with the task of identifying the places that need HTML sanitization on their own. The API doesn't force your application to use it everywhere it's needed - you can still have data flows in your codebase that, say, write user-controlled input to innerHTML. Only TT can prevent that.

@dveditz
Member

dveditz commented Oct 19, 2021

TT can require you to sanitize inputs to sinks, but you have to provide your own sanitizing routines.
The Sanitizer API gives you a sanitizer, but can't make you use it.

"Two great tastes that taste great together."

@shhnjk
Member

shhnjk commented Oct 19, 2021

However, a specific Sanitizer config will be able to do so. (See also the section on Script gadgets in the spec).

While I love the Sanitizer API, I just want to be clear that the Sanitizer API won't be able to solve DOM-based XSS on its own. There have been HTML sanitizer libraries (e.g. DOMPurify) available for more than 5 years which support sanitizer customization.

So, is DOM-based XSS solved with HTML sanitizer libraries? Nope.

We can't depend on frameworks either. For example, React is a relatively secure framework, but devs should be careful when assigning user input to href, srcdoc, etc. There are other frameworks which have more XSS sinks.

In a world where every site is built on top of various frameworks and libraries, it becomes almost impossible for developers to understand which input would reach XSS sinks (e.g. Script Gadgets), and who is responsible for sanitization/validation.

Trusted Types just makes this task easy:

  1. If a string doesn't need to be treated as HTML, change the code to use safe APIs (e.g. textContent).
  2. If a string needs to be treated as HTML, whoever is responsible for the safeness of the string must convert it to TrustedHTML.
  3. Whoever is not responsible for sanitization just assigns the passed value to XSS sinks as-is.

By following these guidelines, developers would know when they are responsible for sanitization (via Trusted Types violations), and security reviewers can validate safeness by just reviewing Trusted Types policies.
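The three guidelines above can be sketched with a small Node-runnable emulation. To be clear, this is not the real browser API (which lives on `window.trustedTypes` and is enforced by the `require-trusted-types-for 'script'` CSP directive); the `TrustedHTML` class, `createPolicy` helper, and `setInnerHTML` sink here are simplified stand-ins:

```javascript
// Stand-in for the browser's branded TrustedHTML type.
class TrustedHTML {
  constructor(value) { this.value = value; }
  toString() { return this.value; }
}

// A policy wraps a sanitization rule and brands its output as TrustedHTML.
function createPolicy(name, rules) {
  return { name, createHTML: (input) => new TrustedHTML(rules.createHTML(input)) };
}

// An enforced sink: raw strings are rejected, mirroring what the browser
// does under `require-trusted-types-for 'script'`.
function setInnerHTML(element, value) {
  if (!(value instanceof TrustedHTML)) {
    throw new TypeError('innerHTML requires a TrustedHTML value');
  }
  element.innerHTML = value.toString();
}

const el = { innerHTML: '', textContent: '' };

// Guideline 1: strings that never needed HTML semantics use a safe sink.
el.textContent = '<b>just text</b>'; // stored as inert text, never parsed

// Guideline 2: the code responsible for safety converts the string once.
const escaper = createPolicy('app-html', {
  createHTML: (s) => s.replace(/</g, '&lt;').replace(/>/g, '&gt;'),
});
const safe = escaper.createHTML('<img src=x onerror=alert(1)>');

// A raw string hitting the sink is a Trusted Types violation...
let blocked = false;
try { setInnerHTML(el, '<img src=x onerror=alert(1)>'); }
catch (e) { blocked = true; }

// Guideline 3: ...while the branded value passes through as-is.
setInnerHTML(el, safe);
console.log(blocked, el.innerHTML);
```

The violation in the middle is exactly the signal described above: the developer who hits it knows they either need a safe API (guideline 1) or a policy conversion (guideline 2), and a reviewer only has to audit the policies.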

I'm really happy to see the Sanitizer API complement Trusted Types for actual HTML sanitization tasks (and hopefully solve mXSS!), which should eliminate some of the complexity around enforcing Trusted Types.

But DOM-based XSS is a complex problem (e.g. things covered by TrustedScript/TrustedScriptURL), and therefore deployment of Trusted Types will require some work to solve the complex problem. However, the work is minimal for the problem, way smaller than rewriting a C++ application in Rust 😉

@mikewest
Member Author

Hey folks! I appreciate the conversation here!

That said, I'd like to keep this issue focused on the question of whether or not we should publish Trusted Types as a product of this working group. There's been a hearty exchange of ideas on that specific topic, and it's currently running through the W3C's process of evaluating formal objections to the group's charter in light of that exchange. Rather than retreading that ground here, I'd suggest that we wait for that decision, and make a decision here accordingly.

@lukewarlow
Member

This was done a long time ago, so closing out this issue.
