Add Trusted Types appendix. #55
```js
const my_config = { ... };
const sanitizing_policy = trustedTypes.createPolicy("apolicyname", {
  createHTML: new Sanitizer(my_config).bound() });
```
Do you want to refer to the arguments explicitly?
E.g. createHTML(input) { return new Sanitizer(my_config).bound(input) }. Policies' createXYZ functions have variadic arguments after the first one, and by mentioning the argument explicitly we don't have to tie the TT and Sanitizer bound() signatures together (and are free to, for example, add arguments to bound() later).
That was on purpose. Not sure if I'm being too clever here. :)
Minor misunderstanding: bound doesn't take parameters, but the function it returns does. We do have the exact same problem there, though. E.g., if we were to redefine sanitizeToString to take two parameters that get concatenated, then my current definition would do weird things.
We could define bound() like so: return x => this.sanitizeToString(x);
IMHO, main 'contra' would be 1, that sanitizeToString doesn't really lend itself to extensions, so planning for such a case might not make that much sense, and 2, if it does change, we could change bound() to match. Main 'pro' is IMHO that... it's super easy to do, and additional clarity doesn't hurt.
I think I'll do an update for this case, by being more explicit about what bound() does.
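To make the explicit-argument variant concrete, here is a runnable sketch. The `SanitizerStub` class and the `trustedTypesStub.createPolicy` stand-in are assumptions so the snippet works outside a browser; on a real page the platform's `Sanitizer` and `trustedTypes` would take their place.

```js
// Hypothetical stand-ins so this sketch runs outside a browser; in a real
// page, the platform provides `Sanitizer` and `trustedTypes`.
class SanitizerStub {
  constructor(config) { this.config = config; }
  // Toy sanitizer: strips <script> blocks only (the real API does far more).
  sanitizeToString(input) {
    return input.replace(/<script[\s\S]*?<\/script>/gi, "");
  }
}
const trustedTypesStub = {
  // Toy stub: the "policy" is just the rules object itself.
  createPolicy: (name, rules) => rules,
};

const myConfig = { /* sanitizer options would go here */ };
const sanitizer = new SanitizerStub(myConfig);

// Naming the argument explicitly, instead of passing bound() directly,
// keeps the policy independent of bound()'s future signature: any extra
// variadic arguments the createXYZ hook receives are deliberately dropped.
const policy = trustedTypesStub.createPolicy("apolicyname", {
  createHTML(input) {
    return sanitizer.sanitizeToString(input);
  },
});

console.log(policy.createHTML('<b>hi</b><script>alert(1)</script>'));
// → "<b>hi</b>"
```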
auto-sanitize any HTML assignments.
```js
trustedTypes.createPolicy("default", { createHTML: new Sanitizer().bound() });
```
Same issue, but the default policy is actually already called with three arguments, so the potential extension of bound()
would be complex if there's existing code like that.
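For illustration, here is a sketch of the call shape being referred to. Per the Trusted Types spec, the default policy's create* callbacks receive the sink value plus the trusted type name and the sink name; the `enforce` function below is a hypothetical stand-in for the browser's enforcement step.

```js
// Hypothetical sketch of how the *default* policy's hook is invoked.
// The three-argument shape (value, type, sink) follows the Trusted Types
// spec; `enforce` stands in for the browser's enforcement machinery.
const defaultPolicyRules = {
  createHTML(value, type, sink) {
    // A real hook might log the sink and sanitize the value.
    return `[${type} via ${sink}] ${value}`;
  },
};

function enforce(value) {
  // The browser calls the default policy with the offending value,
  // the required type name, and the sink that was assigned to.
  return defaultPolicyRules.createHTML(value, "TrustedHTML", "Element innerHTML");
}

console.log(enforce("<p>hi</p>"));
// → "[TrustedHTML via Element innerHTML] <p>hi</p>"
```

This is why a `bound()` that later grows its own trailing parameters could collide with the extra arguments the default policy already receives.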
Again, this doesn't make sense to me. Why can't we let Sanitizer return TrustedHTML? I don't see a reason why developers have to create a policy when Sanitizer is guaranteed to not have dangerous output?
I'm indifferent here. Since the sanitizer can directly produce nodes that don't XSS, many applications won't need a TrustedHTML type (nodes also let you avoid the parser round-trip). For those that do need strings, creating a policy is relatively simple (and we can even improve that, e.g. trustedTypes.createPolicy('name', sanitizerInstance) or sanitizerInstance.createPolicy('name')).
OTOH, we do have trustedTypes.emptyHTML because it's useful. And it's not terrible if the Sanitizer produced TrustedHTML. Having trusted-types 'none'; require-trusted-types-for 'script' guarantee no DOM XSS is very compelling.
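For concreteness, the CSP combination mentioned here would be delivered as a response header along these lines (a sketch; both directives are real CSP, the pairing is the one under discussion):

```
Content-Security-Policy: require-trusted-types-for 'script'; trusted-types 'none'
```

With trusted-types 'none', no policies can be created at all, so the guarded sinks only accept values minted by the platform itself; that is why a Sanitizer that could mint TrustedHTML would make this combination a no-DOM-XSS guarantee.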
If we require policy creation for the Sanitizer API, then it becomes really hard for people to roll out the Perfect Types, because any dependency could be using innerHTML, and that'll break Perfect Types (because policy creation is required). On the other hand, if we allow the Sanitizer to produce TrustedHTML without policy creation (I'm okay with a 'sanitizer' keyword), that will help reduce the breakage.
index.bs
However, it may not always be in the developer's interest to allow direct creation of HTML from any Sanitizer instance, especially if the Trusted Type policies intended for use with the document are meant to block content beyond direct script applications. To accommodate this case, Trusted Types' CSP
I'm not sure this would have the effect we're seeking. Applications that want to put stronger-than-XSS restrictions on their DOM already have an issue with regular DOM APIs. Guarding the sanitizer input with TT doesn't buy them much, as the problem they face is different: how to make sure that all access to the DOM is limited. This pattern would suggest they go through TT(custompolicy)->Sanitizer(customconfig)->DOM, but the easier access pattern, and probably a source of vulns, is to do strings->DOM.
The challenge in enforcing such restrictions is to make sure you never skip calling a sanitizer, and the input type parameter doesn't give you that.
There's also a secondary concern that TrustedHTML already communicates that your data is ready for the DOM, even more than strings do, and that might suggest not to sanitize. I think the sanitizer should always accept a string on input; the data is always suspicious, and that's what the sanitizer is for.
Okay, so no consensus here just yet. I suspect we're discussing different use cases in disguise. We should support all of them.

I think the first is XSS-safe applications with minimal ceremony. I think both Koto and Jun are arguing for this. In this view, a Sanitizer that is default-safe for XSS should simply be available everywhere, and should be able to produce TrustedHTML. Then the app (& dependencies) could just use them, without much planning and infrastructure, and one could enable Trusted Types to complete enforcement.

I think the second is to use Trusted Types for app-specific restrictions. Say, you have custom elements that have "interesting" behaviours attached to them. Or you want to ensure that only specific styling is used (e.g.

I'm assuming both use cases are legitimate... not sure if everyone agrees. I'll add sanitizeToTrustedHTML back, because I think that supports the first use case better, and it doesn't hinder anything in the second.
Careful coding already allows you to both protect from XSS and app-restrictions. TT try to make it robust and mistake-resistant for XSS, but would fail for app-restrictions, I think. Sanitizer solves a different issue: how to safely process user input (for XSS, but also for app-specific restrictions). You still have to remember to call it, and with the right config, so it doesn't guarantee that all user input will be sanitized. Requiring TrustedHTML on sanitizer input will not solve the app-specific problem either, as there's nothing to guarantee that the TT policy rules would not actually get violated later by the sanitizer config. For example, if your policy for some reason really wants uppercase attribute names, and the sanitizer would normalize the names, technically this is a violation (the output of the sanitizer would not pass the policy). I think if the app-specific rules are to be enforced on user input, then it has to be in a sanitizer config rather than in TT - or, alternatively, make sure not to produce nodes and use a
I've only skimmed through this thread, so I'm sure I'm missing nuance. Please do point it out if so! At a high level, I see us creating two tools that layer together to form a pretty substantial mitigation for cross-site scripting. The sanitizer API can process a string, and remove content known to the browser to cause code execution (whether user-provided or not). Trusted Types can ensure that strings go through some arbitrary processing function before being "executed" via one or more DOM sinks. Assuming a sanitizer configuration that's appropriate for a given app, it seems clear that they could be composed in either direction with similar practical effect. That is, both of the following code snippets should produce the same result:

```js
// Example 1
const s = new Sanitizer({ config: goes, here: true });
el.replaceChildren(s.sanitize(input));
```

```js
// Example 2
const s = new Sanitizer({ config: goes, here: true });
const policy = trustedTypes.createPolicy("whatever", {
  createHTML: i => { return s.sanitizeToString(i); }
});
el.innerHTML = policy.createHTML(input);
```

The former seems initially preferable, given that there's a real risk that the snippets don't produce the same state, as the latter requires re-parsing a sanitized string post-sanitization, leading to some risk of mutation-based XSS. So, one potential option is to not integrate TT and the Sanitizer at all, but expect all sanitizer usage on TT-enforced pages to generate

I'm not sure that this option is sufficient, given Trusted Types' threat model, which (I believe) aims to protect confused-but-not-malicious developers from shooting themselves in the foot while parsing untrusted input. Clever Security Folks™ create policies which are known-good for a given application, and require them to be used by preventing others from being created (via the policy naming constraints). It seems to me that allowing a trivial route around those carefully-constructed policies could be dangerous in the face of custom elements and/or script gadgets that can convert attributes which appear inert to the browser into executable code. Our confused-but-not-malicious developer could do their best to do The Right Thing™, constructing a Sanitizer that doesn't quite cover all the bases, and relying upon it in security-relevant contexts with disastrous results.

Two and a half alternative approaches might be interesting to explore:

1 is trivial. 2 appeals to me. 3 is somewhat complicated, and reinvents at least part of the wheel. WDYT?

P.S. I'd note also that the analysis above becomes weirder if Trusted Types ever grows support for style injection points and/or the sanitizer does the same.
I think Sanitizer API needs to provide a way to defeat mXSS. We can't make mXSSes a developer's problem, because we (browsers) are creating the mutations. So we are in a better position to provide a safe way to avoid mXSS than developers. We are discussing potential solutions at #37.
If the custom elements and/or script gadgets are under Trusted Types enforcement, Trusted Types should block those. If not, confused-but-not-malicious developers will create a bug which can bypass Trusted Types anyway by using DOM APIs, which will make Trusted Types much less valuable. AFAIK, the only sinks I know of are the Blob API and dynamic import, in which the APIs themselves need to be fixed.
Given my above comments, I think something like
If we are talking about DOM-XSS here, I think that the output coming out of the Sanitizer API with the default config needs to be DOM-XSS free (assuming that TT is enforced). In that sense, the output ought to be something that "does not need an audit" in terms of DOM-XSS. If we can get to this point, the whole web community can try breaking the Sanitizer API instead of auditing each place where they use the Sanitizer API, which seems like the better place to be. And with something like

Examples:
Now imagine that more frameworks will support Trusted Types; these kinds of changes to assign static HTML using a TT policy would create noise for all developers and security teams using those frameworks, just to review and stay compatible with (in the case of Perfect Types) static HTML assignments. I think and hope that the Sanitizer API can help solve this problem.
For the sake of argument, let's assume for the moment that the sanitizer completely solves mXSS. In this world, re-parsing is no longer a security concern, but is probably still a somewhat-substantial performance concern. Different in kind, but still interesting to consider.
I think it's useful to examine the path to exploitation, given some unknown-to-the-browser sink. Let's assume something like

That seems substantially more straightforward, and therefore more likely, than a developer intentionally working around the type system via DOM APIs.
It's unclear to me how this can be the sanitizer's responsibility in a world where framework developers continue to invent new and exciting ways to convert inert text into code. The sanitizer can get rid of things the browser knows will cause code execution. I don't see how we can expect it to get rid of everything else as well (especially in a world where the default configuration can't be overridden).
If the sanitizer is a reasonable way of ensuring that these static bits of code are indeed reasonable to inject into a page, great! We have two ways of spelling that, one which integrates TT into the Sanitizer (via

If we're simply left with this aesthetic choice, my suggestion is that the story around the latter is clearer: Trusted Types enforcement ensures that untrusted strings run through a policy before they're injected into a page. That policy can (should!) be simply defined via a sanitizer configuration, or it can be more complicated and interesting. But it's the policy's responsibility to ensure application-specific rules are enforced, and the Sanitizer is a tool which can be used to perform that enforcement.
What is "Perfect Types"? |
The issue I see is that the DOM API is intentionally simpler to work with than the policies, so a bypass via DOM does not "feel" like a bypass. For me, the idiomatic way for a developer (or a 3rd party lib) to output data to the page is DOM+strings. A TT policy, with sanitizer or not, is only successful in stopping bugs if the natural way doesn't work at all. As such, I'm not convinced that TT could eradicate app-defined XSS vectors. It might be helpful if complemented with other controls (grep for
A single policy can be made more strict. When limiting policy creation, perhaps this could be said about all running policies too (I don't think this would compose well, as you'd probably have a single policy with no rules whatsoever for various reasons), but the problem still remains that the raw DOM doesn't care much whether a sanitizer or TT was used at all for custom sinks. So the imposed limitation holds only on request: if you remember to call a policy or a sanitizer, and ensure not to call the DOM instead. In practice, it's the latter that leads to bugs (after all, we had sanitizers and were grepping code for years) and neither TT nor Sanitizer can help here.
+100.
I think I side with @mikewest here. While it would be ok for the sanitizer instance to return
I think we're saying the same thing here?
This still seems pretty straightforward to me. Define a set of prefixes/suffixes/whatever, write them down somewhere, implement them in the sanitizer API (and maybe teach DOMPurify about them as well), and use those implementations to justify asking folks to start using them.
If we agree on this, then I think potential differences of opinion in the discussion above don't really matter. So I'll just concede it all for the moment, and agree with you agreeing with me about a potential approach. :)
Got it, thanks. |
Yes, for new sinks it's easy. For existing sinks (e.g.
@shhnjk - wdyt?
Yes, guarding sinks in the frameworks is the job of Trusted Types, not the job of the Sanitizer API. This is why it's fine for the Sanitizer API to return output which includes
I said the opposite :) So your mental model is right. Sanitizer API is more strict than TT.
As I explained above, Sanitizer API's output with default config should be natively DOM-XSS free. If the document is TT enforced, other framework-level sinks will be handled by TT in the framework level.
I think the change I'm suggesting would be difficult to revert later, so I'm okay with going with you guys' approach :) But I would love to see what I suggested in v2 or v3, or whenever we see the demand. I can already see a future where 3rd party libraries will be creating many TT policies, and the allowed TT policy list in the header will grow as large as what we see today in the allow-list of domains for CSP's script-src. In that world, I see security auditors crying, as they would need to create a grep tool for changes happening in allowed 3rd party TT policies and review changes in those (which are mostly static HTML assignments).
Re-reading all of this... and still trying to form an opinion. I think there is one substantial decision to make; and most of the follow-ons seem to be more about API aesthetics.
I think that's the matter of substance. The remainders are largely a matter of taste. I think #1 would look like this:

While #2 could look like this:

Have I correctly captured those options?
Yup, overall agreed, and I want #2, if possible :) Few comments:

I think that's a good idea. But then, we probably want to allow the following

Otherwise, frameworks have to have multiple if statements (and policy creation will still be necessary for functions that get called more than once, unless the document allows

But if we decided to do this, I do have another idea that might conflict with this, so I would love to discuss the Sanitizer API in some sort of a meeting.
The main problem with option 2 is that a lazy dev creating a sanitizer with allowed elements and allowed attributes set to include XSS sinks (like "script" or "onerror") can now completely bypass the protection that TT was supposed to offer.
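The concern can be sketched as follows. `NaiveSanitizer` is a deliberately hypothetical stand-in that honors its allow-list verbatim, which is exactly the failure mode described; the actual Sanitizer API is expected to drop script-capable elements regardless of configuration.

```js
// Hypothetical stand-in that honors its allow-list verbatim -- the failure
// mode being discussed. (The real Sanitizer API is expected to strip
// script-capable elements no matter what the config says.)
class NaiveSanitizer {
  constructor({ allowElements = [] } = {}) {
    this.allowElements = new Set(allowElements.map((e) => e.toLowerCase()));
  }
  sanitizeToString(input) {
    // Toy filter: keep a tag only if its name is on the allow-list.
    return input.replace(/<\/?([a-z]+)[^>]*>/gi, (tag, name) =>
      this.allowElements.has(name.toLowerCase()) ? tag : ""
    );
  }
}

// A lazy config that allow-lists a known XSS sink:
const lazy = new NaiveSanitizer({ allowElements: ["b", "script"] });
console.log(lazy.sanitizeToString('<b>x</b><script>a()</script>'));
// → "<b>x</b><script>a()</script>"  (the "sanitized" output still executes)

// The same input with a sane allow-list:
const strict = new NaiveSanitizer({ allowElements: ["b"] });
console.log(strict.sanitizeToString('<b>x</b><script>a()</script>'));
// → "<b>x</b>a()"
```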
Yes, that's the point I want to discuss in the meeting. So far, I think @mozfreddyb would never want to allow the Sanitizer API to return such an untrusted output, so that concern goes away. But I think it's useful for frameworks to have a sanitizer which can include an arbitrary allow-list. So my idea so far is that we should expose another thing called the CustomSanitizer API, which would allow that, but would have no way to return TrustedHTML without a TT policy.
BTW @mikewest, if we don't integrate TT with Sanitizer API at all, |
Hey, @otherdaniel!
I mostly agree with this distinction (with the caveat @shhnjk notes above), though I don't think it's merely aesthetic. We're telling developers a story about how they can secure their sites. In that story, my core assumption is that it's possible for the sanitizer to be configured unsafely in the context of a given application. This seems likely even if the sanitizer's configuration options are substantially limited, given the number of frameworks out there on the world wide web. With that in mind, the story that makes sense to me is something along the following lines:
The most practical approach to supporting that story is, IMO, something like the following:
We could, of course, go further and improve the integration for this case, gently pushing developers towards the sanitizer as a good way of addressing their policy needs and avoiding the performance overhead of reparsing at the same time. As a strawman for bikeshedding, let's do that via new

```js
const superSafePolicy = trustedTypes.createPolicy("super-safe", {
  sanitizeFragment: { super: safe, sanitizer: configuration }
});
el.appendChild(superSafePolicy.createFragment(maliciousInput));
```

That seems like a pretty good place to end up, and would support the

WDYT?
I missed this bit of @shhnjk's note:
This sounds good to me too. @otherdaniel / @lyf, perhaps y'all could move your next sync with @mozfreddyb to a @shhnjk-friendly time? I'd be happy to hop in, as I'm sure would @koto.
To clarify, the sanitizer API will always remove XSS sinks; that's the baseline, and this issue relies on that fact. I don't think there are plans to regress to allow developers to allow

Thanks for all the comments, I feel we're getting somewhere :) Let me put another option out there...
TT are not a good fit for guarding all strings->DOM transformations currently. I think we should enact boundaries that are strong, focused and consistent. Trusted Types is designed to guard carefully chosen sinks (which might include user-specified sinks in the future), and not to guard DOM node production. The only reason DOM manipulation is guarded is that some of the sinks happen to be in the DOM. Further, the only reason

So far it looks like I'm arguing for option 1. But actually, some form of 2 is beneficial, because we might have custom sinks that the Sanitizer API is not aware of. Also, the sanitizer output is made to be written to

There are a few invariants I think we should preserve:
It suggests that maybe the sanitizer should wrap TT; it could call a provided TT policy for all the sinks that are TT-enforceable, if they were not already rejected by the sanitizer via its baseline or config:

```js
// require-trusted-types-for 'script'; trusted-types 'foo'
trustedTypes.strawman.requireTrustedTypeFor('foo[data-xss-scriptUrl]', TrustedScriptURL);
trustedTypes.strawman.requireTrustedTypeFor('foo[data-foobar]', TrustedFoobar);
const sanitizer = new Sanitizer({
  some: config,
  trustedTypesPolicy: trustedTypes.createPolicy('foo', {
    createScriptURL: (i) => {
      // called by the sanitizer when it encounters foo[data-xss-scriptUrl]
      // not called for script.src as it gets removed by sanitizer itself.
    },
    createFoobar: (i) => {
      // my custom type validation, called on 'foo[data-foobar]'
    }
  }),
})

// Calls policy on all known sinks that were not already removed
// i.e. - only custom sinks.
nodes = sanitizer.sanitize(dirty)

// Same, the output type doesn't matter, the processing respects TT.
htmlString = sanitizer.sanitizeToString(dirty)

// Does not require createHTML in the policy, as all the TT sinks went through a policy anyway.
trustedHTML = sanitizer.sanitizeToTrustedHTML(dirty)
```

The default policy also works:

```js
// require-trusted-types-for 'script'; trusted-types 'foo'
trustedTypes.strawman.requireTrustedTypeFor('foo[data-xss-scriptUrl]', TrustedScriptURL);
trustedTypes.strawman.requireTrustedTypeFor('foo[data-foobar]', TrustedFoobar);
trustedTypes.createPolicy('default', {
  createScriptURL: (i) => { ... },
  createFoobar: (i) => { ... }
});
const sanitizer = new Sanitizer({
  some: config,
});

// Calls the default policy on all recognized sinks.
trustedHTML = sanitizer.sanitizeToTrustedHTML(dirty)
```

The advantage is that in the base case (no custom sinks defined in TT) the sanitizer never calls any policy, as all the sinks are already removed:

```js
// require-trusted-types-for 'script'; trusted-types 'none'
// Just works. Native sinks are never called, policy is not needed.
document.body.innerHTML = sanitizer.sanitizeToTrustedHTML(dirty)
```

... but the moment TT configuration gets more strict, you either need to adjust the sanitizer config (to reject the sinks early), provide a policy to the sanitizer (to filter the values for the sinks), or adjust your default policy for a fallback.
[Meeting still needs to happen, of course.] I've updated the proposal, largely by removing stuff. The goal was to put a minimal version of choice #2 in words, while not blocking any specific use case or future development. So:

Given where this currently is, this should arguably not go in this spec at all, but rather as a "monkeypatch" into the Trusted Types one. Made a note of that.
I think we should expose some way for developers to detect this on/off switch.
Makes sense. The potato way of detection is of course to just try-catch new Sanitizer().sanitizeToTrustedHTML("<p>"). But I think a read-only bool property on trustedTypes would make sense. (And would be easy to do.)
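A detection helper along those lines might look like the sketch below. Neither `trustedTypes.sanitizerIntegration` nor `sanitizeToTrustedHTML` is a settled name; both are assumptions taken from the discussion above, and toy objects are used so the sketch runs outside a browser.

```js
// Hypothetical feature detection for "does the Sanitizer mint TrustedHTML?".
// Both names used here are assumptions, not spec'd API.
function sanitizerProducesTrustedHTML(tt, sanitizer) {
  // Preferred: a read-only boolean on trustedTypes, if it exists.
  if (tt && typeof tt.sanitizerIntegration === "boolean") {
    return tt.sanitizerIntegration;
  }
  // Fallback ("the potato way"): just try it and catch the failure.
  try {
    sanitizer.sanitizeToTrustedHTML("<p>");
    return true;
  } catch (e) {
    return false;
  }
}

// Exercising the probe with toy stand-ins:
const ttOn = { sanitizerIntegration: true };
const noFlagSanitizer = {
  sanitizeToTrustedHTML() { throw new TypeError("integration disabled"); },
};
console.log(sanitizerProducesTrustedHTML(ttOn, null));                 // → true
console.log(sanitizerProducesTrustedHTML(undefined, noFlagSanitizer)); // → false
```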
Re switch, it's complicated :) w3c/trusted-types#36

I have a new sketch on how to make the default easy, to discuss at the meeting: https://gist.github.com/koto/7ad3b636a48a4fa39760dd2b88187162
Can we all agree on the following first? That will make things a little clearer :)

Thumbs up if you agree, or comment if you disagree :)
I agree, and I think this was generally agreed upon already. For quite a while now, the spec is written to have non-overridable built-ins, although we never quite agreed on their exact values. |
So... I read, and re-read the various proposals. It doesn't strike me that we're that far apart. 1, "Sanitizer never outputs XSS natively" is agreed. The main differences I'm seeing are

On both items, I guess I still think that giving whoever configured the TT enforcement an option to decide for or against automatic Sanitizer-to-TrustedHTML is a good idea.
Most of the options proposed are only opt-ins, and not opt-outs. Instead, I think the real options are:
As stated above, I'm okay with 1 or 2. But I would love to avoid 3.
I have a slight preference not to return a TrustedHTML from within the Sanitizer.
For the reasons outlined in #59, I think the opt-out could be not about whether the Sanitizer returns TT, but whether the Sanitizer (or a Sanitizer with a certain config) can be created in the first place. This detangles sanitizer creation from the TT logic and decouples the sanitizer from TT policies. #59 proposes some mechanisms for doing that, which allow authors to construct guards that fit their application, including ones that limit the Sanitizer to being used only within TT policies. I also think that, given the above, it's OK to unconditionally make the Sanitizer return

So, to answer @otherdaniel:
Yes, but on @shhnjk's options:
That would be a combination of 1 and 2 (with every sanitizer you can get
Not quite sure I follow. Developers who can't adopt TT would just not call the function (or would not have it present at all). How does adding a function (that is useful for TT-y developers) make the Sanitizer less usable for the others? FWIW, using TT in parts of your application without yet enforcing TT (e.g. because your dependencies are not yet compliant) is pretty much how you usually use TT; dependencies don't block you from migrating the code you control to TT. I think we should have a
It seems like we're not converging. At the risk of making things worse, maybe there's another way to slice our onion... :) My main pivot here is around the "Perfect Types" use case. As I understand it, at its core it means that the developer writes an XSS-free app, and then uses Trusted Types to lock it down. However, if we rely on

With this in mind, how about we structure it like so:
In more formal terms:
This would require a reasonably clearly written TT+Sanitizer document, but I don't think that would be too difficult to write. Advantages I'm seeing:
Any thoughts?
Why is that not the case? The XSS free app with Perfect Types cannot have custom sinks, as the data passed to those sinks would eventually need to reach native sinks (some application code takes
What would
So it's a union of

```js
trustedTypes.createSanitizingPolicy = function(name, config) {
  const sanitizer = new Sanitizer(config);
  return trustedTypes.createPolicy(name, {
    createHTML: (s) => sanitizer.sanitizeToString(s)
  });
}
```
Yeah, that's the one I don't like too much. It sounds like, to cater to edge cases (custom sinks), we put roadblocks onto using the sanitizer. I think we would be better off if TT and Sanitizer were widely available, even if that means we're not making it easy to secure applications with custom sinks.

I think that's the core difference. I don't think TT should sit on top of the Sanitizer. There are only two parts of TT that make it relate to the Sanitizer: (1) perhaps the Sanitizer should create a

(1) is a nice to have. If this ends up impossible, so be it; one can always create a policy for the sanitizer. Not all apps sanitize, too, so it's not "killing" Perfect Types for a lot of applications.

I think (2) would be misusing the TT API. Without a significant rewrite, and quite DOM-invasive patching, it's not built to guard all DOM operations (including ones that change custom sinks). Customizing the TT guards (e.g. defining which sinks need which type) makes applications less portable, and libraries difficult to write (can I do

But the more important part is that I think (2) is detrimental to the Sanitizer, as it means that sometimes the API is not available at all, unless you reach it through TT, which complicates everything, including polyfillability. And this model (TT on top of sanitizer) only brings some security benefits to authors with custom sinks who use the sanitizer and TT. The facts are: if you have custom sinks, you're going to have a bad time. No platform feature can save you from errors; even TT + sanitizer with their most advanced integration still require you to make sure you're not interacting with raw DOM. But even without any integration you still get useful primitives. TT make sure raw sinks are guarded, TT policies let you add some extra rules, and reviewing sanitizer configs (potentially with #59) makes sure that user data doesn't trigger your custom sinks. You could combine policies with sanitizers if you want, but the API doesn't force you to, because otherwise you're putting an "integration tax" on everyone else.
It's not the case because you can't write an app and then lock it down. What you have to do is write an app that works without TT; then TT-ify it by using

I'm assuming the goal is to provide an easy (as in: easily explained, and also easy to do) way to write an XSS-safe app, which should surely work in all browsers, and then use TT to lock it down when that is supported. Ideally, one could just add the CSP directive(s) and everything should continue to work. (I guess we can argue whether "perfect types" is the proper name for this, or whether I'm describing something else and I should make up my own term for it.)

To re-iterate: What I'd like to do is provide "a well-lit path" to XSS-free apps. I don't much care what we call it. Here's a strawman of what I'd like to write in a well-lit-path-to-XSS-free-apps document:
If you absolutely require string-to-innerHTML assignment, then:
So... this is a strawman. I'm happy with anything else. But it should have similar (or better) complexity. In particular, the base case should be simple.
I'd assume same behaviour as omitting
Yes. My intention is that I could have a construct that, except for initialization, works the same with and without TT. The problem is that without TT there are no policies, so I want to save the developer from having to invent some wrapper that will either call a plain sanitizer, or a policy that wraps a sanitizer. So if we have a policy that disguises itself as a sanitizer, we'd be there. (Admittedly, this particular idea is far from elegant. But it was the best I could think of.)
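One shape such a wrapper could take is sketched below: return a sanitizer-shaped object backed by a TT policy when Trusted Types is available, and by the plain sanitizer otherwise. All names are illustrative, and toy stand-ins are used so the sketch runs outside a browser.

```js
// Hypothetical wrapper for the "same code with and without TT" idea above.
// `makeSanitizingSource` and `toHTML` are illustrative names, not spec'd.
function makeSanitizingSource(tt, sanitizer, policyName) {
  if (tt && typeof tt.createPolicy === "function") {
    // Trusted Types available: route all output through a policy.
    const policy = tt.createPolicy(policyName, {
      createHTML: (s) => sanitizer.sanitizeToString(s),
    });
    return { toHTML: (s) => policy.createHTML(s) };
  }
  // No Trusted Types: fall back to the bare sanitizer.
  return { toHTML: (s) => sanitizer.sanitizeToString(s) };
}

// Toy stand-ins to exercise both branches outside a browser:
const toySanitizer = {
  sanitizeToString: (s) => s.replace(/<script[\s\S]*?<\/script>/gi, ""),
};
const toyTT = { createPolicy: (name, rules) => rules };

const withTT = makeSanitizingSource(toyTT, toySanitizer, "app");
const withoutTT = makeSanitizingSource(undefined, toySanitizer, "app");
console.log(withTT.toHTML("<i>x</i><script>a()</script>"));    // → "<i>x</i>"
console.log(withoutTT.toHTML("<i>x</i><script>a()</script>")); // → "<i>x</i>"
```

The calling code is identical in both environments; only the initialization differs, which is the property being asked for above.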
Oh, I see. To rephrase how I understand what you meant,

About the well-lit path for the XSS-free app, you need only TT to achieve it:
I don't think the sanitizer changes anything here, regardless of the integration model. Even without any integration the sanitizer output cannot cause XSS without the data passing somehow through one of the policies later (the final XSS sink will always be native, and these are guarded). If you can get a

Your code may incorrectly assume that all

well-lit path

With the above in mind, the well-lit path is:
If you need string-to-innerHTML:
With TT
+1, that's important, but there will always be some form of branching for the TT and non-TT cases. Either based on
I'm not sure I got this. How can you have code that does not require a wrapper, such that it works both for Chrome+TT+TT headers and Firefox? The simplest thing is branching on
Sorry, because of the many comments above, I'm not going to reply to all of it and will just dump my opinion. Just create a

This way, JS libraries just have to branch on Sanitizer availability, because the only place where the app would break is when the site blocks the sanitizer with
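Branching on Sanitizer availability could look like the following sketch; `setSanitizedHTML` and the `sanitizeToString` method name are illustrative assumptions, and toy objects stand in for the DOM element and the platform Sanitizer.

```js
// Hypothetical library-side branch on Sanitizer availability.
function setSanitizedHTML(el, dirty, SanitizerImpl) {
  if (typeof SanitizerImpl === "function") {
    // A Sanitizer (platform or polyfill) is available: sanitize, then assign.
    el.innerHTML = new SanitizerImpl().sanitizeToString(dirty);
  } else {
    // No Sanitizer at all: inject as inert text rather than unsanitized HTML.
    el.textContent = dirty;
  }
}

// Toy element objects and a toy Sanitizer so the sketch runs outside a browser:
class ToySanitizer {
  sanitizeToString(s) { return s.replace(/<script[\s\S]*?<\/script>/gi, ""); }
}

const el = {};
setSanitizedHTML(el, "<b>x</b><script>a()</script>", ToySanitizer);
console.log(el.innerHTML); // → "<b>x</b>"

const noSanitizerEl = {};
setSanitizedHTML(noSanitizerEl, "<b>x</b>", undefined);
console.log(noSanitizerEl.textContent); // → "<b>x</b>"
```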
This would be an antipattern, I think. Polymorphic return types force callers into making explicit checks for the return value type anyway in order to have any type safety. For example, in TypeScript this would require typing the function like
which prohibits you from using the value as either type (you can't call |
As koto said, we've had customer bugs with this in the Trusted Types launch, where people called e.g.
So are you saying that calling
Comment on interactions between Trusted Types and the Sanitizer API, in an appendix.