Consider a different process for standardizing registerProtocolHandler schemes #9158
IMHO RFC schemes should be approved. For the rest, there should be a discussion prior to implementation. |
I'd like to hear more about this. What process would you prefer to use for landing code in https://searchfox.org/mozilla-central/source/dom/base/Navigator.cpp#886 , which does not involve consulting anyone at Mozilla? |
There are two ways to go about this question:

- If we would like to strictly follow the standards, we should defer to the existing bodies: the Internet Engineering Task Force (IETF), the Internet Research Task Force (IRTF), the Internet Architecture Board (IAB), and Independent Submissions.
- Another way to go (which is my favorite) is to create a DAO-style entity or project based on a voting system, or even a new repo here on GitHub, which would collect all the pros and cons of implementing a proposed protocol.

I assume the formal way is now out of the question because of the protocols that have already been introduced. Then the only way will be not to take away the freedom of choice, but to require enough explanation and use cases. Obviously, an RFC is a use case as well. Just my 10 cents, but I will be happy to elaborate. |
@domenic Once it has landed in the spec it's fine to need engineering and review from Mozilla to land in Gecko, as it's a trivial change to add a new scheme. For preferred process for adding schemes to the spec, maybe something like:
|
So you're saying, Gecko would pledge to automatically lend engineering support for anything in that kind of category? And you would like to have the editors be responsible for gathering and collating that information (or asking the proposers to do so)? That seems reasonable, but it is largely a Gecko policy for how they want to manage their codebase. For Chromium, it seems like there's very little willingness to add schemes. I think at least three Chromium engineers have bounced off trying to do so, unable to get appropriate approvals: @fred-wang, @mgiuca, and @mustaqahmed, if I recall correctly. (I know @javifernandez is also working in this area but I'm not sure if he's tried to add any new schemes.) So I'm not hopeful about Chromium making a similar policy for how they manage https://source.chromium.org/chromium/chromium/src/+/refs/heads/main:third_party/blink/common/custom_handlers/protocol_handler_utils.cc;l=66;drc=837cc12de25a288edf3ac222f7265c9936e69552;bpv=1;bpt=1 . |
So as the instigator here, I should probably weigh in. The concern here is not really about preserving engineering resources on our end; if things started getting high volume, we might automate something. The concern has a few aspects:
This part of the specification isn't central to HTML; it is a point where HTML interfaces with the messy, non-interoperable world outside of it. A different process is appropriate.

Our experience with this sort of list (a registry, if you will) in the IETF is that the higher the bar you set¹, the more damage it causes. That's counterintuitive, because bad designs are bad and the use of a registry seems like a great way to head off bad ideas. But when the bar is high, people just don't register their scheme. The harm that comes from not registering something is immediately obvious here: if it is not on the list, browsers won't implement it, because browsers are good at tracking WHATWG decisions and specifications.

However, the IETF has a somewhat looser policy for registering new URI schemes (though the registration process is pretty onerous, to the point that only Dave Thaler seems to have bothered to register many of them). But the IETF policy isn't the operative one: operating systems exercise no such control, so there are many URI schemes in use. Yet web applications cannot access those schemes. That harms the web, at least relative to native applications.

So I think that this allowlist would be better off as a blocklist (one that lists "http", "https", "file", "about", "chrome", and anything we can identify as being outright dangerous). Then we would either have no further restriction, or a restriction to only permit registration of schemes from the IANA registry. The advantage of having no allowlist is that maintenance is much easier. The advantage of letting IANA/IETF manage this is that they already have procedures that deal with the business of requesting specifications and whatnot², so maybe you get better adherence to standards. I personally have a preference for nothing, on the basis that IANA/IETF have not been able to attract registrations for many of the schemes that are in wide use³.
However, neither option addresses the risk that a scheme might appear on the list that is manifestly unsafe to expose to the web. That's a risk that increases here. Though URI schemes are supposed to be safe, they manifestly are not. For instance, capability URIs are a thing, regardless of scheme, and having a web site intercept a capability URI intended for a native application might cause real problems that are not adequately counterbalanced by any prompting that a user agent might do in response to a call to registerProtocolHandler().
|
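The allowlist-to-blocklist switch proposed above can be sketched as a simple predicate. This is an illustrative sketch, not spec text; the blocklist contents and the function name are assumptions:

```javascript
// Illustrative sketch of a blocklist-based check, replacing the spec's
// current allowlist. The set contents here are examples, not a proposal.
const BLOCKLIST = new Set(["http", "https", "file", "about", "chrome"]);

// RFC 3986 scheme syntax: ALPHA *( ALPHA / DIGIT / "+" / "-" / "." )
const SCHEME_SYNTAX = /^[a-z][a-z0-9+.-]*$/;

function isRegistrableScheme(scheme) {
  const s = scheme.toLowerCase();
  // Anything syntactically valid and not explicitly blocked is allowed.
  return SCHEME_SYNTAX.test(s) && !BLOCKLIST.has(s);
}
```

Under this model, a new scheme such as "ipfs" would be registrable without any spec change, while "https" would remain blocked.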
Just a note: instead of …. In your point 2, I somewhat agree that the opinion is tendentious, but strongly disagree with "The same goes for the next planet-destroying technology that is referenced in this way." I assume this is off-topic now. |
My colleague @fred-wang was working in the past on a proposal to add a few dweb schemes to the HTML spec's safe-list. Ultimately, I followed up, splitting the effort to focus first on ipfs. I've already filed standards position requests for Firefox. However, regarding the issue of leaving browsers out of the scheme standardization process, I don't have a strong opinion. I see good points on both positions, tbh. |
I only have a little context on this topic and will have to get caught up with the positions of other experts in Chrome, especially around security. But I wanted to say that @martinthomson's points really resonate with me. I think we should always actively work to avoid the temptation to gatekeep unless there's a very compelling and concrete reason. I think we can protect users from abuse without having to explicitly decide to endorse each experiment with a new scheme, and we can always add stronger warnings or outright block schemes that turn out to be a problem for our users in practice.

I don't really like it, but note that Chrome already has an extremely wide-open model on Android, where any website can invoke an intent: URL to talk to any Android app, usually without any browser UI warning the user. Having a standardized list of schemes for registerProtocolHandler while having a completely wide-open communication channel to Android's equivalent proprietary mechanism seems to me to be at best inconsistent, and at worst one more reason to encourage exploration and innovation to occur in proprietary app platforms instead of on the web. |
Some housekeeping: we have a still-open issue #3998 for converting the allowlist to a blocklist (essentially allowing all schemes to be used as handlers, other than a handful of harmful or problematic schemes like …). This bug was originally filed as streamlining the process for adding schemes to the allowlist, but @martinthomson's and @RByers's comments seem to be moving in the direction of converting the allowlist to a blocklist. Just so we're clear, both of those options are on the table here.

(FWIW: I agree with Martin and Rick, and I think we'd be in a much better place if we just switched to a blocklist; I made several arguments for that on #3998. As I said there, I don't think the current scheme buys us any security advantage, though I think we need to consult with security folks on that. There are potential compat risks with opening it wide up, but I think that ship has already sailed by allowing native apps to register whatever they want. This just allows web apps to catch up.) |
Is the expectation here that end users understand arbitrary schemes and can make reasonable decisions about them? |
The understanding is that the content of arbitrary schemes is not an inherent risk to users if it is passed to an agent of their choice. The main risk is if the link itself carries information that the site should not be able to access (see also capability URIs). That is, the user erroneously decides to allow a site to receive URIs for a given scheme, when those URIs carry information intended for a different entity. This is a risk that exists regardless of the scheme: mailto URIs carry this risk too, and all URI schemes potentially do. That makes the user decision less about the security of links and more about delegation of agent authority, annoyance mitigation, and other factors. Delegating authority to act as an agent for a URI scheme is a powerful capability, certainly, but I'd suggest that the only way to deal with that is to disable … |
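The risk described above (a registered site receiving URIs that carry secrets meant for someone else) can be illustrated with the "%s" template substitution that handler registration uses. The function and site names below are hypothetical:

```javascript
// Sketch of the leak: when a site is the registered handler for a scheme,
// the full clicked URI, including any secret it carries, is substituted
// into the handler's "%s" template and delivered to that site.
function buildHandlerNavigation(template, uri) {
  // Per the registerProtocolHandler() substitution rule, "%s" is replaced
  // with the percent-encoded URI that was activated.
  return template.replace("%s", encodeURIComponent(uri));
}

const nav = buildHandlerNavigation(
  "https://handler.example/open?target=%s",
  "someapp:activate?capability-token=SECRET" // token intended for a native app
);
// The handler site now receives the token in its query string.
```

No scheme-level vetting can rule this out; the URI contents, not the scheme name, carry the risk.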
So giving arbitrary websites control over say, |
It depends on what you think you are protecting. Zoom might be unhappy if someone built a client that handled … There is potential for this to be used by sites in an attempt to hijack access, which a user might accidentally permit. That would be annoying, but we need mechanisms for correcting that sort of error, even when the choice was deliberate. For that, operating systems do provide some interfaces for high-value stuff, and browsers offer UX for managing the small number of schemes that they have registered. Firefox has a list in settings that covers both links to apps (like … |
I'm warming up to the idea of switching to a blocklist. Disallowing anything not in the IANA registry means there's at least some friction to start using a new scheme, which makes it harder to hijack typo/lookalike schemes. The disadvantage is that there's still the delay from minting a new scheme to being able to use it in browsers. I also think we should add all browser-specific schemes like "chrome" to the spec's blocklist, for the reasons @annevk outlined in #3998 (improve interop and web compat). If Zoom would like … |
I have to say that it's still very much unclear to me that you can have a good user interface beyond a limited set of schemes that correspond to tasks end users understand.
So you are comfortable forcing this on all apps (which I think includes most macOS/iOS apps)? Presumably they would now all have to start caring about websites in Chrome or Firefox hijacking their schemes. |
I tend to think that we should deny Zoom the ability to be added to a blocklist. We're not in the business of protecting apps from their competition. Also, that would stop the Zoom website itself from handling those links, which is something that it could do, should it choose to. Finally, in this case at least, there are probably enough technical hurdles to clear before a functional site could intercept the link; we're just looking at an abuse scenario.

Better UX seems to be the answer to most of these issues. For instance, browsers might add a confirmation notice before opening the handler site on the first launch. That might help with accidental interceptions. And I don't see why a browser couldn't do more, maybe with a short list of schemes that they understand and for which they loosen any stricter handling. Schemes that are natively understood can maybe be explained better ("Allow this to become your default email sender? [y/n]"), so that makes sense. Is there anything other than the potential for users to understand a scheme that is getting in the way here? |
Yes, but they already have to care about other native apps hijacking their schemes. (e.g. what's to stop a native app from claiming |
So with my editor hat on, what I'm seeing here is implementer interest from both Mozilla and Chromium to switch to a blocklist-based approach. That meets our criteria for changing the standard. (Especially since those are the only two implementers of registerProtocolHandler!) It seems like the remaining work is for someone to drive the creation of the blocklist, and updates to the spec and web platform tests. Do we have any volunteers? |
Should |
For the record, I'm supportive of this for Chromium and pretty sure it would pass an I2S (though we'll need a spec change to go through the formal process). Yes, I think … More important than the specific list of schemes to block is, I think, the principles we'd use for deciding what should be on the blocklist. Perhaps just "any scheme that is implemented strictly internally by major browser engines and host OSes"? Critically, I agree that schemes commonly used by applications should not be on the blocklist. |
Searching the Chromium code, it's unfortunately not likely we can come up with an exhaustive list, but this seems to be a pretty good superset. There are a bunch of obscure things, e.g. ChromeOS-specific implementation details; I'm not sure how much we should worry about trying to figure out whether they need to be on the blocklist or not. Since they are implementation details, if we find a conflict with a website we could consider treating it as a Chrome bug and just rename the scheme to use the chrome- prefix. |
Thanks @RByers!
This makes sense to me. Outside of Android, are there OS-specific schemes we should add? |
As you can see from the link I shared, it looks like we have some ChromeOS-specific ones. Frankly, my hunch is that it's not worth adding any of them, and instead we should treat any conflicts or issues that causes as a Chrome / ChromeOS bug. I don't know about other OSes. I've seen, e.g., iTunes / App Store have some special URLs on macOS IIRC, but again I think these are cases of apps (like Zoom) which, if a user really wanted to substitute them with a clone (via explicit permission grant), I don't see why browsers should block.
|
Looking through those search results @RByers posted, with an eye to the ChromeOS ones, they feel to me like they won't be affected. This will only be a problem for URL schemes that the system uses for navigations in the browser, but I suspect most or all of these are used by other systems outside of the browser. The list of ChromeOS-specific schemes that I can find in that list:
We would probably need to go through each one in detail, but I doubt most of these are things that you click links to in the browser which then get intercepted by the OS. I'm not sure if some of these are used for internal UI pages (similar to … Either way, I wouldn't want a blocklist in the spec that blocks ChromeOS-specific things, nor common things like … |
ChromeOS is maybe not the benchmark I would use. A site that handles … It is possible at least that the OS would also act to protect …

My general position here is that if there is an identifiable risk to users, a scheme is a candidate for the blocklist. Not blocking something because sites might need access should be assessed on the basis of the risk to users of standard browsers, not something like ChromeOS. ChromeOS can selectively loosen the list if it deems things to be safe, but it can do that in the same way that it exposes other APIs that should not be exposed to the web (I don't want to start a fight here, so I won't start listing examples). I consider the above to be sufficient risk, but I can see that ChromeOS might need a very different threshold. I would prefer if ChromeOS maintained an exception list. Similar logic applies to …

I have insufficient information on the other items that are listed there. None appear on the list so far. I agree that quic/socks/direct/etc. don't belong here. They appear as a consequence of being used in proxy.pac. Those aren't URI schemes, though. |
@martinthomson sorry, I didn't mean to imply we should special-case all those. I was responding to @RByers by going through the schemes used by ChromeOS, concluding that they are either schemes that aren't used in the browser at all (but by other parts of the OS) or schemes where we shouldn't prevent sites from registering as an alternative handler. I generally agree that we should not blocklist schemes just because some OS or other makes special use of the scheme. But we should (perhaps obviously) leave it open for a user agent or OS to add its own blocklist, in case there are special schemes that, in the context of that OS, should not be registrable by sites.

For …, consider: in 2006, Firefox added …
I think that is too high a bar. We can always identify a risk to users of allowing a website to intercept clicked URLs, for any scheme, especially if we consider merely leaking the contents of a URI to a site as a "risk". I think the bar for the blocklist should be "having any application (website or a "real" app) intercepting this URL would break basic assumptions about how the web works" (e.g. … |
This seems to suggest that we have a standard blocklist, but user agents can selectively remove schemes from the blocklist if they think they are safe. I think we could do that, but it would be akin to adding non-standard APIs to the web. My preference would be to have a minimal blocklist, and let user agents selectively add schemes to it if they have a system-specific reason why a scheme is unsafe; that would be akin to a user agent choosing not to implement a given API, which is in general much less of a compatibility problem. Looking at Simon's list, I would put on the blocklist just the ones that are common to Chrome, Firefox and Safari: … |
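The model being discussed here, a minimal spec-level blocklist that user agents may extend but not shrink, could be sketched like this. All names and list contents are illustrative assumptions, not proposed spec text:

```javascript
// Spec-level blocklist: minimal and shared by all engines (contents illustrative).
const SPEC_BLOCKLIST = new Set(["http", "https", "file", "about", "javascript", "data", "blob"]);

// Per-UA additions for system-specific schemes (hypothetical example entry).
const UA_BLOCKLIST = new Set(["chromeos-internal"]);

// A UA may add entries but never remove spec-level ones, so sites get a
// consistent interoperable floor while each OS can protect its own schemes.
function isBlockedScheme(scheme) {
  const s = scheme.toLowerCase();
  return SPEC_BLOCKLIST.has(s) || UA_BLOCKLIST.has(s);
}
```

The asymmetry is the point: UA-specific additions behave like an unimplemented API (a local limitation), whereas UA-specific removals would behave like a non-standard API (a compatibility trap).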
That's a pretty reasonable position. I do want to push back on your stated inclusion principle a little though:
On the basis that registered handlers involve changing how the entire system operates, not just the browser, we do need to consider the system as a whole as part of this. That is, "having any application intercept this URL would break basic assumptions about how the web - or operating system - operates". I think that
I tend to think that … Perhaps then the answer is that there is, as you say, a relatively short core blocklist, plus an adjunct list of things that we say "should" be blocked in most cases, so that sites cannot expect those to work. I don't think that Firefox would have any problem with blocking … What I don't want to have is wide divergence in lists; that's not a great experience for site authors. I definitely agree regarding … |
We can allow UAs to add more to the blocklist, but I think some interop on the blocklist is useful. The reason is to prevent a situation where one browser allows a scheme to be registered and sites start using that scheme (e.g. |
In my opinion there are more possibilities:
Other questions can arise:
Also note:
|
I also agree with point 2 of #9158 (comment) from @martinthomson. This can be seen in a political way too: maybe some people would conclude that a specific scheme is used much more by the right wing or the left wing and therefore want to disendorse it. I don't think we should want that. |
Related: #8503 #9154
It's not clear that Chromium, Gecko and WebKit should be gatekeepers for adding schemes to registerProtocolHandler(), since the interest in adding a new scheme may come from something that is not relevant to browsers. I received feedback internally at Mozilla that we'd prefer a process where we don't need to be consulted for additions to the list of supported schemes. Thoughts?
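For context, the API under discussion is used roughly as below. The validateRegistration function is an illustrative paraphrase of the current rules, not a real browser API, and the scheme list shown is an illustrative subset of the spec's safelist, not the authoritative list:

```javascript
// Sketch of the checks a registration must pass under the current spec.
function validateRegistration(scheme, template) {
  // The handler URL template must contain "%s", which receives the clicked URI.
  if (!template.includes("%s")) {
    throw new SyntaxError('handler template must contain "%s"');
  }
  // Custom schemes must use the "web+" prefix; anything else must appear on
  // the spec's safelist (shown here as an illustrative subset).
  const SAFELIST = ["bitcoin", "geo", "irc", "magnet", "mailto", "matrix",
                    "sip", "sms", "ssh", "tel", "webcal", "xmpp"];
  if (!scheme.startsWith("web+") && !SAFELIST.includes(scheme)) {
    throw new Error("scheme not on the safelist");
  }
  return true;
}

// Actual registration (browser-only; "web+coffee" and the URL are placeholders):
if (typeof navigator !== "undefined" && "registerProtocolHandler" in navigator) {
  navigator.registerProtocolHandler("web+coffee", "https://handler.example/brew?uri=%s");
}
```

It is the hard-coded safelist in the middle step that this issue proposes rethinking.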