
Possible use of //@include: for intricate registry namespaces within ICANN section #981

Closed
dnsguru opened this issue Feb 26, 2020 · 2 comments
Comments

dnsguru (Member) commented Feb 26, 2020

PRs #277 and #276 were closed, as were issues #270 and #274, and the PRs were not implemented.

Was there a problem here that still needs to be solved? Would the ability to define an include URI, or a referral to another resource with deeper detail, be something that could solve it?

This floats an idea that might help with scaling challenges for the PSL, where there are intricate registry namespaces within the ICANN section of the list.

The specific example is the .NAME TLD, with its many surname labels that can have third-level registrations. There are current and future namespaces within the ICANN/IANA section where the registry defines a substantial and intricate sub-structure, and users might benefit from having that namespace fully articulated for the respective properties, as is done elsewhere in the PSL.

The problem with such situations is that fully enumerating these namespaces could make the ICANN section grow dramatically (#277 alone proposed adding nearly 20k lines to the PSL).

This ties to namespaces with extensive internal structure defined by the TLD administrator, such as .POST (#270), .US (#274 and #276), .NAME (#277), .BR, or .JP (certainly others exist).
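
To make the idea concrete, here is a hypothetical sketch of what such a directive might look like within the PSL's existing comment-based syntax. The `//@include:` directive itself, the URL, and the fragment contents are all illustrative placeholders, not an existing or agreed format:

```
// ICANN DOMAINS section of the main list
name
// Hypothetical directive: pull the registry-maintained detail from a separate resource
//@include: https://registry.example/name-suffixes.dat

// Hypothetical contents of the referenced resource, maintained by the registry
// (placeholder labels only; the real .NAME surname space is far larger):
example-surname1.name
example-surname2.name
```

A consumer that understood the directive could fetch and merge the fragment, while consumers that did not could ignore it or be handed a pre-expanded list by the PSL build tooling.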

In working on ideas for roadmap discussions, scaling comes into focus as something that merits discussion; this is a roadmap idea to feed into #671.

sleevi (Contributor) commented Feb 26, 2020

I'm not sure how any form of syntax relates to the reasons these were closed.

The 'simple' reason these weren't usable is that the primary consuming software simply cannot handle such large lists. Were such lists accepted into the PSL, then for Chrome we'd simply have to fork the PSL at that point: the complexity of such a large structured list imposes lookup costs for all domains, and it requires the list to be distributed to all users at tremendous cost (the incremental binary cost, multiplied across a billion users, is not trivial).
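
For context on why list size matters to consumers, here is a deliberately naive sketch of how a registrable-domain lookup consults the rule set. Real consumers such as Chrome precompile the list into a compact structure rather than scanning it, but every accepted rule still has to be represented, shipped, and searched. The rule set below is a placeholder, and exception ("!") rules are ignored for brevity:

```python
# Naive PSL lookup: find the longest matching rule for a host and return the
# registrable domain (matched public suffix plus one more label).
def registrable_domain(host, rules):
    labels = host.lower().split(".")
    for i in range(len(labels)):
        candidate = ".".join(labels[i:])
        wildcard = ".".join(["*"] + labels[i + 1:])
        if candidate in rules or wildcard in rules:
            if i == 0:
                return None  # the host itself is a public suffix
            return ".".join(labels[i - 1:])
    return None  # no rule matched; real consumers fall back to the implicit "*" rule

# Tiny illustrative rule set (placeholders, not real PSL contents):
rules = {"com", "name", "example-surname1.name"}
print(registrable_domain("www.shop.example-surname1.name", rules))  # shop.example-surname1.name
print(registrable_domain("www.example.com", rules))                 # example.com
```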

Rather than over-indexing on a technical solution, perhaps it's best to focus on first defining the problem, and making sure there's a common understanding about the reason for rejections?

I'd like to suggest we close this issue as WontFix, and perhaps have some discussion about the challenges, such as rate of growth, performance constraints (e.g. for cookie lookups), distributed costs, etc.

There are alternative solutions that could have been possible. For example, imagine if names that were too complex to reasonably accommodate simply had cookies disabled for them by UAs. This would allow UAs to safely elide these entries ("forking" the list) while still allowing their expression within the PSL, simply ignoring them. However, we can only entertain those options by first understanding the problem.
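
A rough sketch of what that eliding behaviour could look like in a consumer, under the purely illustrative assumption that any TLD whose rule block exceeds some threshold is dropped from the compiled list and treated as cookie-disabled (none of this reflects what any UA actually does):

```python
# Illustrative only: elide oversized per-TLD rule blocks and remember those TLDs,
# so a UA could refuse domain-wide cookies under them instead of shipping every rule.
MAX_RULES_PER_TLD = 100  # hypothetical threshold

def elide_complex_tlds(rules_by_tld):
    compiled = {}
    cookie_disabled = set()
    for tld, rules in rules_by_tld.items():
        if len(rules) > MAX_RULES_PER_TLD:
            cookie_disabled.add(tld)  # keep only the TLD itself; its detailed rules are ignored
        else:
            compiled[tld] = rules
    return compiled, cookie_disabled
```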

@dnsguru dnsguru added the ❌wontfix label Feb 26, 2020
@dnsguru dnsguru self-assigned this Feb 26, 2020
dnsguru (Member, Author) commented Feb 26, 2020

> I'd like to suggest we close this issue as WontFix, and perhaps have some discussion about the challenges, such as rate of growth, performance constraints (e.g. for cookie lookups), distributed costs, etc.

Closing as wontfix
