[consensus] gTLDs are always reserved #256

Closed

Conversation

pinheadmz
Member

After a discussion with @jacobhaven, a good deal of concern was brought up that root zone TLDs like com, org, and info could be auctioned and owned by users after the claim period ends (4 years on mainnet).

I'm not sure I entirely agree with this position, but came up with this patch to implement the rule and maybe we can have public discussion about it here.

@codecov-io

Codecov Report

Merging #256 into master will increase coverage by <.01%.
The diff coverage is 100%.


@@            Coverage Diff             @@
##           master     #256      +/-   ##
==========================================
+ Coverage   53.12%   53.12%   +<.01%     
==========================================
  Files         129      129              
  Lines       35751    35754       +3     
  Branches     6023     6024       +1     
==========================================
+ Hits        18993    18996       +3     
  Misses      16758    16758
Impacted Files Coverage Δ
lib/covenants/rules.js 73.3% <100%> (+0.11%) ⬆️
lib/utils/binary.js 56.41% <0%> (-2.57%) ⬇️
lib/covenants/reserved.js 97.72% <0%> (+1.13%) ⬆️


Legend
Δ = absolute <relative> (impact), ø = not affected, ? = missing data

@tynes
Contributor

tynes commented Sep 26, 2019

I feel like allowing anybody to register com after 4 years could result in a lot of scams. Even if com is blacklisted by the mainline software at the DNS resolver level, it's possible to change the blacklists, so users would need to run their own full nodes and also run signed releases. I think it's more likely that Verisign doesn't claim com within 4 years. Also think about TLDs whose operators don't have the infrastructure in place to claim their names because they don't support DNSSEC - it's possible that they won't have the infrastructure ready within 4 years.

@kilpatty
Contributor

Follow up from Telegram chat -

While I agree that it's pretty unlikely Verisign or many TLD owners claim their names within 4 years, I don't know if making the TLDs permanently reserved is a good solution to this.

There is a possibility Handshake fails to find any product-market fit in the DNS/root zone ecosystem, but does find a fit in something completely unrelated. If we make DNS-specific preferences permanent, those will still exist should the community no longer use Handshake for DNS.

As far as I'm aware, we can change the reserved time with a soft fork? If we get 4 years down the road and Handshake hasn't either (a) attracted enough TLD owners to claim their names or (b) found a fit in some other use case, then I would suggest we increase the reserved time at that point.

That being said, I think 4 years is really too soon for the reserved names to start expiring. I think we should consider increasing that time from the start, and go from there.

I think it may also be a good idea to have 2 reserved times. One for the Alexa top 100k names, and another for the current TLDs/gTLDs. Something along the lines of 5 years for the first and 10 years for the second.

@pinheadmz
Member Author

pinheadmz commented Sep 26, 2019

then I would suggest we increase the reserved time at that point.

That would be making the rules less restrictive, a hard fork. We can reduce the claimPeriod as a soft fork (older software won't care that the claims have stopped early), but not extend it.

Also, from the w.p.:

Additionally, in extreme circumstances, the community can institute a hard-fork to manually assign rightsholders their names with sufficient consensus.

@kilpatty
Contributor

kilpatty commented Sep 26, 2019

@pinheadmz I thought it would be making the rules more restrictive?

We would be saying you can't open these names for an even longer period of time than previously thought, which to me is a more restrictive change.

Mining nodes would reject any opens to the names, and the old nodes would still see that as valid (since there would be no opens to the names).

edit: whoops, I see why this would be a hard fork, since the old nodes would not accept claims past the reserved period. So it's not just about rejecting opens on the names, but about accepting claims.

Perhaps then we should set the claim time to something extremely large? 10-20 years, and then we can soft fork it down if wanted.

Edit 2: On second thought, if we want the ability to soft-fork an increase in the reserved-name time period, we could allow the owners of those names to claim at any point, granted the name has not already been opened by someone else.

So if 4 years passes, but no one has issued an "OPEN" yet for amazon, then they can still submit a claim and have it be valid. That way, we only have to increase the time period where we reject "OPEN"s on those names.
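
For illustration, a rough sketch of that idea (hypothetical helper names like isReservedName, hasBeenOpened, and openPeriod - not actual hsd code):

// Hypothetical sketch of the proposal above, not actual hsd rules.
// OPENs on reserved names stay invalid for a long window (which could
// later be extended as a soft fork), while a rightful CLAIM remains
// valid at any height as long as nobody has opened the name yet.
function isValidOpen(nameHash, height, network) {
  if (isReservedName(nameHash) && height < network.names.openPeriod)
    return false; // still reject OPENs on reserved names

  return true;
}

function isValidClaim(nameHash, view) {
  // A DNSSEC claim is accepted at any height, provided the name
  // has not already been opened by someone else.
  return isReservedName(nameHash) && !view.hasBeenOpened(nameHash);
}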

@pinheadmz
Member Author

Huh, yeah - between OPENs being allowed and CLAIMs being rejected, that value is something that cannot be soft-forked in either direction...?!
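
A simplified sketch of why, using toy functions rather than the actual hsd validation code paths:

// Both rules key off the same network.names.claimPeriod height.
function isValidClaim(height, network) {
  // CLAIMs are only accepted while the claim period is still open.
  return height < network.names.claimPeriod;
}

function isValidOpen(height, network) {
  // OPENs on a reserved name are only accepted once the period ends.
  return height >= network.names.claimPeriod;
}

// Lowering claimPeriod tightens the CLAIM rule but loosens the OPEN rule;
// raising it does the opposite. Either way, some blocks become valid to
// new nodes that old nodes would reject - i.e. a hard fork.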

@0xhaven

0xhaven commented Oct 10, 2019

Having given this more thought: if we're not going to protect users at the consensus level, then we should not reserve any names (besides the existing blacklist: example, invalid, local, localhost, and test).

This would make it clear to users that all of the governance on conflicts actually happens at the resolver/user-agent level, not the blockchain level.

I think it's very confusing and dangerous for HS users to have names treated as a protected class only to have that protection stripped away.

@tynes
Contributor

tynes commented Oct 10, 2019

Having given this more thought: if we're not going to protect users at the consensus level, then we should not reserve any names (besides the existing blacklist: example, invalid, local, localhost, and test).

Could you clarify? Are you talking about the Alexa Top 100k as well as the claimed trademarks as well as the reserved ICANN names?

@0xhaven

0xhaven commented Oct 10, 2019

I'm mainly just talking about the ICANN TLDs. That's the real attack vector.
As I understood it, the Trademark claims don't expire like the other reserved names. They exist on the chain and can't be OPENed. Do they have to be renewed?
And it might make economic sense to temporarily reserve the Alexa 100k to encourage them to claim their names.

@pinheadmz
Member Author

As far as I know, gTLDs, reserved names, and trademarked names are all in the same class of reserved names. They all require DNSSEC proofs to claim, and are all un-claimable after 4 years (when they can be OPEN'ed and auctioned). You can see the entire list at https://github.com/handshake-org/hs-names/blob/master/build/names.json

Root names (gTLDs) are identified by a flag value of 1, for example:

  "00034c407484cdb33f3552247f0fae3d4a38c9e537fc8f39ddb51f6aa4b438c5": ["nec.", 1],

A root name claim pays the miner a higher fee:

let value = root ? this.rootValue : this.nameValue;

All names on this list are otherwise treated equally:

hsd/lib/covenants/rules.js

Lines 363 to 375 in af86d48

rules.isReserved = function isReserved(nameHash, height, network) {
assert(Buffer.isBuffer(nameHash));
assert((height >>> 0) === height);
assert(network && network.names);
if (network.names.noReserved)
return false;
if (height >= network.names.claimPeriod)
return false;
return reserved.has(nameHash);
};
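
For reference, the rule this patch proposes would look roughly like the following. This is only a sketch: it assumes a reserved.get() lookup that returns the entry with its root flag, and the actual diff may differ.

rules.isReserved = function isReserved(nameHash, height, network) {
  assert(Buffer.isBuffer(nameHash));
  assert((height >>> 0) === height);
  assert(network && network.names);

  if (network.names.noReserved)
    return false;

  // Sketch: look up the entry so we can inspect its root flag.
  const item = reserved.get(nameHash);

  if (!item)
    return false;

  // gTLDs (root-flagged entries) never stop being reserved.
  if (item.root)
    return true;

  // Everything else expires at the end of the claim period, as before.
  return height < network.names.claimPeriod;
};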

@0xhaven

0xhaven commented Oct 10, 2019

Thanks @pinheadmz, I knew there were different flags, but not how they were all handled.

I think it would be in the best interest of user safety to not pretend to handle ICANN collisions with this reserved list (it clearly falls short, and always will). Putting a 4-year delay on some aspects of this problem isn't doing anyone any good (and will just delay coming up with real solutions).

The user should be able to choose (via their user-agent or DNS resolver) how to prioritize HS or ICANN names, especially when ICANN introduces new TLDs, someone registers a homograph (#255), or an HS name is serving some other deceptive or unwanted content.

@pinheadmz
Member Author

By design, these consensus rules will not be changed and it will be the responsibility of the user/application layer to handle blacklisting past the consensus layer.
