
WIP: node/http: dumpzone #280

Closed
wants to merge 4 commits into from

Conversation

tynes
Contributor

@tynes tynes commented Oct 16, 2019

Experimenting with dumping the entire Handshake tree as a zonefile. This would allow a traditional DNS server to serve the Handshake zone. A tradeoff in security for scalability.

We need to figure out how often this would be called and ensure that it's performant enough not to cause any problems with staying in consensus. It needs to be tested on a very large tree; I plan to test it on the current testnet.

Closes #152


tynes (Contributor Author) commented on the diff:

Log number of records seen here

@kilpatty
Contributor

kilpatty commented Oct 16, 2019

Quick thought: I was playing around with this, and doing some back-of-the-napkin math, I feel that rewriting the zone file on each API call will be:
A. a DoS vector
B. a lot of processing, needing to be called at least every 10 minutes.
Edit: every 360 minutes, so not as bad as I thought in terms of frequency.

For some context, we currently iterate through the tree to see how many names are "open" for the new hnscan plugin. This iteration, with the current "8041" names on testnet, was taking roughly 300ms (doing just a count). If this iteration scales linearly, then at a high number of open names in the tree, iterating through the tree will take seconds.

Possible workarounds:
A. "Memoize/cache" zone files. We could either cache the zone file on the first call OR cache the file on each new block indexed if that block has covenants which affect name state. On second thought, we would only need to cache every 36 blocks since that is the tree root interval. (We implemented this in hnscan: we memoize the count of names on each newly indexed block, and then serve that memoized value over HTTP.)
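Workaround A could look something like the following minimal sketch. `TREE_INTERVAL`, the `dumpZone` callback, and the cache shape are all illustrative, not hsd's actual API:

```javascript
// Hypothetical sketch: memoize the dumped zone per tree interval,
// so the expensive full-tree dump runs at most once per 36 blocks.
const TREE_INTERVAL = 36;

class ZoneCache {
  constructor(dumpZone) {
    this.dumpZone = dumpZone; // expensive full-tree dump function
    this.interval = -1;       // tree interval the cache was built for
    this.zone = null;
  }

  get(height) {
    const interval = Math.floor(height / TREE_INTERVAL);
    if (interval !== this.interval) {
      this.zone = this.dumpZone(); // rebuilt only on a new interval
      this.interval = interval;
    }
    return this.zone;
  }
}

// Usage: repeated calls within the same interval hit the cache.
let dumps = 0;
const cache = new ZoneCache(() => { dumps++; return 'zone data'; });
cache.get(100); // builds (interval 2)
cache.get(101); // cached (still interval 2)
cache.get(144); // new interval (4), rebuilds
console.log(dumps); // 2
```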

B. Implement the zone file code in such a way that it can be updated incrementally, rather than having to be rewritten entirely. I think this would add substantial complexity, likely requiring a new package, but in the long term I think this might be the most viable solution. This would allow us to change only the names in the zone file that have changed over the past 36 blocks.

I'm not suggesting we ditch this PR, as I think getting something working for those who want to use it is likely more important, but I just wanted to put down some of my thoughts about how this scales into the future.

@tynes tynes changed the title node/http: dumpzone WIP: node/http: dumpzone Oct 16, 2019
@0xhaven

0xhaven commented Nov 4, 2019

@tynes @chjj Do we know how this will change when RR types are removed for #125?

Also, to build a working public resolver, it would be really nice to support iterating over every HS name that has changed since a given tree interval (potentially ALL HS names) and dumping them all as zones for a normal DNS NS to serve. We've been working towards this with @mikedamm.

@tynes
Contributor Author

tynes commented Nov 5, 2019

@jacobhaven

The Resource object should handle the version internally and not leak that to the consumer of the Resource object. If the Resource has a version that does not support a particular resource record type, it should not create any resource records in the toDNS method. In the case of the code in this PR, msg.records() should return an empty iterator:

const msg = resource.toDNS(fqdn, rrtype);

// If the Resource version does not support rrtype, msg.records()
// yields nothing, so no zone entries are produced.
for (const rr of msg.records()) {
  const entry = rr.toString(); // one zonefile line per record
}

Note that this PR is naive in the sense that it uses brute force; there could be better ways to do it. Also see #125 (comment), as it is impossible to truly remove v0 serialization unless serialization versions become part of consensus.

@0xhaven

0xhaven commented Nov 5, 2019

@tynes I don't think doing this in a brute force way is really even that bad, as it's only used every 36 blocks. The problem is that you also need some way to enumerate all names that have changed since you last synced. It's not clear to me how to do that.

You can "truly remove" an old version of DNS serialization by just no longer resolving it. I doubt that simultaneously supporting multiple types of DNS zones would be a good idea. Each resolver can deprecate old ones as it sees fit.

@tynes
Contributor Author

tynes commented Nov 5, 2019

The problem you also need some way to enumerate all names that have changed since you last synced.

This is a very interesting problem to solve; I think a closer inspection of the database would be helpful here:

Note that there is also an implementation written in Golang by Oasis Labs.
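One possible shape for the enumeration problem above is to scan each block's covenants since the last sync and collect the name hashes they touch. This is a hypothetical sketch: `getBlock`, the covenant shapes, and the type names are illustrative stand-ins, not hsd's actual chain API:

```javascript
// Hypothetical: enumerate names whose state changed since a given
// height by scanning block covenants and collecting name hashes.
function namesChangedSince(chain, sinceHeight, tipHeight) {
  const changed = new Set();
  for (let h = sinceHeight + 1; h <= tipHeight; h++) {
    const block = chain.getBlock(h);
    for (const tx of block.txs) {
      for (const covenant of tx.covenants) {
        // Only covenants that alter name state matter for the zone.
        if (covenant.type !== 'NONE')
          changed.add(covenant.nameHash);
      }
    }
  }
  return changed;
}

// Usage with a stubbed two-block chain:
const chain = {
  getBlock: (h) => ({
    txs: [{
      covenants: [
        { type: h === 1 ? 'REGISTER' : 'NONE', nameHash: 'abcd' + h }
      ]
    }]
  })
};
console.log([...namesChangedSince(chain, 0, 2)]); // [ 'abcd1' ]
```

Whether a real implementation would rescan blocks or maintain an index of changed names per tree interval is exactly the database question raised above.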


You can "truly remove" an old version of DNS serialization by just no longer resolving it.

This is an inherently subjective removal. The only "objective truth" is found through Nakamoto Consensus + Proof of Work, assuming minority byzantine miners (51% attack) and at least one honest peer (eclipse attack). Whatever is on chain is on chain. Removing the ability to resolve (encode/decode the on-chain data for) a particular Resource version would effectively "brick" the use of the name in DNS. The price of exit is pretty low, assuming that users can run their own full nodes or it's relatively easy for new service providers to run a set of full nodes. Users will exit to the resolvers that best serve them.

@pinheadmz
Member

Closing this to focus development on #534. Thank you @tynes! Handsome!

@pinheadmz pinheadmz closed this Mar 13, 2021