WIP: node/http: dumpzone #280
Conversation
Log number of records seen here
Quick thought: I was playing around with this, and just doing some back-of-the-napkin math, I feel that rewriting the zone file on each API call will be expensive. For some context, we currently iterate through the tree to see how many names are "open" for the new hnscan plugin. This iteration, with the current 8041 names on testnet, was taking roughly 300ms (doing just a count). If this iteration scales linearly, then at a high number of open names in the tree, iterating through the tree will take seconds.

Possible workarounds:

B. Implement the zone file code in such a way that it can be updated incrementally, rather than having to be rewritten entirely. I think this would add substantial complexity (likely requiring a new package), but in the long term I think this might be the most viable solution. This would allow us to change only the names in the zone file that have changed over the past 36 blocks.

I'm not suggesting we ditch this PR, as I think getting something working for those who want to use it is likely more important, but I just wanted to put down some of my thoughts about how this scales into the future.
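To make workaround B concrete, here is a minimal sketch of what an incrementally maintained zone could look like. Everything here (`IncrementalZone`, `renderZoneLines`, the shape of the changed-names input) is hypothetical and not an existing hsd API; it just shows that per-interval updates can be O(names changed) while a full dump stays O(total names).

```javascript
// Hypothetical sketch of workaround B: keep the zone in memory and
// apply only the names that changed in each tree interval, instead of
// rebuilding the whole file from the tree every 36 blocks.
class IncrementalZone {
  constructor() {
    this.lines = new Map(); // name -> array of zone-file lines
  }

  // `changedNames` maps name -> resource (or null if the name went away).
  // `renderZoneLines` is an assumed helper that turns a resource into lines.
  update(changedNames, renderZoneLines) {
    for (const [name, resource] of changedNames) {
      if (resource == null)
        this.lines.delete(name); // expired, revoked, etc.
      else
        this.lines.set(name, renderZoneLines(name, resource));
    }
  }

  // Serializing the full zone is still O(total names), but it only
  // happens when a dump is actually requested.
  toString() {
    let out = '';
    for (const lines of this.lines.values())
      out += lines.join('\n') + '\n';
    return out;
  }
}
```

The write to disk (or the HTTP response) could then reuse this in-memory state rather than re-walking the tree on every call.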
@tynes @chjj Do we know how this will change when RR types are removed for #125? Also, to build a working public resolver, it would be really nice to support iterating every HS name that has changed since a given tree interval (potentially ALL HS names) and dumping them all as zones for a normal DNS NS to serve. We've been working towards this with @mikedamm.
@jacobhaven The dump currently does something like:

```js
const msg = resource.toDNS(fqdn, rrtype);

for (const rr of msg.records()) {
  const entry = rr.toString();
}
```

Note that this PR is naive in the sense that it uses brute force; there could be better ways to do it. Also see #125 (comment), as it is impossible to truly remove v0 serialization unless serialization versions become part of consensus.
@tynes I don't think doing this in a brute-force way is really even that bad, as it's only used every 36 blocks. The problem is that you also need some way to enumerate all names that have changed since you last synced. It's not clear to me how to do that. You can "truly remove" an old version of DNS serialization by just no longer resolving it. I doubt that simultaneously supporting multiple types of DNS zones would be a good idea. Each resolver can just deprecate old ones as it sees fit.
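The enumeration problem above could be attacked by recording which names changed in each tree interval as blocks are connected. The sketch below is purely illustrative (none of these names exist in hsd); it shows the shape of an index that would let a follower ask "what changed since interval N" instead of re-dumping the whole tree.

```javascript
// Hypothetical per-interval change index: record names as their tree
// entries are updated, then query deltas since the last sync point.
class ChangeLog {
  constructor() {
    this.byInterval = new Map(); // treeInterval -> Set of names
  }

  // Call this while connecting blocks, whenever a name's data changes.
  record(treeInterval, name) {
    if (!this.byInterval.has(treeInterval))
      this.byInterval.set(treeInterval, new Set());
    this.byInterval.get(treeInterval).add(name);
  }

  // All names changed in intervals strictly after `since`.
  changedSince(since) {
    const names = new Set();
    for (const [interval, set] of this.byInterval) {
      if (interval > since) {
        for (const name of set)
          names.add(name);
      }
    }
    return names;
  }
}
```

Persisting this index (e.g. in the node's database, keyed by interval) would also cover the public-resolver use case mentioned earlier: serve only the delta zones since a given interval.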
This is a very interesting problem to solve. I think a closer inspection of the database would be helpful in solving this. Note that there is also an implementation written in Golang by Oasis Labs.
This is an inherently subjective removal. The only "objective truth" is found through Nakamoto Consensus + Proof of Work, assuming minority byzantine miners (51% attack) and at least one honest peer (eclipse attack). Whatever is on chain is on chain. Removing the ability to resolve (encode/decode the on-chain data) a particular serialization version doesn't change that.
Experimenting with dumping the entire Handshake tree as a zone file. This would allow a traditional DNS server to serve the Handshake zone, trading some security for scalability.

We need to figure out how often this would be called and ensure that it's performant enough not to cause any problems with staying in consensus. It needs to be tested on a very large tree; I'm planning on testing it on the current testnet.
Closes #152
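Since the performance question above hinges on how long a full dump actually takes on a large tree, a small timing harness helps. `dumpZone` below is a stand-in for whatever this PR's handler ends up calling; the harness itself is just a generic sketch, not part of hsd.

```javascript
// Rough timing harness for the dump path. `dumpZone` is a placeholder
// for the real (async) dump function; numbers are in milliseconds.
async function benchDump(dumpZone, runs = 5) {
  const times = [];
  for (let i = 0; i < runs; i++) {
    const start = process.hrtime.bigint();
    await dumpZone();
    const end = process.hrtime.bigint();
    times.push(Number(end - start) / 1e6); // ns -> ms
  }
  times.sort((a, b) => a - b);
  return {
    median: times[Math.floor(times.length / 2)],
    max: times[times.length - 1]
  };
}
```

Running this against testnet's current tree (and against a synthetically grown one) would tell us whether the brute-force dump stays comfortably inside a 36-block window.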