HeaderSync optimization (#1372) #1400
Conversation
This is a great performance improvement! However, I'd like to propose a couple of changes:
Please look at my comment in #1402. Regarding:
Why do you have this feeling? The new code isn't related to any buffered reads.
I just realized it's better to split this out. Will remove it from this PR and submit a new PR for it.
…for security review
Sounds good, thank you for removing it.
And thank you for the great work!
Thanks for the good review!
Just meant to amend my comments first, as I realized I made a mistake. Point 3 is actually incorrect, as we allow Cuckoo sizes greater than 30 now. So the proof-of-work part of the header could be 30*42 bits, 31*42 bits, 32*42 bits, etc. Sorry for that.
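For a quick sanity check of that arithmetic, here is a standalone sketch (nothing in it is from the PR itself):

```rust
// The PoW part of a header is 42 proof nonces, each `cuckoo_size`
// bits wide, so its bit-length grows with the Cuckoo size.
fn main() {
    for cuckoo_size in 30u32..=36 {
        println!("Cuckoo{}: {} bits", cuckoo_size, cuckoo_size * 42);
    }
}
```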
p2p/src/protocol.rs (outdated)

```diff
@@ -39,6 +41,60 @@ impl Protocol {
     pub fn new(adapter: Arc<NetAdapter>, addr: SocketAddr) -> Protocol {
         Protocol { adapter, addr }
     }
+
+    /// Read the Headers Vec size from the underlying connection, and calculate the header_size of one Header
+    pub fn headers_header_size(conn: &mut TcpStream, msg_len: u64) -> Result<u64, Error> {
```
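The function body is elided in the diff; here's a minimal sketch of the idea, assuming the Headers payload begins with a u16 header count (the prefix size is an assumption, as is the even split across headers, which is exactly what the review below challenges):

```rust
use std::io::{self, Read};
use std::net::TcpStream;

/// Sketch: derive the size of a single header from the total message
/// length, assuming a 2-byte count prefix and equally sized headers.
fn headers_header_size(conn: &mut TcpStream, msg_len: u64) -> io::Result<u64> {
    let mut count_buf = [0u8; 2];
    conn.read_exact(&mut count_buf)?;
    let count = u16::from_be_bytes(count_buf) as u64;
    if count == 0 {
        return Err(io::Error::new(io::ErrorKind::InvalidData, "empty Headers message"));
    }
    // Remaining payload, split evenly across `count` headers.
    Ok((msg_len - 2) / count)
}
```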
Done regarding the comments about moving it out.
Have you seen my previous comment? It looks like the header size calculation will fail in the presence of headers with different Cuckoo sizes.
Sorry, I don't know why I thought that wouldn't have an impact :) (it could, given what you said earlier). A Headers vector containing different-sized headers is out of control in both this new code and the old code; I can't imagine how it would be possible to deserialize a vector whose elements each have a different size. I will read more code to confirm. If it's an existing issue, that's a major bug and is better handled in an independent new PR.
The good thing: after reading more code, that part checks out. Now the bad thing: a very upsetting finding; we don't have a fixed header size, nor any per-header size field. This could make the optimization infeasible! Any suggestions? Perhaps it's too late to add one.
It's too late and in theory not needed, as each header has enough information to get deserialized (the cuckoo size is in them). One idea would be to read enough bytes for, say, Cuckoo34 headers. You'll overshoot, but then you can reuse the unused bytes in the next iteration, placing them first in the buffer.
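A minimal sketch of that approach; the `Header` type, the deserializer, and `MAX_HEADER_SIZE` are all hypothetical stand-ins:

```rust
use std::io::{self, Read};

// Hypothetical stand-in for the real header type.
struct Header;

/// Hypothetical: try to deserialize one header from the front of `buf`.
/// Returns the header and the number of bytes consumed, or None if the
/// buffer doesn't yet hold a complete header.
fn deserialize_header(buf: &[u8]) -> Option<(Header, usize)> {
    let _ = buf;
    unimplemented!()
}

/// Worst-case serialized header size, e.g. sized for Cuckoo34 (assumed value).
const MAX_HEADER_SIZE: usize = 400;

fn read_headers<R: Read>(conn: &mut R, count: usize) -> io::Result<Vec<Header>> {
    let mut buf: Vec<u8> = Vec::new();
    let mut headers = Vec::with_capacity(count);
    while headers.len() < count {
        // Read up to one worst-case header; this may overshoot.
        let mut tmp = [0u8; MAX_HEADER_SIZE];
        let n = conn.read(&mut tmp)?;
        if n == 0 {
            return Err(io::Error::new(io::ErrorKind::UnexpectedEof, "connection closed"));
        }
        buf.extend_from_slice(&tmp[..n]);
        // Drain as many complete headers as the buffer now holds; any
        // unused tail bytes stay at the front for the next iteration.
        while headers.len() < count {
            match deserialize_header(&buf) {
                Some((h, used)) => {
                    headers.push(h);
                    buf.drain(..used);
                }
                None => break, // need more bytes
            }
        }
    }
    Ok(headers)
}
```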
Sounds like a feasible solution, but :) it's a little bit ugly and needs a rewritten deserialize function. Even so, I agree it could be too late to add one. When a Grin node receives a block, we currently receive the whole block from the network and only then start block validation; if we find the block was already received before (or its header validation fails), we throw the block away. That could be a hole for attacks. A better solution would be to receive its header first and validate that header; if the block isn't needed, we never have to receive the much bigger body part. Does this make sense, and is it worth a hard fork? And remember we have a security bug in #1358, which also needs a hard fork. Anyway, up to your decision :)
You're arguing for header-first propagation. We already have compact blocks, which are pretty much the same, maybe better. By the way, none of these are strictly hard forks (nothing forks); they're just breaking protocol changes that can lead to network partitioning if not properly handled. Also, we only receive a single block/header in those cases, so the total size is the same as the message size. I may be mistaken, but it doesn't seem the lack of a header size requires rewriting the deserialization. You can just use the same deserialization; it'll do the right thing.
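A sketch of that point: since the deserializer learns the cuckoo size from the header's fixed fields, it knows how many PoW bytes follow, so a plain reader position tells you how many bytes were consumed (the names here are hypothetical, not grin's actual API):

```rust
use std::io::{self, Cursor, Read};

// Hypothetical stand-in for the real header type.
struct BlockHeader;

/// Hypothetical: read one header from any `Read`. The fixed fields come
/// first and contain the cuckoo size, which determines how many
/// proof-of-work bytes to read next.
fn read_header<R: Read>(r: &mut R) -> io::Result<BlockHeader> {
    let _ = r;
    unimplemented!()
}

/// Deserialize one header from the front of `buf`; the cursor position
/// afterwards is exactly the number of bytes that header occupied.
fn consume_one(buf: &[u8]) -> io::Result<(BlockHeader, usize)> {
    let mut cur = Cursor::new(buf);
    let header = read_header(&mut cur)?;
    Ok((header, cur.position() as usize))
}
```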
Complete the enhancement to support variable BlockHeader sizes in one Headers message, from Cuckoo30 to Cuckoo36.
core/src/core/block.rs (outdated)

```diff
@@ -265,6 +299,20 @@ impl BlockHeader {
     pub fn total_kernel_offset(&self) -> BlindingFactor {
         self.total_kernel_offset
     }
+
+    /// Ser size of this header
+    pub fn size_of_ser(&self) -> usize {
```
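The method body is elided in the diff; a plausible sketch of such a calculation, assuming the 42-nonce PoW layout discussed earlier plus a fixed-size remainder (the 169-byte figure is an assumption, not grin's real layout):

```rust
/// Sketch: serialized size in bytes of a header whose proof-of-work
/// uses `cuckoo_size`-bit edges. The fixed-part size is assumed.
fn size_of_ser(cuckoo_size: usize) -> usize {
    const FIXED_PART: usize = 169; // non-PoW fields (assumed size)
    let proof_bits = cuckoo_size * 42; // 42 nonces, cuckoo_size bits each
    FIXED_PART + (proof_bits + 7) / 8 // round PoW bits up to whole bytes
}
```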
Minor comment but looking good otherwise! Restarted the failing test to see if it's just a transient issue.
Thanks for approving! And the function name has been changed. The Travis CI panic is still there, though.
I think you're running into an issue with the fast sync test that I fixed a few days ago. Can you merge master back in to double-check?
The Linux tests also pass locally here.
Then, on checking further:
Please let me think out a solution that's compatible with this.
…koo10 will be used for AutomatedTesting chain
Starting to lose track, with other PRs also showing Travis failures. Are there still test regressions caused by this PR? I've restarted the tests many times but they always fail (while master mostly passes).
@ignopeverell on my machine I see a PR issue:
@ignopeverell please let me finish #1434 first, to fix that annoying false alarm.
…ulate serialized_size_of_header
Finally! The last fix works and the tests pass. Now this PR is ready to merge.
Indeed, finally! Sorry it was so difficult, but thanks for another great PR!
With this optimization for #1372, the HeaderSync procedure runs about twice as fast as before on my MacBook Air (early 2015): roughly 3 minutes, compared to 6 minutes before: