Caches the reply to request tx to avoid redundant processing #1078

Merged
merged 2 commits into from Oct 21, 2018

alon-e commented Oct 18, 2018

Description

Analyzing what happens when a new node tries to sync:

Let N be the synced node and n the new node.

N                      n
    <- ?|h(tx)   -      //n requests tx
    - tx|h(?)   ->      //N replies with tx
    <- tx|h(tx2) -      //n, which just found out about `tx`, broadcasts it to everyone (and requests something else)

So in this case N will analyze the last packet, i.e. hash it to check the MWM etc.
This is redundant here, and a needless burden on N.
In my test (8 cores), both nodes consumed 400% CPU,
and profiling showed 85% of the CPU time is spent in Curl.transform.

Proposed solution:
The tx that n sends back to N can be anticipated: N just sent that very tx to n.
So my proposal is for N to add tx to its recentSeenBytes list as if it had already received it.
This makes N skip the hashing and validation.
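The idea above can be sketched as follows. This is a minimal illustration, not IRI's actual code: the class, the bounded `recentSeenBytes` map, and the method names `sendReply`/`needsProcessing` are all assumptions; the point is that the sender pre-populates the seen-cache with the bytes of its own reply, so the echoed broadcast is dropped before any expensive hashing.

```java
import java.nio.ByteBuffer;
import java.security.MessageDigest;
import java.util.LinkedHashMap;
import java.util.Map;

public class ReplyCacheSketch {
    static final int CACHE_SIZE = 5000;

    // Bounded LRU cache keyed by a cheap SHA-256 digest of the raw tx bytes.
    static final Map<ByteBuffer, Boolean> recentSeenBytes =
            new LinkedHashMap<ByteBuffer, Boolean>(CACHE_SIZE, 0.75f, true) {
                @Override
                protected boolean removeEldestEntry(Map.Entry<ByteBuffer, Boolean> eldest) {
                    return size() > CACHE_SIZE;
                }
            };

    static ByteBuffer digestOf(byte[] txBytes) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        return ByteBuffer.wrap(md.digest(txBytes));
    }

    // When N replies with txBytes, record them as already seen, so the
    // inevitable re-broadcast from n is skipped.
    static void sendReply(byte[] txBytes) throws Exception {
        recentSeenBytes.put(digestOf(txBytes), Boolean.TRUE);
        // ... actually transmit the packet here ...
    }

    // On receive: only do the expensive Curl hashing / MWM validation
    // if these exact bytes were not recently sent or seen.
    static boolean needsProcessing(byte[] receivedBytes) throws Exception {
        return recentSeenBytes.putIfAbsent(digestOf(receivedBytes), Boolean.TRUE) == null;
    }

    public static void main(String[] args) throws Exception {
        byte[] tx = "some-transaction-bytes".getBytes();
        sendReply(tx);                                             // N replies to n
        System.out.println(needsProcessing(tx));                   // false: the echo is skipped
        System.out.println(needsProcessing("new-tx".getBytes()));  // true: unseen bytes still processed
    }
}
```

Keying on a cheap digest of the raw bytes (rather than the Curl hash) is what saves the work: the cache lookup happens before any Curl.transform call.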


The result is dramatic:

  1. CPU consumption: N ~75%; n ~550%.
  2. Profiling: 27% transform, 22% rock.get, 15% UDP socket - much more balanced.
  3. n syncs 150% faster.
  4. While the toReply queue still gets high, the toProcess queue stays low.

Fixes #586

Type of change

  • Enhancement (a non-breaking change which adds functionality)

How Has This Been Tested?

  • Tested with 2 nodes - N synced and n a fresh node - comparing CPU usage and hotspots.

Checklist:

  • My code follows the style guidelines for this project
  • I have performed a self-review of my own code
  • I have commented my code, particularly in hard-to-understand areas
  • I have made corresponding changes to the documentation
  • I have added tests that prove my fix is effective or that my feature works
  • New and existing unit tests pass locally with my changes
A review suggestion extracted the inline digest computation into a helper:

```java
// Before:
MessageDigest digest = MessageDigest.getInstance("SHA-256");
digest.update(receivedData, 0, TransactionViewModel.SIZE);
ByteBuffer byteHash = ByteBuffer.wrap(digest.digest());

// After:
ByteBuffer digest = getBytesDigest(receivedData);
```
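A self-contained sketch of what such a `getBytesDigest` helper could look like. The class name, the `SIZE` constant (standing in for `TransactionViewModel.SIZE`, assumed here to be the raw transaction byte length), and the demo in `main` are all illustrative assumptions; only the three digest lines mirror the snippet above.

```java
import java.nio.ByteBuffer;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class DigestHelper {
    // Assumption: fixed raw transaction length; stands in for TransactionViewModel.SIZE.
    static final int SIZE = 1604;

    // Digest only the transaction portion of the packet, ignoring any
    // trailing bytes (e.g. the appended request hash).
    public static ByteBuffer getBytesDigest(byte[] receivedData) throws NoSuchAlgorithmException {
        MessageDigest digest = MessageDigest.getInstance("SHA-256");
        digest.update(receivedData, 0, SIZE);
        return ByteBuffer.wrap(digest.digest());
    }

    public static void main(String[] args) throws Exception {
        byte[] packet = new byte[SIZE + 46]; // tx bytes plus a trailing request hash
        packet[0] = 1;
        ByteBuffer a = getBytesDigest(packet);
        packet[SIZE] = 9;                    // mutate only the trailing bytes
        ByteBuffer b = getBytesDigest(packet);
        System.out.println(a.equals(b));     // true: the digest covers only the tx part
    }
}
```

Returning a `ByteBuffer` rather than `byte[]` is convenient because `ByteBuffer` implements content-based `equals`/`hashCode`, so the digest can be used directly as a map key in a seen-cache.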


@GalRogozinski GalRogozinski merged commit 0dbab4a into iotaledger:dev Oct 21, 2018

2 checks passed:

  • Codacy/PR Quality Review - Up to standards. A positive pull request.
  • continuous-integration/travis-ci/pr - The Travis CI build passed

@jakubcech jakubcech added this to the Hogwart's milestone Oct 22, 2018
