
[optimization] for Streamer.js #92

Closed

4Ykw opened this issue Aug 20, 2021 · 3 comments

4Ykw commented Aug 20, 2021

I have been running and experimenting with different parameters on the streamer plugin for quite some time, and being physically distant from most nodes gives me the opportunity to test them in a more realistic scenario.

For almost six months these have been the best combination I have found, compared with the current 100-block buffer and a single connection to each node API. Of course, this raises the connection count, but most of the time (once the node is in sync) the extra connections are not actually used. Nothing currently prevents anyone from configuring this differently; I would be keen to see some of you test these parameters until we get more robust code. In my view this will at least help less experienced node operators.

$ git diff Streamer.js
diff --git a/plugins/Streamer.js b/plugins/Streamer.js
index 60da1d0..e13ca19 100644
--- a/plugins/Streamer.js
+++ b/plugins/Streamer.js
@@ -31,7 +31,7 @@ let updaterGlobalPropsHandler = null;
 let lastBlockSentToBlockchain = 0;
 
 // For block prefetch mechanism
-const maxQps = 1;
+const maxQps = 3;
 let capacity = 0;
 let totalInFlightRequests = 0;
 const inFlightRequests = {};
@@ -386,7 +386,7 @@ const throttledGetBlock = async (blockNumber) => {
 
 
 // start at index 1, and rotate.
-const lookaheadBufferSize = 100;
+const lookaheadBufferSize = 10;
 let lookaheadStartIndex = 0;
 let lookaheadStartBlock = currentHiveBlock;
 let blockLookaheadBuffer = Array(lookaheadBufferSize);

Views? Feedback?

ervin-lemark commented

I am keen on trying this ... on Monday.

Can you translate the diff to plain English, please?

TNX!

4Ykw (Author) commented Aug 23, 2021

The changes above affect two variables:

  1. maxQps - the maximum number of queries (connections) per target node per second.
  2. lookaheadBufferSize - the maximum number of blocks to request ahead before starting to process them. In my view, reducing it increases CPU usage but also improves responsiveness. The increase in CPU usage was not very noticeable in my case, although I can't quantify it.

The end result was that, when nodes were behaving inconsistently or responding slowly, the impact on my node was smaller.
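Roughly, the two knobs interact like this (a minimal sketch with assumed names, not the actual Streamer.js implementation; getBlockFromNode is a hypothetical stand-in for the plugin's RPC call):

// Per-second request budget, refilled to maxQps once a second.
const maxQps = 3;               // max block requests per target node per second
const lookaheadBufferSize = 10; // how many blocks to request ahead of processing

let capacity = maxQps;
setInterval(() => { capacity = maxQps; }, 1000);

const sleep = ms => new Promise(resolve => setTimeout(resolve, ms));

// Fill the lookahead buffer with pending block requests without exceeding
// the per-second budget; the consumer later awaits these promises in order.
async function prefetch(startBlock, getBlockFromNode) {
  const buffer = new Array(lookaheadBufferSize);
  for (let i = 0; i < lookaheadBufferSize; i += 1) {
    while (capacity <= 0) await sleep(50); // throttle: wait for the next refill
    capacity -= 1;
    buffer[i] = getBlockFromNode(startBlock + i); // fire the request, keep the promise
  }
  return buffer;
}

In this sketch, the old defaults (maxQps = 1, lookaheadBufferSize = 100) queue 100 block requests at one per second on a cold start, while the proposed values queue 10 at up to three per second, which is roughly why slow or inconsistent nodes hold up less work.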

4Ykw (Author) commented Nov 15, 2022

Let's close this issue, since a lot has changed since I last tested this.

Currently, raising maxQps to 2 or 3 might still help, but beyond that it is just a nuisance for the HIVE nodes being targeted. lookaheadBufferSize no longer has any impact after HF26.

4Ykw closed this as completed Nov 15, 2022