network: batch transactions on broadcast #681
Conversation
Codecov Report
@@ Coverage Diff @@
## master #681 +/- ##
=========================================
- Coverage 65.22% 65.03% -0.2%
=========================================
Files 125 125
Lines 10743 10775 +32
=========================================
Hits 7007 7007
- Misses 3457 3489 +32
Partials 279 279
Continue to review full report at Codecov.
Codecov Report
@@ Coverage Diff @@
## master #681 +/- ##
==========================================
- Coverage 65.96% 65.72% -0.25%
==========================================
Files 128 128
Lines 10992 11033 +41
==========================================
Hits 7251 7251
- Misses 3459 3500 +41
Partials 282 282
Continue to review full report at Codecov.
Some results from benchmarking. Noticeable change: after increasing the mempool size to 500k, this bench showed interesting results. Before the patch (on the current
Because transactions are iterated in increasing order, we can filter the slice in place.
From what I see in my testing, RPS is improved, and block generation changes a bit in that it tends to produce bigger and bigger blocks over time. At the same time I don't see any clear negative effect on TPS, so even if it only improves RPS it's a good change.
Closes #665.