replace absolute sleep with conditional wait #5971
Conversation
I like the simpler nature of this patch, but doesn't this mean that the transaction trickling stuff for privacy turns itself off when the network is sufficiently busy?
Agree. If we want the trickling behaviour still, we should make it conditional on sufficient time having elapsed?
The trickle logic is essentially completely broken as it is.
Is there any substantial change in CPU load for a busy, listening, full node?
(sent by email in reply to Patrick Strateman, Sun, Apr 5, 2015, 6:08 PM)
I haven't measured that, but I can't see there being any substantial CPU overhead.
latency_fix both nodes: 3591 seconds
latency_fix sync/master ibd node: 4338 seconds
master sync/master ibd: 4952 seconds
That is about a 28% speedup. Awesome. Care to get started on a separate PR based on this which improves the trickle logic?
lightly-tested-ACK
I've been playing with this a little bit and it looks pretty good. I ran under a test harness that simulated 125 clients that sent 1-2 messages/sec each (ping/pongs) in order to take a look at changes in the load profile, and it does roughly double the overhead; but on my system (a decent 2014-era consumer PC) I only see about 6% CPU utilization jumping to about 12%, so it's probably negligible. (This doubling seems pretty constant even under much higher load, so something to keep in mind for the future.) I'm also playing with running this as a listening node with so-far, so-good results. These sorts of changes tend to scare me a bit as they can have nonobvious network effects, so I'm going to keep playing and will report if I find anything...
We can likely back off the timeout in the future; the current value is probably excessively conservative.
Wait, wait. It looks like this pull is completely subsumed by #5989? I suppose they need to go in together anyway...
@ajweiss it's just building on it. |
@@ -1417,7 +1423,7 @@ void ThreadMessageHandler()
         }

         if (fSleep)
-            MilliSleep(100);
+            messageHandlerCondition.timed_wait(lock, boost::posix_time::microsec_clock::universal_time()+boost::posix_time::milliseconds(100));
Spaces around +
ACK. Code looks good (really tiny nit: messageHandlerCondition could be declared static).
Code changes look good to me, going to test this.
ACK. Squash please?
Squashed |
Tested ACK
351593b replace absolute sleep with conditional wait (pstratem)
benchmarks pending...