Targeted Bloom Filters #36
Conversation
Forgot to mention: this is an almost complete refactor of BuildSeededBloomFilter. The algorithm for increasing the false positive rate was changed from a linear function based on the number of mempool entries to a standard exponential growth function over time. Using time will be better going forward, since we expect tx rates to increase and mempool sizes to grow as a result.
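As a rough illustration of the change, here is a minimal C++ sketch of time-based exponential fprate growth. The function name, the base rate, and the choice of clock are assumptions (the commit only says "over time"); the 72-hour period and 0.005 cap come from a later commit in this PR.

    #include <algorithm>
    #include <cmath>
    #include <cstdint>

    // Grow the bloom filter false positive rate exponentially with elapsed
    // time instead of linearly with the number of mempool entries.
    double GetGrownFpRate(int64_t nSecondsElapsed)
    {
        const double dBaseFpRate = 0.001;          // assumed starting fprate
        const double dMaxFpRate = 0.005;           // cap, per a later commit
        const double dPeriodSeconds = 72.0 * 3600; // growth period, per a later commit

        // dBase * (dMax/dBase)^(t/T): pure exponential growth that reaches
        // dMax exactly at t == T, then stays capped.
        double dFpRate = dBaseFpRate *
            std::pow(dMaxFpRate / dBaseFpRate, nSecondsElapsed / dPeriodSeconds);
        return std::min(dFpRate, dMaxFpRate);
    }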
getnetworkinfo now shows: total thinblock bandwidth saved, % compression for inbound thinblocks, and % compression for outbound thinblocks.
Added statistics for both the thinblock response time and the block validation time.
Only update thinblock statistics when the chain has finished IBD and only when thinblocks is enabled.
getnetworkinfo now has inbound and outbound bloom filter sizes for the last 24hrs. Inbound bloom filters are for outbound thinblocks, and outbound bloom filters refer to inbound thinblocks. Added re-request rates and the number re-requested to getnetworkinfo. Also added the number of inbound/outbound xthins rather than just the percentage compression.
Total bytes saved and overall compression percentages now properly account for inbound and outbound bloom filter bytes.
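As a sketch of the accounting described here (all names illustrative, assuming the filter bytes simply add to the thinblock cost):

    #include <cstdint>

    // Overall compression counts the bloom filter bytes as part of the
    // cost of moving a thinblock, so they reduce the bandwidth saved.
    double OverallCompressionPct(uint64_t nOriginalBlockBytes,
                                 uint64_t nThinBlockBytes,
                                 uint64_t nBloomFilterBytes)
    {
        if (nOriginalBlockBytes == 0)
            return 0.0;
        uint64_t nTotalThinCost = nThinBlockBytes + nBloomFilterBytes;
        return 100.0 * (1.0 - (double)nTotalThinCost / (double)nOriginalBlockBytes);
    }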
This prevents us from adding the response time for a regular block to the thinblock statistics. Sometimes, when the preferential thinblock timer is invoked and the timer is exceeded, we end up receiving a regular block; previously we were still updating the thinblock stats in that case, which we don't want to do.
If the net message is one of the following types, move it from the back of the deque to the front, for both inbound and outbound messages: GET_XTHIN, XTHINBLOCK, THINBLOCK, GET_XBLOCKTX, XBLOCKTX.
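A minimal sketch of this priority queueing; the message struct and the queueing function are illustrative stand-ins for the actual BU code:

    #include <deque>
    #include <string>
    #include <utility>

    struct CNetMessage
    {
        std::string strCommand; // payload omitted for brevity
    };

    bool IsThinBlockCommand(const std::string& strCommand)
    {
        return strCommand == "get_xthin" || strCommand == "xthinblock" ||
               strCommand == "thinblock" || strCommand == "get_xblocktx" ||
               strCommand == "xblocktx";
    }

    // Thinblock-related messages jump to the front of the deque so block
    // propagation is not stuck behind bulk traffic; everything else queues
    // at the back as usual. The same policy applies to both the receive
    // and send deques.
    void QueueMessage(std::deque<CNetMessage>& dMsgs, CNetMessage msg)
    {
        if (IsThinBlockCommand(msg.strCommand))
            dMsgs.push_front(std::move(msg));
        else
            dMsgs.push_back(std::move(msg));
    }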
This prevents the millisleep if we haven't attempted to read data from a socket because we could not acquire a lock. This way we immediately spin around and make a second attempt, which is usually successful. Also changed the millisleep from 50ms to 5ms. This helps us speed through xthinblocks in the event one is waiting to arrive while we're still sleeping.
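A minimal sketch of the loop change, assuming an illustrative lock and loop structure rather than the actual BU socket handler:

    #include <chrono>
    #include <mutex>
    #include <thread>

    void SocketHandlerLoop(std::mutex& cs_vRecv, volatile bool& fShutdown)
    {
        while (!fShutdown) {
            bool fAttemptedRead = false;
            if (cs_vRecv.try_lock()) {
                // ... attempt to read pending data from the socket ...
                cs_vRecv.unlock();
                fAttemptedRead = true;
            }
            if (fAttemptedRead) {
                // Sleep only after an actual read attempt, and for 5ms
                // rather than the previous 50ms, so a waiting xthinblock
                // is picked up quickly.
                std::this_thread::sleep_for(std::chrono::milliseconds(5));
            }
            // If the lock couldn't be acquired, skip the sleep and spin
            // around immediately for a second attempt, which is usually
            // successful.
        }
    }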
This way we know which block the bloom filter was created for, and we can later strip the data out of the log files and associate the bloom filter size with the correct block we are requesting. Also added whether the header was fully validated at the time of the request, and added a thinblock "waiting for" message which helps to parse out performance-related information after test runs.
Also fixed capitalization in the "Requesting thinblock" message.
1) Moved the initialization of the thinblock service to init.cpp. This way no peers that connect before we receive a block will be in SendHeaders-first mode.
2) Fixed a problem where initial headers and more headers were being downloaded when a peer had an unsynced chain, and hence fewer blocks than our own chain. This would trigger an unnecessary chain of events where the remote peer would send us all of their headers.
Here we just bypass putting the returned xthinblock into the INV getdata queue and instead send it directly back with a call to SendXthinBlock(). Also, both the XTHIN and BLOOM services need to be turned on for xthins to work. We always needed both of these services, however we were only checking for the XTHIN service when in fact we need both to be on.
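A minimal sketch of the service check; NODE_BLOOM is the standard BIP111 bit, while NODE_XTHIN's exact bit position here is an assumption. (Note that a later commit in this PR removes the dependency on the BLOOM service.)

    #include <cstdint>

    static const uint64_t NODE_BLOOM = (1ULL << 2); // BIP111 bloom filter service
    static const uint64_t NODE_XTHIN = (1ULL << 4); // assumed xthin service bit

    bool PeerSupportsXthin(uint64_t nServices)
    {
        // Both services must be advertised: XTHIN for the thinblock
        // messages and BLOOM so the remote peer will accept our seeded
        // bloom filter.
        return (nServices & NODE_XTHIN) && (nServices & NODE_BLOOM);
    }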
If connect-thinblock-force is true then we have to check that this node is in fact a connect-thinblock node. When -connect-thinblock-force is true we will only download thinblocks from a peer or peers that were specified with -connect-thinblock=<ip>. This is an undocumented setting used for setting up performance testing of thinblocks, such as testing over the GFC where thinblocks need to always come from the same peer or group of peers. Also, this is a one-way street: thinblocks will flow ONLY from the remote peer to the peer that has invoked -connect-thinblock.
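For example, a test node pinned to a single thinblock source might use a bitcoin.conf along these lines (the IP is a placeholder):

    # Request thinblocks only from the named peer; this is one-way, so
    # thinblocks flow from 10.0.0.5 to this node.
    connect-thinblock=10.0.0.5
    connect-thinblock-force=1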
This is a feature in Core which XTHIN no longer relies on. Bloom filtering can be turned on or off with no effect on XTHINs, because XTHINs do not rely on the Core p2p message system to transmit and load their bloom filter. The ability to turn OFF the bloom service is something a miner could do, because of the vulnerability in Core's p2p messaging, while still retaining the ability to transmit and receive XTHINs.
If a peer attempts to make more than 20 get_xthin requests in a 10-minute period, they will be disconnected.
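A minimal sketch of such a rate limit; the class and the disconnect mechanism are illustrative, not the actual BU implementation:

    #include <cstdint>
    #include <deque>

    class CXthinRequestLimiter
    {
        std::deque<int64_t> vRequestTimes; // timestamps of recent get_xthin requests

    public:
        // Returns false once the peer exceeds 20 get_xthin requests within
        // a 10-minute window; the caller would then disconnect the peer.
        bool AddRequest(int64_t nNow)
        {
            // Drop timestamps older than 10 minutes (600 seconds).
            while (!vRequestTimes.empty() && vRequestTimes.front() < nNow - 600)
                vRequestTimes.pop_front();
            vRequestTimes.push_back(nNow);
            return vRequestTimes.size() <= 20;
        }
    };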
Inspired by @hfinger. Here we are seeding the bloom filter with a much smaller subset of the memory pool, basing that set on the transactions most likely to be mined. We include a list of high-priority txs, high-score/high-fee txs, and then a list of orphans. This helps keep bloom filters down to the smallest possible size even when the memory pool gets overrun and becomes extremely large.

Reduced the default maxlimitertxfee to 3.0. With targeted bloom filters there is no longer a danger of the mempool getting overrun and causing us to generate overly large bloom filters, so we can comfortably reduce this value and allow more low-fee transactions through.

Completely reworked the fprate growth algorithm: we now use exponential growth to adjust the fprate by time rather than by the number of txs in the mempool. A 6-hour period is used to adjust the fprate upwards whenever a bloom filter is created. Also removed the decay algorithm for the number of elements in the bloom filter, using the fprate growth algorithm only.
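A minimal sketch of the targeted seeding, assuming the Core/BU CBloomFilter API from src/bloom.h; the selection inputs are illustrative placeholders for the high-priority, high-fee, and orphan sets described above:

    #include <vector>

    #include "bloom.h"   // CBloomFilter, BLOOM_UPDATE_ALL
    #include "uint256.h" // uint256 txids

    CBloomFilter BuildTargetedBloomFilter(const std::vector<uint256>& vHighPriorityTx,
                                          const std::vector<uint256>& vHighFeeTx,
                                          const std::vector<uint256>& vOrphanTx,
                                          double dFpRate)
    {
        // Size the filter for just the targeted subset, not the whole
        // mempool; this is what keeps the filter small under load.
        unsigned int nElements =
            (unsigned int)(vHighPriorityTx.size() + vHighFeeTx.size() + vOrphanTx.size());
        CBloomFilter filter(nElements, dFpRate, 0 /* nTweak; randomized in real code */,
                            BLOOM_UPDATE_ALL);
        for (const uint256& hash : vHighPriorityTx)
            filter.insert(hash);
        for (const uint256& hash : vHighFeeTx)
            filter.insert(hash);
        for (const uint256& hash : vOrphanTx)
            filter.insert(hash);
        return filter;
    }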
This has been changed to a LogPrint message which can be viewed when debug=mempool is turned on. It is still a valuable message for debugging purposes, but it is generally a nuisance and causes the debug.log to fill up unnecessarily.
…e over. Increased the growth over 72 hours with a maximum fprate of 0.005
To see the list of evicted candidates, use the debug=evict setting.
LogPrint("net", "initial getheaders (%d) to peer=%d (startheight:%d)\n", pindexStart->nHeight, pto->id, pto->nStartingHeight); | ||
pto->PushMessage(NetMsgType::GETHEADERS, chainActive.GetLocator(pindexStart), uint256()); | ||
if (pindexStart->nHeight < pto->nStartingHeight) { // BU Bug fix for Core: Don't start downloading headers unless our chain is shorter | ||
LogPrint("net", "initial getheaders (%d) to peer=%d (startheight:%d)\n", pindexStart->nHeight, pto->id, pto->nStartingHeight); |
You're right, this does need fixing in Core (still! I first noticed the bug in 2012), but this isn't the way to do it. What if you're on the wrong chain (i.e. less work than the correct chain) but the correct chain is at a lower height? (Unlikely to happen, but possible, I think.)
I don't see much of a problem right now. But you bring up a good point.
restore legacy miner code, implement the basic subblock mining code
Create the Xthin request bloom filter by targeting the transactions in the memory pool most likely to be mined, rather than using every tx in the mempool. This allows anyone to run a mempool as large as they like while keeping the average bloom filter size down to between 2 and 3 KB.