core/bloombits, eth/filter: transformed bloom bitmap based log search #3749
This PR optimizes log searching by creating a data structure (BloomBits) that makes it cheaper to retrieve bloom filter data relevant to a specific filter. When searching in a long section of the block history, we are checking three specific bits of each bloom filter per address/topic. In order to do that, currently we read/retrieve a cca. 500 byte block header for each block. The implemented structure optimizes this by a "bitwise 90 degree rotation" of the bloom filters. Blocks are grouped into sections (SectionSize is 4096 blocks at the moment), BloomBits[bitIdx][sectionIdx] is a 4096 bit (512 byte) long bit vector that contains a single bit of each bloom filter from the block range [sectionIdx*SectionSize ... (sectionIdx+1)*SectionSize-1]. (Since bloom filters are usually sparse, a simple data compression makes this structure even more efficient, especially for ODR retrieval.) By reading and binary AND-ing three BloomBits sections, we can filter for an address/topic in 4096 blocks at once ("1" bits in the binary AND result mean bloom matches).
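A minimal sketch of the rotation and matching idea (not the PR's actual code; the bit ordering and helper names are assumptions for illustration):

```go
package bloombits

const (
	BloomBitLength = 2048 // bits per bloom filter (one 256-byte bloom per block)
	SectionSize    = 4096 // blocks per section
)

// rotateSection "rotates" the bloom filters of one section: the input is one
// 256-byte bloom per block, the output is one 512-byte bit vector per bloom
// bit index, where bit i of vector b is bit b of block i's bloom filter.
func rotateSection(blooms [][]byte) [][]byte {
	bits := make([][]byte, BloomBitLength)
	for i := range bits {
		bits[i] = make([]byte, SectionSize/8)
	}
	for block, bloom := range blooms {
		for bit := 0; bit < BloomBitLength; bit++ {
			if bloom[bit/8]&(1<<uint(7-bit%8)) != 0 {
				bits[bit][block/8] |= 1 << uint(7-block%8)
			}
		}
	}
	return bits
}

// matchSection ANDs the three bit vectors belonging to the three bloom bits
// of one address/topic; a "1" at position i marks block
// sectionIdx*SectionSize+i as a potential match.
func matchSection(v1, v2, v3 []byte) []byte {
	res := make([]byte, len(v1))
	for i := range res {
		res[i] = v1[i] & v2[i] & v3[i]
	}
	return res
}
```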
Implementation and design rationale of the matcher logic
The matcher was designed with the needs of both full and light nodes in mind. A simpler architecture would probably be satisfactory for full nodes (where the bit vectors are available in the local database), but the network retrieval bottleneck of light clients justifies a more sophisticated algorithm that tries to minimize the amount of retrieved data and return results as soon as possible. The current implementation is a pipelined structure based on input and output channels (receiving section indexes and sending potential matches). The matcher is built from sub-matchers, one for the addresses and one for each topic group. Since we are only interested in matches that every sub-matcher signals as positive, they are daisy-chained so that each sub-matcher only retrieves and matches the bit vectors of sections where the previous ones have found a potential match (see the sketch below). The "1" bits in the output of the last sub-matcher are returned as bloom filter matches.
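A rough sketch of the daisy-chaining shape (the type and function names here are illustrative assumptions, not the PR's actual API; each sub-matcher handles one address/topic group, where a group may contain several alternatives that are OR-ed together):

```go
package bloombits

// partialMatch carries the match bits accumulated so far for one section.
type partialMatch struct {
	section uint64 // section index
	bits    []byte // AND of all previous sub-matchers' results
}

// subMatch consumes potential matches from prev, fetches this sub-matcher's
// own bit vectors through the assumed fetch callback, ANDs them into the
// incoming match bits and only forwards sections that still contain a "1".
// alternatives holds, for each address/topic in the group, its three bloom
// bit indexes.
func subMatch(prev <-chan partialMatch, fetch func(bit uint, section uint64) []byte, alternatives [][3]uint) <-chan partialMatch {
	next := make(chan partialMatch)
	go func() {
		defer close(next)
		for m := range prev {
			// OR together the per-alternative AND results, then AND the
			// combined vector into the bits inherited from previous stages.
			orVector := make([]byte, len(m.bits))
			for _, alt := range alternatives {
				andVector := append([]byte(nil), fetch(alt[0], m.section)...)
				for _, bit := range alt[1:] {
					v := fetch(bit, m.section)
					for i := range andVector {
						andVector[i] &= v[i]
					}
				}
				for i := range orVector {
					orVector[i] |= andVector[i]
				}
			}
			nonZero := false
			for i := range m.bits {
				m.bits[i] &= orVector[i]
				if m.bits[i] != 0 {
					nonZero = true
				}
			}
			if nonZero {
				next <- m // hand the section to the next sub-matcher in the chain
			}
		}
	}()
	return next
}
```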
Light clients retrieve the bit vectors with Merkle proofs, which makes it much more efficient to retrieve batches of vectors (whose Merkle proofs share most of their trie nodes) in a single request. It is also preferable to prioritize requests based on their section index (regardless of bit index) in order to ensure that matches are found and returned as soon as possible, and in sequential order. Prioritizing and batching are realized by a common request distributor that receives individual bit index/section index requests from the fetchers and keeps an ordered list of section indexes to be requested, grouped by bit index. The distributor does not call any retrieval backend function itself; instead, it is driven by a "server" process (see serveMatcher in filter.go): NextRequest returns the next batch to be requested, and the retrieved vectors are handed back through the Deliver function. This way the bloombits package only has to implement the matching logic, while the caller retains full control over the resources (CPU/disk/network bandwidth) assigned to the task.
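A hedged sketch of what such a serving loop could look like; the exact NextRequest/Deliver signatures are assumptions based on the description above, not the PR's actual API:

```go
package bloombits

// distributor is the assumed subset of the request distributor's API that a
// serving loop needs: it hands out batched requests and accepts the results.
type distributor interface {
	// NextRequest returns one bit index and the batch of section indexes
	// currently queued for it (an empty batch means there is nothing to do).
	NextRequest() (bit uint, sections []uint64)
	// Deliver feeds the retrieved bit vectors back into the matcher pipeline.
	Deliver(bit uint, sections []uint64, vectors [][]byte)
}

// serve drives the matcher with a caller-supplied backend, in the spirit of
// serveMatcher in filter.go: the caller decides how the vectors are fetched
// (database lookup for full nodes, ODR request for light clients) and how
// much CPU/disk/network bandwidth to spend on it.
func serve(d distributor, backend func(bit uint, sections []uint64) ([][]byte, error), stop <-chan struct{}) error {
	for {
		select {
		case <-stop:
			return nil
		default:
		}
		bit, sections := d.NextRequest()
		if len(sections) == 0 {
			return nil // no more pending requests
		}
		vectors, err := backend(bit, sections)
		if err != nil {
			return err
		}
		d.Deliver(bit, sections, vectors)
	}
}
```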
I took a quick peek at this PR, but I have some issues with it in general.
It adds a lot of stuff all over
If this approach is superior to the current mip-map blooms that we have, we should simply swap them out for the one proposed in the current PR, ready for production. We already have an experimental discovery v5 that kind of works but isn't actually usable outside of the light client; we have the light client itself, which kind of works but isn't really production ready; and now this PR would mix in another layer of experimental stuff that kind of works, but not quite.
My proposal for moving forward with this PR is to:
If performance is good, we can polish the code to be production ready and enable it for everyone. There's no point in adding further experimental stuff into geth; we have way too much already.
I ran some benchmarks testing different section sizes and compression methods:
Regarding snappy vs. our own scheme, I'm leaning towards snappy tbh, for the simple practical reason that we don't have to maintain it. How large is the difference between yours and snappy? And why do you think yours is better for the light client? (I mean, we can do yours too, it just needs to be worth the extra effort.)