
server: Optimize map limiting in block manager. #259

Merged

Conversation

davecgh
Member

@davecgh commented May 30, 2016

Upstream commit 23f5914.

dajohi and others added 3 commits April 10, 2016 01:12
This prevents the node from repeatedly requesting and rejecting the
same transaction when different peers inv the same transaction.

Idea from Bitcoin Core commit 0847d9cb5fcd2fdd5a21bde699944d966cf5add9

Also, limit the number of both requested blocks and transactions.
This simply exports and adds some comments to the fields of the
BlockTemplate struct.

This is primarily being done as a step toward separating the mining
code into its own package, but it also makes sense on its own because
code that requests a new block template necessarily examines the
returned fields, which implies they should be exported.
This optimizes the way in which the maps are limited by the block
manager.

Previously the code would read a cryptographically random value large
enough to construct a hash, find the first entry larger than that value,
and evict it.

That approach is quite inefficient and could easily become a bottleneck
when processing transactions due to the need to read from a source such
as /dev/urandom and all of the subsequent hash comparisons.

Luckily, strong cryptographic randomness is not needed here. The primary
intent of limiting the maps is to control memory usage with a secondary
concern of making it difficult for adversaries to force eviction of
specific entries.

Consequently, this changes the code to make use of the pseudorandom
iteration order of Go's maps along with the preimage resistance of the
hashing function to provide the desired functionality.  It has
previously been discussed that the specific pseudorandom iteration order
is not guaranteed by the Go spec even though in practice that is how it
is implemented.  This is not a concern, however, because even if the
specific compiler doesn't implement that, the preimage resistance of the
hashing function alone is enough.

Thanks to @Roasbeef for pointing out the efficiency concerns and the
fact that strong cryptographic randomness is not necessary.
@jcvernaleo added this to the v0.1.5 milestone May 31, 2016
Upstream commit a3fa066.

Also updates all of the new references for the newly exported fields as
part of the merge commit.
@davecgh force-pushed the merge_server_improve_limitmap_efficiency branch from d63276a to c21db7a on June 1, 2016 18:30
@cjepson
Contributor

cjepson commented Jun 1, 2016

OK

@alexlyp
Member

alexlyp commented Jun 1, 2016

tACK

@alexlyp merged commit c21db7a into decred:master Jun 1, 2016
@davecgh deleted the merge_server_improve_limitmap_efficiency branch June 1, 2016 19:37