
Make mempool cache a proper LRU #2399

Closed
ValarDragon opened this issue Sep 14, 2018 · 7 comments
Labels
C:libs Component: Library C:mempool Component: Mempool T:perf Type: Performance
Comments

@ValarDragon
Contributor

Currently the mempool cache is just a linked list + a hashmap, and continued access won't change a tx's location in the linked list. It is very easy to make the cache recency-based (basically 4 pointer sets on top of what we already have): we should add a function to our linked list that allows moving a node, and then convert the mempool's cache into a full LRU. Right now, suppose I receive a tx, and then 1000 txs later I suddenly receive a lot of copies of that same original tx: in our implementation the original tx would get evicted once the list is full, but instead we can rather easily move it to the front.

@ValarDragon ValarDragon added C:mempool Component: Mempool T:perf Type: Performance C:libs Component: Library labels Sep 14, 2018
@xla xla added this to the post-launch milestone Sep 17, 2018
@bradyjoestar
Contributor

Thanks for the guidance, ValarDragon. 😄

melekes pushed a commit that referenced this issue Sep 19, 2018
@melekes
Contributor

melekes commented Sep 19, 2018

Let's also benchmark #2407 just to make sure perf. did not go down drastically.

@ValarDragon
Contributor Author

I'd expect it to be unchanged, since the mempool hashmap hashing, plus sha2'ing the tx should dominate by an order of magnitude. But always worth checking!

@melekes
Contributor

melekes commented Sep 19, 2018

I can do that tomorrow.

@melekes
Contributor

melekes commented Sep 20, 2018

BenchmarkCacheInsertTime
goos: linux
goarch: amd64
pkg: github.com/tendermint/tendermint/mempool
BenchmarkCacheInsertTime-2         10000           5396158 ns/op
PASS
ok      github.com/tendermint/tendermint/mempool        57.260s
vagrant@ubuntu-xenial:~/go/src/github.com/tendermint/tendermint/mempool$ go test -bench=BenchmarkCacheInsertTime
goos: linux
goarch: amd64
pkg: github.com/tendermint/tendermint/mempool
BenchmarkCacheInsertTime-2         10000           5741434 ns/op
PASS
ok      github.com/tendermint/tendermint/mempool        61.051s

👍 given this should occur pretty infrequently

@melekes
Contributor

melekes commented Sep 20, 2018

Merged to develop.

@melekes melekes closed this as completed Sep 20, 2018
@ValarDragon
Contributor Author

Oh wow, that is a surprisingly large difference, thanks for benchmarking it! I'm actually surprised it's that high.

4 participants