
WIP: Faster row hashing during delete #229

Closed · wants to merge 1 commit

Conversation

dergoegge (Contributor):

Benchmarks indicate that this is worth it:

$ go test -bench=RowHasher. -run=Bench
goos: darwin
goarch: amd64
pkg: github.com/mit-dci/utreexo/accumulator
BenchmarkRowHasherSequential-4           1876930               600 ns/op
BenchmarkRowHasherParallel-4             3982798               335 ns/op
PASS
ok      github.com/mit-dci/utreexo/accumulator  4.349s
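The `BenchmarkRowHasher*` names above come from the PR's test suite; as a rough, self-contained sketch (not the PR's actual code), this is how a Go benchmark might compare hashing one 64-byte parent node sequentially versus spread across GOMAXPROCS goroutines with `b.RunParallel`. The helper names here are hypothetical.

```go
package main

import (
	"crypto/sha256"
	"fmt"
	"testing"
)

// benchSequential times one 64-byte SHA-256 (one parent hash) per iteration.
func benchSequential(b *testing.B) {
	var buf [64]byte
	for i := 0; i < b.N; i++ {
		_ = sha256.Sum256(buf[:])
	}
}

// benchParallel does the same work distributed across GOMAXPROCS goroutines.
func benchParallel(b *testing.B) {
	b.RunParallel(func(pb *testing.PB) {
		var buf [64]byte
		for pb.Next() {
			_ = sha256.Sum256(buf[:])
		}
	})
}

func main() {
	// testing.Benchmark runs a benchmark function outside "go test".
	fmt.Println("sequential:", testing.Benchmark(benchSequential))
	fmt.Println("parallel:  ", testing.Benchmark(benchParallel))
}
```

Note that this micro-benchmark only measures raw hash throughput; the PR's numbers include the per-row dispatch overhead, which is what the later threshold change addresses.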

dergoegge force-pushed the faster_rowhash branch 3 times, most recently from a865012 to 345f73a on November 19, 2020 at 16:32

dergoegge (Contributor, Author):

After some investigation I found that most of the time it's not worth parallelizing the hashing, because there aren't enough hashes to do on a row. On testnet, for example, the average number of hashes per worker with 4 workers is 0.2 (up to block 240k). I could imagine this being different on mainnet, though, where blocks are full.
hashRow now only splits up the hashing if there is at least a certain number of hashes available for each worker.

Will do some profiling...
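The threshold logic described above can be sketched roughly as follows. This is a minimal, self-contained illustration, not the PR's actual `hashRow`: the `minHashesPerWorker` parameter and `parentHash` helper are hypothetical names, and the real accumulator code operates on forest positions rather than a plain slice of child pairs.

```go
package main

import (
	"crypto/sha256"
	"fmt"
	"sync"
)

// parentHash computes a parent node as the SHA-256 of its two
// concatenated children (stand-in for the accumulator's hash function).
func parentHash(left, right [32]byte) [32]byte {
	return sha256.Sum256(append(left[:], right[:]...))
}

// hashRow hashes one row's worth of (left, right) child pairs.
// If splitting the row would leave each worker with fewer than
// minHashesPerWorker hashes, it runs sequentially instead, since
// goroutine dispatch overhead would outweigh the parallelism.
func hashRow(children [][2][32]byte, workers, minHashesPerWorker int) [][32]byte {
	out := make([][32]byte, len(children))
	if len(children) < workers*minHashesPerWorker {
		// Too little work per worker: hash the row sequentially.
		for i, c := range children {
			out[i] = parentHash(c[0], c[1])
		}
		return out
	}
	// Enough work: split the row into contiguous chunks, one per worker.
	var wg sync.WaitGroup
	chunk := (len(children) + workers - 1) / workers
	for w := 0; w < workers; w++ {
		start := w * chunk
		end := start + chunk
		if end > len(children) {
			end = len(children)
		}
		if start >= end {
			break
		}
		wg.Add(1)
		go func(start, end int) {
			defer wg.Done()
			for i := start; i < end; i++ {
				out[i] = parentHash(children[i][0], children[i][1])
			}
		}(start, end)
	}
	wg.Wait()
	return out
}

func main() {
	// Tiny demo: two pairs with a high threshold, forcing the sequential path.
	var a, b [32]byte
	a[0], b[0] = 1, 2
	res := hashRow([][2][32]byte{{a, b}, {b, a}}, 4, 8)
	fmt.Printf("first parent: %x\n", res[0][:8])
}
```

Both paths must produce identical output; the threshold only trades dispatch overhead against parallel speedup, which matches the testnet observation of ~0.2 hashes per worker making the parallel path a net loss.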

@dergoegge dergoegge closed this Dec 17, 2020