Parity db migration #786
Conversation
Force-pushed from 989b324 to e3691d2
Here is the issue related to the release of …
Benchmarked changes, and it seems that performance has decreased:

Before:
$ cargo run --release -p subspace-farmer -- --base-path ... bench --plot-size 3G --write-pieces-size 6G
Finished benchmarking.
3.13G allocated for farming
2.79G actual space pledged (which is 89.25%)
344.44M of overhead (which is 10.75%)
1m 26s plotting time
34.69M/s average plotting throughput
Recommitment took 1.239412406s

After:
$ cargo run --release -p subspace-farmer -- --base-path ... bench --plot-size 3G --write-pieces-size 6G
Finished benchmarking.
3.20G allocated for farming
2.79G actual space pledged (which is 87.31%)
415.85M of overhead (which is 12.69%)
2m 24s plotting time
20.72M/s average plotting throughput
Recommitment took 1.071120161s
Hm... looks like it is using a bit more space, slower to write, but faster to read 🤔
Overall makes sense to me.
crates/subspace-farmer/src/plot/piece_index_hash_to_offset_db.rs
Force-pushed from 4b3f5ea to 61a2730
@nazar-pc I reverted the last commit with that abstraction for the commitments database. Despite that, the history is the same.
I'd wait for the upstream PR to be accepted, though, or at least address their concerns, as we can't afford to have stack overflows.
Fixes #567
This PR migrates from RocksDB to ParityDb. I decided to migrate only the piece index hash DB (copying its data over to ParityDb) and simply remove all commitments (as we can regenerate them, and it shouldn't take much time).
Code contributor checklist: