cleanup: remove redundant session counter from db #4159
Conversation
From our call: make revert of 62b6df4 more explicit and split the PR into two commits.
.await
    .map(|entry| (entry.0 .0) + 1)
Suggested change:
- .map(|entry| (entry.0 .0) + 1)
+ .map(|entry| (entry.0.0) + 1)
that space really messed up my brain
Yes, but cargo fmt insists on it.
While better than counting, I think the underlying implementation will race ahead and start reading more than just one key, making this call actually generate a heavier IO load than expected.
RocksDB should be caching pages that have been recently written to. This entry is written every session (~5 mins), right? It's hard to know the actual IO cost without benchmarking it, but if it's significant IO that would be surprising to me.
I would slap an in-memory cache on top of the whole thing, so this code doesn't even need to do any IO except the first time each session.
Let's see how it behaves.
@@ -744,8 +734,11 @@ impl ConsensusServer {
     }

 pub(crate) async fn get_finished_session_count_static(dbtx: &mut DatabaseTransaction<'_>) -> u64 {
-    dbtx.get_value(&SignedSessionOutcomeCountKey)
+    dbtx.find_by_prefix_sorted_descending(&SignedSessionOutcomePrefix)
make sure to measure it
Even if it responds here fast, it might have a higher underlying cost which will be hard to measure here. (see other comment).
Codecov Report

Additional details and impacted files:

@@            Coverage Diff             @@
##           master    #4159      +/-   ##
==========================================
+ Coverage   58.01%   58.07%   +0.05%
==========================================
  Files         192      192
  Lines       42990    42958      -32
==========================================
+ Hits        24941    24946       +5
+ Misses      18049    18012      -37

☔ View full report in Codecov by Sentry.
@elsirion We need to figure out how to backport this.
No description provided.