WritePrepared: reduce prepared_mutex_ overhead #5420
@@ -815,7 +815,7 @@ class AddPreparedCallback : public PreReleaseCallback {
                         uint64_t log_number, size_t index,
                         size_t total) override {
     assert(index < total);
-    // To reduce lock intention with the conccurrent prepare requests, lock on
+    // To reduce lock intention with the concurrent prepare requests, lock on
     // the first callback and unlock on the last.
     const bool do_lock = !two_write_queues_ || index == 0;
     const bool do_unlock = !two_write_queues_ || index + 1 == total;

Reviewer comment on the changed line: Lock contention? Also, I don't know if I'd call it lock contention, because it looks like you actually increase lock contention by holding the lock for longer periods of time (whereas before, it was shorter, but more frequent). I think you're just saving CPU?

Author reply: Perhaps I can say "reduce lock acquisition cost".
Reviewer comment: "fresh" values