fix: fix queue backpressure access #2000
Conversation
Signed-off-by: Vladislav Oleshko <vlad@dragonflydb.io>
src/facade/dragonfly_connection.cc
```cpp
queue_backpressure_ = &tl_queue_backpressure_;
if (queue_backpressure_->limit == 0) {
}
queue_backpressure_->limit = absl::GetFlag(FLAGS_pipeline_queue_limit);
```
reshuffle the lines :)
Signed-off-by: Vladislav Oleshko <vlad@dragonflydb.io>
Can we somehow run the regression tests just to be sure before we merge this?
(I know this is from a separate fork, so I don't think it's possible -- so I guess let's merge if the tests pass locally)
```cpp
if (queue_backpressure_.limit == 0)
  queue_backpressure_.limit = absl::GetFlag(FLAGS_pipeline_queue_limit);
queue_backpressure_ = &tl_queue_backpressure_;
if (queue_backpressure_->limit == 0) {
```
dead if :)
oh you fixed it :)
dead if :)
Yeah, that was a demonstration of my quick auto-replace capabilities 🤣
I tested it locally, and I can run it on my fork. But we'll see on the main pipeline either way whether it works (I mean, it should).
I had a very silly bug in my last PR, and the pytests caught it 😭 In EnsureAsyncMemoryBudget I was calculating the budget not for the intended dispatch thread but for the current one, by accessing the thread_local queue limits struct directly.
Instead, we need to access the owning thread's queue memory stats.