reef: mClockScheduler: Set priority cutoff in the mClock Scheduler #51666
Conversation
We check the priority of an op before deciding whether it gets enqueued in the high_priority_queue or the mClock scheduler queue. Instead of reading osd_op_queue_cut_off on every check, we should read it once and store the result as the priority_cutoff. This avoids inconsistent queue placement when osd_op_queue_cut_off is set to debug_random.

Fixes: https://tracker.ceph.com/issues/58940
Signed-off-by: Aishwarya Mathuria <amathuri@redhat.com>
(cherry picked from commit cf5df7c)
@amathuria this failure appeared in the test run. I suspect it comes from #51663, which was tested in the same batch, but it would still be good if you could have a look.
I will have Yuri drop #51663 from the batch and rerun this test to see if it reproduces.
Rados suite review: https://tracker.ceph.com/projects/rados/wiki/REEF#httpstrellocomc3wFHWku31758-wip-yuri-testing-2023-05-23-0909-reef-old-wip-yuri-testing-2023-05-22-0845-reef
Thanks @sseshasa for confirming the above bug is not related to mclock.
backport tracker: https://tracker.ceph.com/issues/61303
backport of #50691
parent tracker: https://tracker.ceph.com/issues/58940
this backport was staged using ceph-backport.sh version 16.0.0.6848
find the latest version at https://github.com/ceph/ceph/blob/main/src/script/ceph-backport.sh