
reef: mClockScheduler: Set priority cutoff in the mClock Scheduler #51666

Merged: 1 commit merged into ceph:reef on May 23, 2023

Conversation

amathuria (Contributor)

backport tracker: https://tracker.ceph.com/issues/61303


backport of #50691
parent tracker: https://tracker.ceph.com/issues/58940

this backport was staged using ceph-backport.sh version 16.0.0.6848
find the latest version at https://github.com/ceph/ceph/blob/main/src/script/ceph-backport.sh

@amathuria amathuria requested a review from a team as a code owner May 22, 2023 11:57
@amathuria amathuria added this to the reef milestone May 22, 2023
We check the priority of an op before deciding whether it gets enqueued in
the high_priority_queue or the mClock scheduler queue. Instead of checking
what osd_op_queue_cut_off is set to each time, we should check the cutoff
only once and store it as the priority_cutoff. This avoids inconsistent
enqueue decisions when osd_op_queue_cut_off is set to debug_random.

Fixes: https://tracker.ceph.com/issues/58940
Signed-off-by: Aishwarya Mathuria <amathuri@redhat.com>
(cherry picked from commit cf5df7c)
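For context, here is a minimal C++ sketch of the approach described in the commit message above: the scheduler resolves osd_op_queue_cut_off once at construction time and caches the result, so every subsequent enqueue decision compares against the same cutoff. This is illustrative only and does not reproduce the actual mClockScheduler code; the class name, queue members, and the priority constants used below are placeholders.

// Minimal sketch of the idea behind this backport, not the actual Ceph
// mClockScheduler code. Names and constants here are illustrative.
#include <deque>
#include <string>
#include <utility>

struct OpItem {
  unsigned priority;
  // payload omitted
};

class SchedulerSketch {
public:
  // Resolve the cutoff once, at construction time. If the config option
  // were "debug_random", it would be resolved to a concrete value here,
  // so later enqueue decisions stay consistent for the scheduler's lifetime.
  explicit SchedulerSketch(const std::string& osd_op_queue_cut_off)
    : cutoff_priority(osd_op_queue_cut_off == "high" ? 196u : 64u) {}

  void enqueue(OpItem item) {
    // Compare against the cached cutoff instead of re-reading the config,
    // whose value could differ between reads (e.g. with debug_random).
    if (item.priority >= cutoff_priority) {
      high_priority_queue.push_back(std::move(item));
    } else {
      mclock_queue.push_back(std::move(item));  // stand-in for the dmClock queue
    }
  }

private:
  const unsigned cutoff_priority;            // set once, never re-read
  std::deque<OpItem> high_priority_queue;
  std::deque<OpItem> mclock_queue;
};

The actual change in #50691 caches the value derived from osd_op_queue_cut_off as the scheduler's priority_cutoff, as the commit message states; the sketch only illustrates the pattern of deciding the cutoff once rather than per enqueue.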
@ljflores (Contributor)

@amathuria this failure appeared in the test run. I suspect it comes from #51663, which was tested in the same batch, but it would still be good if you could have a look.

https://pulpito.ceph.com/yuriw-2023-05-22_23:22:00-rados-wip-yuri-testing-2023-05-22-0845-reef-distro-default-smithi/7282843/

2023-05-23T02:04:32.160 INFO:tasks.workunit.client.0.smithi169.stdout:--> loop:  9 ~ false / 55 / 54  / true / 2023-05-23T02:04:24.674519+0000 / scrub scheduled %%% query_active
2023-05-23T02:04:32.161 INFO:tasks.workunit.client.0.smithi169.stdout:key is query_active: negation:0 # expected: true # in actual: false
2023-05-23T02:04:32.957 INFO:tasks.workunit.client.0.smithi169.stderr:dumped pgs
2023-05-23T02:04:33.191 INFO:tasks.workunit.client.0.smithi169.stdout:(
2023-05-23T02:04:33.191 INFO:tasks.workunit.client.0.smithi169.stdout:[query_epoch]=21
2023-05-23T02:04:33.191 INFO:tasks.workunit.client.0.smithi169.stdout:[query_seq]=55
2023-05-23T02:04:33.191 INFO:tasks.workunit.client.0.smithi169.stdout:[query_active]=false
2023-05-23T02:04:33.192 INFO:tasks.workunit.client.0.smithi169.stdout:[query_schedule]='scrub scheduled'
2023-05-23T02:04:33.192 INFO:tasks.workunit.client.0.smithi169.stdout:[query_schedule_at]='2023-05-24T02:04:24.674'
2023-05-23T02:04:33.192 INFO:tasks.workunit.client.0.smithi169.stdout:[query_last_duration]=2
2023-05-23T02:04:33.192 INFO:tasks.workunit.client.0.smithi169.stdout:[query_last_stamp]='2023-05-23T02:04:24.674519+0000'
2023-05-23T02:04:33.192 INFO:tasks.workunit.client.0.smithi169.stdout:[query_last_scrub]='19x15'
2023-05-23T02:04:33.192 INFO:tasks.workunit.client.0.smithi169.stdout:[query_is_future]=true
2023-05-23T02:04:33.192 INFO:tasks.workunit.client.0.smithi169.stdout:[query_vs_date]=true
2023-05-23T02:04:33.193 INFO:tasks.workunit.client.0.smithi169.stdout:[query_scrub_seq]=null
2023-05-23T02:04:33.193 INFO:tasks.workunit.client.0.smithi169.stdout:[dmp_pg_state]='active+clean'
2023-05-23T02:04:33.193 INFO:tasks.workunit.client.0.smithi169.stdout:[dmp_state_has_scrubbing]=false
2023-05-23T02:04:33.193 INFO:tasks.workunit.client.0.smithi169.stdout:[dmp_last_duration]=2
2023-05-23T02:04:33.193 INFO:tasks.workunit.client.0.smithi169.stdout:[dmp_schedule]='periodic scrub scheduled'
2023-05-23T02:04:33.193 INFO:tasks.workunit.client.0.smithi169.stdout:[dmp_schedule_at]='2023-05-24T02:04:24.674519+0000'
2023-05-23T02:04:33.194 INFO:tasks.workunit.client.0.smithi169.stdout:[dmp_is_future]=true
2023-05-23T02:04:33.194 INFO:tasks.workunit.client.0.smithi169.stdout:[dmp_vs_date]=true
2023-05-23T02:04:33.194 INFO:tasks.workunit.client.0.smithi169.stdout:[dmp_reported_epoch]=21
2023-05-23T02:04:33.194 INFO:tasks.workunit.client.0.smithi169.stdout:[dmp_seq]=54
2023-05-23T02:04:33.194 INFO:tasks.workunit.client.0.smithi169.stdout:)
2023-05-23T02:04:33.238 INFO:tasks.workunit.client.0.smithi169.stdout:--> loop:  10 ~ false / 55 / 54  / true / 2023-05-23T02:04:24.674519+0000 / scrub scheduled %%% query_active
2023-05-23T02:04:33.238 INFO:tasks.workunit.client.0.smithi169.stdout:key is query_active: negation:0 # expected: true # in actual: false
2023-05-23T02:04:33.239 INFO:tasks.workunit.client.0.smithi169.stdout:WaitingActive : wait_any_cond(): failure. Note: query-active=false
2023-05-23T02:04:33.241 INFO:tasks.workunit.client.0.smithi169.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/scrub/scrub-helpers.sh:156: wait_any_cond:  return 1
2023-05-23T02:04:33.241 INFO:tasks.workunit.client.0.smithi169.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/scrub/osd-scrub-test.sh:613: TEST_dump_scrub_schedule:  return 1

I will have Yuri drop #51663 from the batch and rerun this test to see if it reproduces.

@yuriw yuriw merged commit 8f67a19 into ceph:reef May 23, 2023
11 checks passed