reef: common: counter dump command revision #52469
Conversation
`counter dump` now emits an array of <labels,counters> pairs for each individual key. Commit includes revisions to perf counters unit test. Fixes: https://tracker.ceph.com/issues/61587 Signed-off-by: Ali Maredia <amaredia@redhat.com> (cherry picked from commit 78a1488)
Adopt the counter dump format changes in exporter for extracting the counters. Removed the condition for `PERFCOUNTER_TIME` as counter dump already does the transformation internally. Signed-off-by: Avan Thakkar <athakkar@redhat.com> (cherry picked from commit 3e8ef70)
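The new `counter dump` format described in the commit messages (an array of labels/counters pairs per section) can be consumed with a small sketch. This is a hypothetical parsing helper, not part of Ceph or the exporter; the field names are taken from the dumps posted later in this thread:

```python
import json

# Minimal sample in the labeled `counter dump` shape: each section maps
# to a list of {"labels": ..., "counters": ...} objects, one per labeled
# instance (e.g. per RBD image).
dump = json.loads("""{
  "rbd_mirror_snapshot_image": [
    {"labels": {"image": "image1", "namespace": "", "pool": "data"},
     "counters": {"snapshots": 1, "sync_bytes": 524288000}}
  ]
}""")

def iter_labeled_counters(dump):
    """Yield (section, labels, counters) for every labeled instance."""
    for section, instances in dump.items():
        for inst in instances:
            yield section, inst["labels"], inst["counters"]

for section, labels, counters in iter_labeled_counters(dump):
    print(section, labels["image"], counters["sync_bytes"])
```

An exporter-style consumer would turn each `labels` dict into metric labels and each entry of `counters` into a sample, which is roughly what the ceph-exporter change above does.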
@avanthakkar Please repeat the metrics endpoint test with two RBD images here and post the results.
@avanthakkar Do you have the results of the metrics endpoint test with two RBD images yet? Please post them here.
@avanthakkar This got merged, but I would still like you to produce the dumps and post them here.
Sorry, I've been busy with other priority tasks. I've started the build now and will paste the output for the dumps and metrics once the build is ready!
I tried to build locally on reef, but it fails for me while starting the mstart cluster.
This is the script I use to set up rbd-mirror: https://paste.sh/EqKoidUt#dwz97kAPkfZ0Djmh6wpO6iYm
It looks like
I ran the rbd-mirror daemon on reef, and here's the counter and ceph-exporter dump:

```json
"rbd_mirror_snapshot_image": [
    {
        "labels": {
            "image": "image1",
            "namespace": "",
            "pool": "data"
        },
        "counters": {
            "snapshots": 1,
            "sync_time": {
                "avgcount": 1,
                "sum": 4.418287199,
                "avgtime": 4.418287199
            },
            "sync_bytes": 524288000,
            "remote_timestamp": 1691403499.591300109,
            "local_timestamp": 1691403499.591300109,
            "last_sync_time": 4.418287199,
            "last_sync_bytes": 524288000
        }
    },
    {
        "labels": {
            "image": "image2",
            "namespace": "",
            "pool": "data"
        },
        "counters": {
            "snapshots": 1,
            "sync_time": {
                "avgcount": 1,
                "sum": 6.176838533,
                "avgtime": 6.176838533
            },
            "sync_bytes": 524288000,
            "remote_timestamp": 1691403728.669556493,
            "local_timestamp": 1691403728.669556493,
            "last_sync_time": 6.176838533,
            "last_sync_bytes": 524288000
        }
    },
    {
        "labels": {
            "image": "image2",
            "namespace": "testing",
            "pool": "data"
        },
        "counters": {
            "snapshots": 1,
            "sync_time": {
                "avgcount": 1,
                "sum": 6.078289046,
                "avgtime": 6.078289046
            },
            "sync_bytes": 524288000,
            "remote_timestamp": 1691403507.484206508,
            "local_timestamp": 1691403507.484206508,
            "last_sync_time": 6.078289046,
            "last_sync_bytes": 524288000
        }
    }
],
```
@weirdwiz The posted
I think there were more snapshots taken on schedule while I was capturing the output to paste here. Let me run it again with the schedule removed.
Updated output:

```json
"rbd_mirror_snapshot_image": [
    {
        "labels": {
            "image": "image1",
            "namespace": "",
            "pool": "data"
        },
        "counters": {
            "snapshots": 1,
            "sync_time": {
                "avgcount": 1,
                "sum": 4.889262373,
                "avgtime": 4.889262373
            },
            "sync_bytes": 524288000,
            "remote_timestamp": 1691406763.567461918,
            "local_timestamp": 1691406763.567461918,
            "last_sync_time": 4.889262373,
            "last_sync_bytes": 524288000
        }
    },
    {
        "labels": {
            "image": "image2",
            "namespace": "",
            "pool": "data"
        },
        "counters": {
            "snapshots": 2,
            "sync_time": {
                "avgcount": 2,
                "sum": 5.565792467,
                "avgtime": 2.782896233
            },
            "sync_bytes": 524288000,
            "remote_timestamp": 1691406898.801305766,
            "local_timestamp": 1691406898.801305766,
            "last_sync_time": 0.003495392,
            "last_sync_bytes": 0
        }
    },
    {
        "labels": {
            "image": "image2",
            "namespace": "testing",
            "pool": "data"
        },
        "counters": {
            "snapshots": 2,
            "sync_time": {
                "avgcount": 2,
                "sum": 4.896217627,
                "avgtime": 2.448108813
            },
            "sync_bytes": 524288000,
            "remote_timestamp": 1691406832.619604003,
            "local_timestamp": 1691406832.619604003,
            "last_sync_time": 0.003299133,
            "last_sync_bytes": 0
        }
    }
],
```
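One thing the dumps above illustrate: `counter dump` already expands time-type counters into `avgcount`/`sum`/`avgtime`, which is why the exporter-side `PERFCOUNTER_TIME` conversion could be removed. A consumer can sanity-check the invariant `avgtime == sum / avgcount` directly (values below are copied from the `image2` entry in the dump above; the check itself is just an illustrative sketch):

```python
# Time-type counter as emitted by `counter dump`: the avgtime field is
# already derived from sum/avgcount, so no extra transformation is
# needed on the exporter side.
sync_time = {"avgcount": 2, "sum": 5.565792467, "avgtime": 2.782896233}

derived = sync_time["sum"] / sync_time["avgcount"]
# Allow a small tolerance for the truncated precision in the dump.
assert abs(derived - sync_time["avgtime"]) < 1e-8
print(derived)
```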
LGTM |
backport tracker: https://tracker.ceph.com/issues/62024
backport of #51947
parent tracker: https://tracker.ceph.com/issues/61587