
mon/PGMap: add pg count for pools in the ceph df command #36819

Merged: 1 commit merged into ceph:master on Aug 31, 2020

Conversation

@vumrao (Contributor) commented on Aug 26, 2020

mon/PGMap: add pg count for pools in the ceph df command
Fixes: https://tracker.ceph.com/issues/46663

Signed-off-by: Vikhyat Umrao <vikhyat@redhat.com>
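
The patch itself is not inlined in this thread; conceptually, the change adds one more column to the pool-stats table that PGMap prints for ceph df. Below is a minimal sketch of that idea, assuming Ceph's TextTable and the per-pool pg_pool_t reachable through the OSDMap; the free function and its surroundings are illustrative, not the verbatim commit:

// Sketch only, not the actual patch: shows how a PGS column could be
// added to a TextTable-based pool listing like the one ceph df prints.
#include "common/TextTable.h"
#include "osd/OSDMap.h"

void dump_pools_with_pg_count(const OSDMap &osd_map, std::ostream &out)
{
  TextTable tbl;
  tbl.define_column("POOL", TextTable::LEFT, TextTable::LEFT);
  tbl.define_column("ID", TextTable::LEFT, TextTable::RIGHT);
  // The new column introduced by this change: current PG count per pool.
  tbl.define_column("PGS", TextTable::LEFT, TextTable::RIGHT);

  for (const auto& [pool_id, pool] : osd_map.get_pools()) {
    tbl << osd_map.get_pool_name(pool_id)
        << pool_id
        << pool.get_pg_num()   // same pg_num shown by `ceph osd pool ls detail`
        << TextTable::endrow;
  }
  out << tbl;
}

Because the value comes straight from each pool's pg_num, the PGS column in the ceph df output below lines up with pg_num in the ceph osd pool ls detail output.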

@vumrao (Contributor, Author) commented on Aug 26, 2020

From a vstart cluster. Note the new PGS column in the ceph df output; for each pool it matches the pg_num reported by ceph osd pool ls detail below.
[vikhyat@redhat build]$ bin/ceph df
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
2020-08-26T03:23:13.071-0700 7fa992b24700 -1 WARNING: all dangerous and experimental features are enabled.
2020-08-26T03:23:13.087-0700 7fa992b24700 -1 WARNING: all dangerous and experimental features are enabled.
--- RAW STORAGE ---
CLASS  SIZE     AVAIL    USED     RAW USED  %RAW USED
ssd    303 GiB  300 GiB  3.0 GiB   3.0 GiB       0.99
TOTAL  303 GiB  300 GiB  3.0 GiB   3.0 GiB       0.99
 
--- POOLS ---
POOL                   ID  PGS  STORED   OBJECTS  USED     %USED  MAX AVAIL
device_health_metrics   1    1      0 B        1      0 B      0     99 GiB
cephfs.a.meta           2   32  2.2 KiB       22   96 KiB      0     99 GiB
cephfs.a.data           3   32      0 B        0      0 B      0     99 GiB
.rgw.root               4   32  1.3 KiB        4   48 KiB      0     99 GiB
default.rgw.log         5   32  3.4 KiB      206  384 KiB      0     99 GiB
default.rgw.control     6   32      0 B        8      0 B      0     99 GiB
default.rgw.meta        7    8  2.6 KiB       14  168 KiB      0     99 GiB
[vikhyat@redhat build]$ bin/ceph osd pool ls detail
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
2020-08-26T03:23:48.676-0700 7fd0c8603700 -1 WARNING: all dangerous and experimental features are enabled.
2020-08-26T03:23:48.699-0700 7fd0c8603700 -1 WARNING: all dangerous and experimental features are enabled.
pool 1 'device_health_metrics' replicated size 3 min_size 1 crush_rule 0 object_hash rjenkins pg_num 1 pgp_num 1 autoscale_mode on last_change 10 flags hashpspool stripe_width 0 pg_num_min 1 application mgr_devicehealth
pool 2 'cephfs.a.meta' replicated size 3 min_size 1 crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 23 flags hashpspool stripe_width 0 pg_autoscale_bias 4 pg_num_min 16 recovery_priority 5 application cephfs
pool 3 'cephfs.a.data' replicated size 3 min_size 1 crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 24 flags hashpspool stripe_width 0 application cephfs
pool 4 '.rgw.root' replicated size 3 min_size 1 crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 26 flags hashpspool stripe_width 0 application rgw
pool 5 'default.rgw.log' replicated size 3 min_size 1 crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 28 flags hashpspool stripe_width 0 application rgw
pool 6 'default.rgw.control' replicated size 3 min_size 1 crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 30 flags hashpspool stripe_width 0 application rgw
pool 7 'default.rgw.meta' replicated size 3 min_size 1 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 autoscale_mode on last_change 142 lfor 0/142/140 flags hashpspool stripe_width 0 pg_autoscale_bias 4 pg_num_min 8 application rgw
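
The transcript above only exercises the plain-text table. If the structured output of ceph df (ceph df -f json) is extended the same way (an assumption, not something stated in this thread), the Formatter path would emit the same value under a per-pool key. A sketch, with the key name and helper purely illustrative:

// Assumption: a mirrored change on the Formatter (JSON/XML) path of the
// pool-stats dump; the "pgs" key and this helper are illustrative only.
#include "common/Formatter.h"
#include "osd/osd_types.h"

void dump_pool_pgs(ceph::Formatter *f, const std::string &pool_name,
                   int64_t pool_id, const pg_pool_t &pool)
{
  f->open_object_section("pool");
  f->dump_string("name", pool_name);
  f->dump_int("id", pool_id);
  f->dump_int("pgs", pool.get_pg_num());  // same value as the PGS column
  f->close_section();
}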

@neha-ojha (Member) commented
@vumrao we should add a note about this in PendingReleaseNotes

@vumrao (Contributor, Author) commented on Aug 27, 2020

Thanks @neha-ojha, done.

PendingReleaseNotes (review thread resolved)
@tchaikov merged commit 9c4d179 into ceph:master on Aug 31, 2020
@vumrao (Contributor, Author) commented on Aug 31, 2020

Thank you, @tchaikov.
