
ceph-volume: sort and align lvm list output #21812

Merged
merged 1 commit into ceph:master on Jun 1, 2018

Conversation

@thmour commented May 4, 2018

This commit targets the ceph-volume lvm list command.
The output is sorted by the osd id, so ceph operators
can find relevant information more easily. Listing
the physical devices also helps operators find the
underlying malfunctioning device of a failed osd.

Signed-off-by: Theofilos Mouratidis t.mour@cern.ch

Contributor

@alfredodeza left a comment

We are already reporting on devices; see PR #21654.

Those changes are included in your PR as well. I think the sorting makes sense, though!

@@ -199,7 +199,7 @@ def get_api_lvs():
;/dev/ubuntubox-vg/swap_1;swap_1;ubuntubox-vg

"""
-    fields = 'lv_tags,lv_path,lv_name,vg_name,lv_uuid'
+    fields = 'lv_tags,lv_path,lv_name,vg_name,lv_uuid,devices'
Contributor

This is going to cause issues on systems where logical volumes have been created from parts of different underlying PVs. For example, this sliced/lv LV, which uses both /dev/sdi and /dev/sdh, will be reported twice with the same LV and VG:

root@node9:/home/vagrant# lvs sliced/lv -o +devices
  LV   VG     Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Devices
  lv   sliced -wi-a----- 400.00m                                                     /dev/sdi(50)
  lv   sliced -wi-a----- 400.00m                                                     /dev/sdh(0)

This is problematic because it breaks uniqueness in the report. This is what the actual API call in ceph-volume would see in this case for all LVs (note how 'sliced' appears twice):

 lvs --noheadings --readonly --separator=";" -o lv_tags,lv_path,lv_name,vg_name,lv_uuid,devices
  ;/dev/ceph/test;test;ceph;Ea7Kj7-HSKe-EFrp-Ht8A-HRQV-zPMt-ktExtq;/dev/sdd(0)
  ;/dev/sliced/lv;lv;sliced;5vVQ2s-acKA-9vCP-Dafo-9c8e-d7gB-HtcTdb;/dev/sdi(50)
  ;/dev/sliced/lv;lv;sliced;5vVQ2s-acKA-9vCP-Dafo-9c8e-d7gB-HtcTdb;/dev/sdh(0)
  ;/dev/ubuntubox-vg/root;root;ubuntubox-vg;Z2gyTg-bQau-GvZb-Uqty-WHur-xcP3-v2usYb;/dev/sda5(0)
  ;/dev/ubuntubox-vg/swap_1;swap_1;ubuntubox-vg;fbzgJ6-BqEm-Rb99-ou3N-gqek-liGR-p2ZI6h;/dev/sda5(76163)
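A minimal Python sketch (hypothetical, not part of the PR or of ceph-volume) of the uniqueness problem described above: grouping the sample `lvs` output by lv_uuid shows that a multi-PV LV yields one row per PV segment, so any consumer assuming one line per LV double-counts it.

```python
# Hypothetical sketch: group the sample `lvs` output above by lv_uuid.
# Field order matches the -o list: lv_tags,lv_path,lv_name,vg_name,lv_uuid,devices
raw = """\
;/dev/ceph/test;test;ceph;Ea7Kj7-HSKe-EFrp-Ht8A-HRQV-zPMt-ktExtq;/dev/sdd(0)
;/dev/sliced/lv;lv;sliced;5vVQ2s-acKA-9vCP-Dafo-9c8e-d7gB-HtcTdb;/dev/sdi(50)
;/dev/sliced/lv;lv;sliced;5vVQ2s-acKA-9vCP-Dafo-9c8e-d7gB-HtcTdb;/dev/sdh(0)"""

by_uuid = {}
for line in raw.splitlines():
    lv_tags, lv_path, lv_name, vg_name, lv_uuid, devices = line.split(";")
    # The same lv_uuid shows up once per underlying PV segment.
    by_uuid.setdefault(lv_uuid, []).append(devices)

# sliced/lv appears once in the dict but carries two device entries.
for uuid, devs in sorted(by_uuid.items()):
    print(uuid, devs)
```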

Author

This is why I sent you the email asking about weird lvm setups like this: they make pvs reporting more complicated than the one-liner solution in this PR. I will update the PR on Monday and remove this part.

@@ -31,18 +31,18 @@ def readable_tag(tag):

def pretty_report(report):
    output = []
-    for _id, devices in report.items():
+    for _id, devices in sorted(report.items()):
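The one-line change above can be sketched in isolation (the sample data here is made up). One caveat worth noting: if the report keys are strings, `sorted()` orders them lexicographically, so osd "10" would sort before osd "2"; a numeric key function avoids that.

```python
# Sketch of the sorted() change, with hypothetical sample data.
report = {"164": ["..."], "163": ["..."], "165": ["..."]}

# Plain sorted() on string keys is lexicographic; fine for same-width ids,
# but note that "10" sorts before "2".
print([osd_id for osd_id, _ in sorted(report.items())])

# A numeric key sorts osd ids as integers regardless of digit count.
print([osd_id for osd_id, _ in sorted(report.items(), key=lambda kv: int(kv[0]))])
```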
Contributor

This looks OK to me; would you mind sharing some output so we can see how it looks?

Author

@thmour commented May 5, 2018

You can ignore the data device property. I sorted by osd id and also sorted the properties, so you can find what you want more easily. I also aligned the device values (block, db) to 14 characters.

...
===== osd.163 ======

  [block]     /dev/ceph-658c470d-438f-42fc-856f-05c66bbaa536/osd-block-c1cab1e7-1c18-41a6-b4bd-6f056272de52

      block device              /dev/ceph-658c470d-438f-42fc-856f-05c66bbaa536/osd-block-c1cab1e7-1c18-41a6-b4bd-6f056272de52
      block uuid                Bh1NrB-vU5s-t4V2-3Cxb-lI1k-61bo-9Hiywu
      cephx lockbox secret
      cluster fsid              a3ce9f2a-8ad1-4138-ad52-52180dded0e4
      cluster name              ceph
      crush device class        None
      data device               /dev/sdab(0)
      db device                 /dev/sdf2
      db uuid                   bd5aa111-7811-467b-bd46-d088f44b10e2
      encrypted                 0
      osd fsid                  c1cab1e7-1c18-41a6-b4bd-6f056272de52
      osd id                    163
      type                      block

  [db]        /dev/sdf2

      PARTUUID                  bd5aa111-7811-467b-bd46-d088f44b10e2

===== osd.164 ======

  [block]     /dev/ceph-2a68c270-da1c-4b06-9dae-a5970a427511/osd-block-e179cd26-9644-4b48-abd5-f8d85b96c567

      block device              /dev/ceph-2a68c270-da1c-4b06-9dae-a5970a427511/osd-block-e179cd26-9644-4b48-abd5-f8d85b96c567
      block uuid                OZmtBW-dlx7-OSAq-WUDd-EYQJ-DVme-UIjKB3
      cephx lockbox secret
      cluster fsid              a3ce9f2a-8ad1-4138-ad52-52180dded0e4
      cluster name              ceph
      crush device class        None
      data device               /dev/sdac(0)
      db device                 /dev/sdf3
      db uuid                   dc275f31-184c-43ea-b157-4f559e4d3114
      encrypted                 0
      osd fsid                  e179cd26-9644-4b48-abd5-f8d85b96c567
      osd id                    164
      type                      block

  [db]        /dev/sdf3

      PARTUUID                  dc275f31-184c-43ea-b157-4f559e4d3114

===== osd.165 ======

  [block]     /dev/ceph-33862198-92f0-4c69-9857-fbf731f8383a/osd-block-a8711256-7b67-4f54-8b33-cf0170d36072

      block device              /dev/ceph-33862198-92f0-4c69-9857-fbf731f8383a/osd-block-a8711256-7b67-4f54-8b33-cf0170d36072
      block uuid                EUj19C-g6Wl-5w1o-08dG-rcBK-bhNp-xKIqqH
      cephx lockbox secret
      cluster fsid              a3ce9f2a-8ad1-4138-ad52-52180dded0e4
      cluster name              ceph
      crush device class        None
      data device               /dev/sdad(0)
      db device                 /dev/sdf4
      db uuid                   e6269926-2eaa-409c-86ea-c58a4f19fd57
      encrypted                 0
      osd fsid                  a8711256-7b67-4f54-8b33-cf0170d36072
      osd id                    165
      type                      block

  [db]        /dev/sdf4

      PARTUUID                  e6269926-2eaa-409c-86ea-c58a4f19fd57

...

@thmour changed the title from "ceph-volume: include physical devices, sort output" to "ceph-volume: sort and align lvm list output" on May 5, 2018
Author

@thmour commented May 10, 2018

bump

@@ -17,7 +17,7 @@

osd_device_header_template = """

-[{type: >4}] {path}
+{type: <11} {path}
Contributor

Would you help me understand why this change is needed? Since it is all in one commit, I can't tell.

Author

@thmour commented May 17, 2018

If a device type name is longer than 4 characters, the [journal], [block], and [db] path values are not aligned properly. I guess I should update the expected command outputs in some tests, because the spacing has changed.

Contributor

I think I am missing something here. The output example in the docs has more than four chars and I don't see it misaligned: http://docs.ceph.com/docs/master/ceph-volume/lvm/list/#full-reporting

Can you clarify where the difference is?

Author

@thmour commented May 17, 2018

While the journal uuid, osd id, cluster fsid, etc. values are aligned, the [data] and [journal] labels are not. It is just a minor fix; I can exclude it if you want.

[screenshot: 2018-05-17 13-40-00]
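The alignment difference under discussion can be reproduced with a small format-spec sketch (paths here are made up): the old template right-aligns the type to only 4 characters, so `[journal]` pushes its path out of column, while left-aligning the whole bracketed token to 11 characters keeps every path in the same column.

```python
# Sketch of the template change; device paths are made up.
old_tmpl = "  [{type: >4}] {path}"   # pre-PR template: brackets in template
new_tmpl = "  {type: <11} {path}"    # post-PR template: pre-bracketed type

# Old style: "journal" overflows the width-4 field, misaligning the path.
for t in ("data", "journal", "block", "db"):
    print(old_tmpl.format(type=t, path="/dev/sdf2"))
print()
# New style: the "[type]" token is left-aligned to 11 chars, so paths line up.
for t in ("data", "journal", "block", "db"):
    print(new_tmpl.format(type="[%s]" % t, path="/dev/sdf2"))
```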

@alfredodeza
Contributor

jenkins test ceph-volume tox

@alfredodeza
Contributor

Please run the tests for ceph-volume, preferably with tox; otherwise refer to the failing-tests link in the checks.

Author

@thmour commented May 28, 2018

jenkins test ceph-volume tox

Author

@thmour commented May 29, 2018

@alfredodeza all good now?

        output.append(
            osd_list_header_template.format(osd_id=" osd.%s " % _id)
        )
        for device in devices:
            output.append(
                osd_device_header_template.format(
-                    type=device['type'],
+                    type='[{}]'.format(device['type']),
Contributor

We don't tend to use .format() with {}, because it is too verbose when replacing only one item. Could you please update this to:

type='[%s]' % device['type']

@alfredodeza
Copy link
Contributor

This looks great, only one minor nit. Thanks!

This commit targets the `ceph-volume lvm list` command.
The output is sorted by the osd id and each device's
attributes are sorted, so ceph operators can find
relevant information more easily. The devices (block, db, etc.)
are now properly aligned.

Signed-off-by: Theofilos Mouratidis <t.mour@cern.ch>
Author

@thmour commented Jun 1, 2018

done

@alfredodeza merged commit 0da1b40 into ceph:master on Jun 1, 2018