ceph-volume: sort and align lvm list output #21812

Merged · 1 commit · Jun 1, 2018
9 changes: 4 additions & 5 deletions src/ceph-volume/ceph_volume/devices/lvm/listing.py
@@ -17,7 +17,7 @@

osd_device_header_template = """

-  [{type: >4}] {path}
+  {type: <13} {path}
"""

device_metadata_item_template = """
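
For reference, a quick illustration (not taken from the PR; the device types and paths are made up) of how the old and new format specs behave:

# Hypothetical values, only to contrast the old and new format specs.
entries = [("block", "/dev/vg0/osd-block-x"), ("db", "/dev/sdf2")]

for dev_type, path in entries:
    # Old spec: right-align the bare type in a 4-character field, brackets outside.
    print("  [{type: >4}] {path}".format(type=dev_type, path=path))
# The path column shifts because "[block]" and "[  db]" have different widths.

for dev_type, path in entries:
    # New spec: bracket the type first, then left-align it in a 13-character field.
    print("  {type: <13} {path}".format(type="[%s]" % dev_type, path=path))
# Now every path starts in the same column:
#   [block]       /dev/vg0/osd-block-x
#   [db]          /dev/sdf2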
@@ -31,18 +31,18 @@ def readable_tag(tag):

def pretty_report(report):
    output = []
-    for _id, devices in report.items():
+    for _id, devices in sorted(report.items()):
Contributor commented:
This looks OK to me. Would you mind sharing some output to see how it looks?

@thmour (Author) commented on May 5, 2018:
You can ignore the data device property. I added a sort on the osd id and also on the properties, so you can find what you want more easily, and I properly aligned the values of the devices (block, db) to 14 characters. Sample output below; a short illustrative sketch of the sorting follows it.

...
===== osd.163 ======

  [block]     /dev/ceph-658c470d-438f-42fc-856f-05c66bbaa536/osd-block-c1cab1e7-1c18-41a6-b4bd-6f056272de52

      block device              /dev/ceph-658c470d-438f-42fc-856f-05c66bbaa536/osd-block-c1cab1e7-1c18-41a6-b4bd-6f056272de52
      block uuid                Bh1NrB-vU5s-t4V2-3Cxb-lI1k-61bo-9Hiywu
      cephx lockbox secret
      cluster fsid              a3ce9f2a-8ad1-4138-ad52-52180dded0e4
      cluster name              ceph
      crush device class        None
      data device               /dev/sdab(0)
      db device                 /dev/sdf2
      db uuid                   bd5aa111-7811-467b-bd46-d088f44b10e2
      encrypted                 0
      osd fsid                  c1cab1e7-1c18-41a6-b4bd-6f056272de52
      osd id                    163
      type                      block

  [db]        /dev/sdf2

      PARTUUID                  bd5aa111-7811-467b-bd46-d088f44b10e2

===== osd.164 ======

  [block]     /dev/ceph-2a68c270-da1c-4b06-9dae-a5970a427511/osd-block-e179cd26-9644-4b48-abd5-f8d85b96c567

      block device              /dev/ceph-2a68c270-da1c-4b06-9dae-a5970a427511/osd-block-e179cd26-9644-4b48-abd5-f8d85b96c567
      block uuid                OZmtBW-dlx7-OSAq-WUDd-EYQJ-DVme-UIjKB3
      cephx lockbox secret
      cluster fsid              a3ce9f2a-8ad1-4138-ad52-52180dded0e4
      cluster name              ceph
      crush device class        None
      data device               /dev/sdac(0)
      db device                 /dev/sdf3
      db uuid                   dc275f31-184c-43ea-b157-4f559e4d3114
      encrypted                 0
      osd fsid                  e179cd26-9644-4b48-abd5-f8d85b96c567
      osd id                    164
      type                      block

  [db]        /dev/sdf3

      PARTUUID                  dc275f31-184c-43ea-b157-4f559e4d3114

===== osd.165 ======

  [block]     /dev/ceph-33862198-92f0-4c69-9857-fbf731f8383a/osd-block-a8711256-7b67-4f54-8b33-cf0170d36072

      block device              /dev/ceph-33862198-92f0-4c69-9857-fbf731f8383a/osd-block-a8711256-7b67-4f54-8b33-cf0170d36072
      block uuid                EUj19C-g6Wl-5w1o-08dG-rcBK-bhNp-xKIqqH
      cephx lockbox secret
      cluster fsid              a3ce9f2a-8ad1-4138-ad52-52180dded0e4
      cluster name              ceph
      crush device class        None
      data device               /dev/sdad(0)
      db device                 /dev/sdf4
      db uuid                   e6269926-2eaa-409c-86ea-c58a4f19fd57
      encrypted                 0
      osd fsid                  a8711256-7b67-4f54-8b33-cf0170d36072
      osd id                    165
      type                      block

  [db]        /dev/sdf4

      PARTUUID                  e6269926-2eaa-409c-86ea-c58a4f19fd57

...
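
To make that concrete, here is a minimal, self-contained sketch of what the two added sorted() calls do (the osd ids, tags, and the metadata formatting below are made up for illustration; they are simplified stand-ins for the module's real templates and readable_tag helper): the OSD sections come out in id order and each device's tags print in alphabetical order.

# Hypothetical report shaped like the one pretty_report() iterates over.
report = {
    '165': [{'type': 'block', 'path': '/dev/sdf4', 'tags': {}}],
    '163': [{'type': 'block', 'path': '/dev/sdf2',
             'tags': {'ceph.osd_id': '163', 'ceph.cluster_name': 'ceph'}}],
}

for _id, devices in sorted(report.items()):      # '163' prints before '165'
    print("===== osd.%s =====" % _id)
    for device in devices:
        # Same alignment idea as the new template: type padded to 13 characters.
        print("  {type: <13} {path}".format(type='[%s]' % device['type'],
                                            path=device['path']))
        for tag_name, value in sorted(device.get('tags', {}).items()):
            # Tags now print in a stable, alphabetical order
            # ('ceph.cluster_name' before 'ceph.osd_id').
            readable = tag_name.split('.')[-1].replace('_', ' ')   # e.g. 'cluster name'
            print("      %-25s %s" % (readable, value))

Without the sorted() calls, the report order depends on plain dict iteration order, which is not guaranteed to be deterministic across interpreters.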

        output.append(
            osd_list_header_template.format(osd_id=" osd.%s " % _id)
        )
        for device in devices:
            output.append(
                osd_device_header_template.format(
-                    type=device['type'],
+                    type='[%s]' % device['type'],
                    path=device['path']
                )
            )
-            for tag_name, value in device.get('tags', {}).items():
+            for tag_name, value in sorted(device.get('tags', {}).items()):
                output.append(
                    device_metadata_item_template.format(
                        tag_name=readable_tag(tag_name),
Expand Down Expand Up @@ -179,7 +179,6 @@ def single_report(self, device):
                return self.full_report(lvs=lvs)

        if lv:
-
            try:
                _id = lv.tags['ceph.osd_id']
            except KeyError:
@@ -26,7 +26,7 @@ def test_type_and_path_are_reported(self, capsys):
            {'type': 'data', 'path': '/dev/sda1', 'devices': ['/dev/sda']}
        ]})
        stdout, stderr = capsys.readouterr()
-        assert '[data] /dev/sda1' in stdout
+        assert '[data]        /dev/sda1' in stdout

    def test_osd_id_header_is_reported(self, capsys):
        lvm.listing.pretty_report({0: [