
Ansible fails on Debian 10 with ceph_repository_type: distro (stable-4.0) #6890

Closed

RubenGarcia opened this issue Sep 14, 2021 · 7 comments
Bug Report

What happened:

Ansible fails with an error when running the command
ceph-volume --cluster ceph lvm batch --bluestore DEVICES --report --format=json

Running the same command manually on the host gives
--> DEPRECATION NOTICE
--> You are using the legacy automatic disk sorting behavior
--> The Pacific release will change the default to --no-auto
--> passed data devices: 0 physical, 180 LVM
--> relative data size: 1.0
--> IndexError: list index out of range

What you expected to happen:

Correct installation

How to reproduce it (minimal and precise):

Install Debian 10.
Enable backports and install Ceph with

apt install ceph/buster-backports ceph-mgr/buster-backports ceph-base/buster-backports ceph-osd/buster-backports ceph-mon/buster-backports ceph-common/buster-backports libradosstriper1/buster-backports smartmontools/buster-backports

Set the branch to stable-4.0.
Use ceph_repository_type: distro.
Run the Ansible playbook.


Environment:

  • OS (e.g. from /etc/os-release): Debian GNU/Linux 10 (buster)
  • Kernel (e.g. uname -a): Linux lexis-expstorage-2 4.19.0-17-amd64 #1 SMP Debian 4.19.194-3 (2021-07-18) x86_64 GNU/Linux
  • Docker version if applicable (e.g. docker version): N/A
  • Ansible version (e.g. ansible-playbook --version): 2.9.26
  • ceph-ansible version (e.g. git head or tag or stable branch): stable-4.0
  • Ceph version (e.g. ceph -v): ceph version 14.2.21 (5ef401921d7a88aea18ec7558f7f9374ebd8f5a6) nautilus (stable)

Possible solutions:

  • Document that this configuration does not currently work.
  • Ask Debian to update their package to a Ceph version that no longer has the bug.

RubenGarcia commented Sep 16, 2021

The problem was that the devices were multipath.
I recommend adding a clearer error message, such as

No supported disks found. Only raw disks and LVM volumes are supported (e.g. no raw multipath).

in a try/catch block at the appropriate location.
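The suggestion above could look roughly like the following. This is a hypothetical sketch, not actual ceph-volume code: `report_batch` is a stand-in for the real batch-report logic that assumes at least one usable device, and the wrapper converts the bare IndexError into the readable message proposed above.

```python
def report_batch(devices):
    # Stand-in for the real batch-report logic, which indexes into the
    # device list and assumes at least one usable raw disk or LVM volume.
    return devices[0]


def safe_report_batch(devices):
    # Hypothetical wrapper: turn the raw IndexError into an actionable
    # error message instead of a bare traceback.
    try:
        return report_batch(devices)
    except IndexError:
        raise SystemExit(
            "No supported disks found. Only raw disks and LVM volumes "
            "are supported (e.g. no raw multipath)."
        )
```

With a non-empty device list the wrapper is transparent; with an empty one, the user sees the explanatory message rather than "IndexError: list index out of range".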

guits (Collaborator) commented Sep 16, 2021

@RubenGarcia At first glance, yes, there's a bug here in any case: it should throw a 'nice' error message rather than one that is meaningless from the user's perspective.

By the way, are the 180 devices reported in the following output really all LVM devices?

--> passed data devices: 0 physical, 180 LVM

RubenGarcia commented Sep 18, 2021

These 180 LVM devices were created using this code:

cd /dev/mapper
for i in mpath*; do pvcreate "$i"; done                      # physical volume on each multipath device
for i in mpath*; do vgcreate "$i" /dev/mapper/"$i"; done     # one volume group per device
for i in mpath*; do lvcreate -l 100%FREE -n "$i" "$i"; done  # one LV filling each VG
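For LVs created this way, ceph-ansible can be pointed at them explicitly via its lvm_volumes variable instead of passing raw devices to the batch subcommand. A hypothetical group_vars fragment (the mpath names are examples; check the variable layout against your ceph-ansible branch's documentation):

```yaml
# group_vars/osds.yml (sketch): one entry per pre-created LV,
# where data is the LV name and data_vg is its volume group.
lvm_volumes:
  - data: mpatha
    data_vg: mpatha
  - data: mpathb
    data_vg: mpathb
```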

github-actions bot commented Oct 3, 2021

This issue has been automatically marked as stale because it has not had recent activity. It will be closed in a week if no further activity occurs. Thank you for your contributions.

RubenGarcia commented

Seriously, you need to configure your GitHub Actions bot to avoid closing issues that are not fixed.

guits (Collaborator) commented Oct 6, 2021

@RubenGarcia this isn't a ceph-ansible issue.
Would you mind opening a tracker issue at https://tracker.ceph.com/projects/ceph-volume ? Thanks!

guits closed this as completed Oct 6, 2021