
Cannot provision OSDs in Luminous and Mimic #3388

Closed
alfredodeza opened this issue Nov 29, 2018 · 0 comments

Comments

@alfredodeza
Contributor

Bug Report

What happened: A refactor (commit 4cc1506) introduced changes that attempt to start an OSD by its device path, which is incorrect for non-containerized environments.

Errors show up similar to:

Unable to start service ceph-osd@sdb: Job for ceph-osd@sdb.service failed because the control process exited with error code. See "systemctl status ceph-osd@sdb.service" and "journalctl -xe" for details.

What you expected to happen: Starting an OSD by its device name should not happen in non-containerized environments, and a test should be run to ensure that OSDs start on supported branches (stable-3.2) for the related Ceph versions (Luminous).
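
For clarity, in a non-containerized ceph-disk deployment the systemd units are keyed by OSD ID (ceph-osd@0, ceph-osd@1, ...), not by device name, which is why ceph-osd@sdb fails. A minimal, illustrative Ansible task showing that naming (not the actual ceph-ansible handler; osd_id is a placeholder) might look like:

```yaml
# Illustrative only: start a non-containerized OSD by its ID, never by device.
- name: start a ceph-disk OSD by OSD ID (sketch, not ceph-ansible code)
  systemd:
    name: "ceph-osd@{{ osd_id }}"   # e.g. ceph-osd@0, not ceph-osd@sdb
    state: started
    enabled: yes
  vars:
    osd_id: 0                       # placeholder value for this example
```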

How to reproduce it (minimal and precise): Try to deploy ceph-disk OSDs in Luminous using the stable-3.2 branch

Share your group_vars files, inventory
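
No group_vars were attached; a minimal sketch of what a ceph-disk reproduction on stable-3.2 could use follows (these values are assumptions for illustration, not the reporter's actual configuration):

```yaml
# group_vars/osds.yml -- assumed values for reproducing, not the real setup
osd_scenario: collocated      # ceph-disk based scenario on stable-3.2
osd_objectstore: filestore
devices:
  - /dev/sdb                  # any spare data disk on the OSD nodes
```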

Environment:

  • OS (e.g. from /etc/os-release): Both CentOS 7 and Xenial
  • Kernel (e.g. uname -a):
  • Ansible version (e.g. ansible-playbook --version): ansible-2.6.8
  • ceph-ansible version (e.g. git head or tag or stable branch): stable-3.2
  • Ceph version (e.g. ceph -v): latest dev version of Luminous (the same is seen in Mimic)
leseb added a commit that referenced this issue Nov 29, 2018
The code is now able (again) to start OSDs that were configured with ceph-disk in a non-container scenario.

Closes: #3388
Signed-off-by: Sébastien Han <seb@redhat.com>
leseb added a commit that referenced this issue Nov 29, 2018
The code is now able (again) to start OSDs that were configured with ceph-disk in a non-container scenario.

Closes: #3388
Signed-off-by: Sébastien Han <seb@redhat.com>
guits pushed a commit that referenced this issue Nov 29, 2018
The code is now able (again) to start OSDs that were configured with ceph-disk in a non-container scenario.

Closes: #3388
Signed-off-by: Sébastien Han <seb@redhat.com>
guits closed this as completed Nov 29, 2018
guits reopened this Nov 29, 2018
leseb added a commit that referenced this issue Nov 29, 2018
The code is now able (again) to start OSDs that were configured with ceph-disk in a non-container scenario.

Closes: #3388
Signed-off-by: Sébastien Han <seb@redhat.com>
guits closed this as completed Nov 30, 2018
guits pushed a commit that referenced this issue Nov 30, 2018
The code is now able (again) to start OSDs that were configured with ceph-disk in a non-container scenario.

Closes: #3388
Signed-off-by: Sébastien Han <seb@redhat.com>
(cherry picked from commit 452069c)
guits pushed a commit that referenced this issue Nov 30, 2018
The code is now able (again) to start OSDs that were configured with ceph-disk in a non-container scenario.

Closes: #3388
Signed-off-by: Sébastien Han <seb@redhat.com>
(cherry picked from commit 452069c)
guits pushed a commit that referenced this issue Nov 30, 2018
The code is now able (again) to start OSDs that were configured with ceph-disk in a non-container scenario.

Closes: #3388
Signed-off-by: Sébastien Han <seb@redhat.com>
(cherry picked from commit 452069c)
(cherry picked from commit 8716643)
leseb added a commit that referenced this issue Dec 3, 2018
The code is now able (again) to start OSDs that were configured with ceph-disk in a non-container scenario.

Closes: #3388
Signed-off-by: Sébastien Han <seb@redhat.com>
(cherry picked from commit 452069c)
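
For context, the referenced fix restores starting OSDs by their ID rather than by device path when not running in containers. A hedged Ansible sketch of that general approach (not the merged change itself; the directory layout and filters below are assumptions) is:

```yaml
# Sketch of the general approach, not the actual patch.
- name: collect OSD data directories on this host
  find:
    paths: /var/lib/ceph/osd
    file_type: directory
  register: osd_dirs

- name: start ceph-osd units by OSD ID (e.g. ceph-osd@0)
  systemd:
    name: "ceph-osd@{{ item.path | basename | regex_replace('^.*-', '') }}"
    state: started
    enabled: yes
  loop: "{{ osd_dirs.files }}"
```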