
ceph-ansible: purge cluster fails with error #1877

Closed
tmuthamizhan opened this issue Sep 8, 2017 · 8 comments

Comments

@tmuthamizhan

The purge-cluster playbook fails with the following error: "rmtree failed: [Errno 30] Read-only file system: '/var/lib/ceph/osd-lockbox/51394322-4e79-4c67-9a81-898c4a8ff1d4/lost+found'"

Logs: http://qa-proxy.ceph.com/teuthology/vasu-2017-09-08_17:30:43-ceph-ansible-master-distro-basic-vps/1609655/teuthology.log
Teuthology run: http://pulpito.ceph.com/vasu-2017-09-08_17:30:43-ceph-ansible-master-distro-basic-vps/
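
The paths in the error point at dmcrypt lockbox partitions, which ceph-disk mounts under /var/lib/ceph/osd-lockbox/<uuid>; the EROFS in the message indicates those partitions are still mounted read-only, so a recursive delete of /var/lib/ceph descends into them and fails. A quick diagnostic sketch for an affected node (not part of the playbook):

# Any lockbox partition still mounted under /var/lib/ceph will make a
# recursive delete fail with "Read-only file system" (EROFS):
grep /var/lib/ceph/osd-lockbox /proc/mounts

# Show the mount options to confirm the lockboxes are mounted ro:
findmnt -o TARGET,OPTIONS | grep osd-lockbox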

@leseb (Member) commented Sep 8, 2017

Fixed in: #1842

leseb closed this as completed Sep 8, 2017
vasukulkarni reopened this Sep 11, 2017

@vasukulkarni (Contributor)

Reopening as we are seeing this in master (logs are in the original issue).

@vasukulkarni (Contributor)

Any update on this one? I can ignore the result of purge-cluster, but that would end up masking everything.

@leseb (Member) commented Sep 13, 2017

@vasukulkarni do you have a recent log with the error? Thanks!

@vasukulkarni (Contributor)

@leseb here is the log from yesterday's run:
http://qa-proxy.ceph.com/teuthology/vasu-2017-09-12_20:21:30-ceph-ansible-master-distro-basic-ovh/1624069/teuthology.log

2017-09-12T20:49:28.939 INFO:teuthology.orchestra.run.ovh008.stdout:TASK [stop ceph mons] **********************************************************
2017-09-12T20:49:28.939 INFO:teuthology.orchestra.run.ovh008.stdout:task path: /home/ubuntu/ceph-ansible/infrastructure-playbooks/purge-cluster.yml:442
2017-09-12T20:49:28.998 INFO:teuthology.orchestra.run.ovh008.stdout:skipping: [ovh008.front.sepia.ceph.com] => {
2017-09-12T20:49:28.999 INFO:teuthology.orchestra.run.ovh008.stdout:    "changed": false,
2017-09-12T20:49:28.999 INFO:teuthology.orchestra.run.ovh008.stdout:    "skip_reason": "Conditional result was False",
2017-09-12T20:49:28.999 INFO:teuthology.orchestra.run.ovh008.stdout:    "skipped": true
2017-09-12T20:49:28.999 INFO:teuthology.orchestra.run.ovh008.stdout:}
2017-09-12T20:49:29.005 INFO:teuthology.orchestra.run.ovh008.stdout:skipping: [ovh059.front.sepia.ceph.com] => {
2017-09-12T20:49:29.005 INFO:teuthology.orchestra.run.ovh008.stdout:    "changed": false,
2017-09-12T20:49:29.005 INFO:teuthology.orchestra.run.ovh008.stdout:    "skip_reason": "Conditional result was False",
2017-09-12T20:49:29.005 INFO:teuthology.orchestra.run.ovh008.stdout:    "skipped": true
2017-09-12T20:49:29.006 INFO:teuthology.orchestra.run.ovh008.stdout:}
2017-09-12T20:49:29.027 INFO:teuthology.orchestra.run.ovh008.stdout:skipping: [ovh069.front.sepia.ceph.com] => {
2017-09-12T20:49:29.028 INFO:teuthology.orchestra.run.ovh008.stdout:    "changed": false,
2017-09-12T20:49:29.028 INFO:teuthology.orchestra.run.ovh008.stdout:    "skip_reason": "Conditional result was False",
2017-09-12T20:49:29.028 INFO:teuthology.orchestra.run.ovh008.stdout:    "skipped": true
2017-09-12T20:49:29.028 INFO:teuthology.orchestra.run.ovh008.stdout:}
2017-09-12T20:49:29.028 INFO:teuthology.orchestra.run.ovh008.stdout:
2017-09-12T20:49:29.028 INFO:teuthology.orchestra.run.ovh008.stdout:TASK [stop ceph mons on ubuntu] ************************************************
2017-09-12T20:49:29.028 INFO:teuthology.orchestra.run.ovh008.stdout:task path: /home/ubuntu/ceph-ansible/infrastructure-playbooks/purge-cluster.yml:446
2017-09-12T20:49:29.083 INFO:teuthology.orchestra.run.ovh008.stdout:skipping: [ovh008.front.sepia.ceph.com] => {
2017-09-12T20:49:29.084 INFO:teuthology.orchestra.run.ovh008.stdout:    "changed": false,
2017-09-12T20:49:29.084 INFO:teuthology.orchestra.run.ovh008.stdout:    "skip_reason": "Conditional result was False",
2017-09-12T20:49:29.084 INFO:teuthology.orchestra.run.ovh008.stdout:    "skipped": true
2017-09-12T20:49:29.084 INFO:teuthology.orchestra.run.ovh008.stdout:}
2017-09-12T20:49:29.089 INFO:teuthology.orchestra.run.ovh008.stdout:skipping: [ovh059.front.sepia.ceph.com] => {
2017-09-12T20:49:29.089 INFO:teuthology.orchestra.run.ovh008.stdout:    "changed": false,
2017-09-12T20:49:29.089 INFO:teuthology.orchestra.run.ovh008.stdout:    "skip_reason": "Conditional result was False",
2017-09-12T20:49:29.089 INFO:teuthology.orchestra.run.ovh008.stdout:    "skipped": true
2017-09-12T20:49:29.089 INFO:teuthology.orchestra.run.ovh008.stdout:}
2017-09-12T20:49:29.109 INFO:teuthology.orchestra.run.ovh008.stdout:skipping: [ovh069.front.sepia.ceph.com] => {
2017-09-12T20:49:29.110 INFO:teuthology.orchestra.run.ovh008.stdout:    "changed": false,
2017-09-12T20:49:29.110 INFO:teuthology.orchestra.run.ovh008.stdout:    "skip_reason": "Conditional result was False",
2017-09-12T20:49:29.110 INFO:teuthology.orchestra.run.ovh008.stdout:    "skipped": true
2017-09-12T20:49:29.110 INFO:teuthology.orchestra.run.ovh008.stdout:}
2017-09-12T20:49:29.117 INFO:teuthology.orchestra.run.ovh008.stdout:
2017-09-12T20:49:29.118 INFO:teuthology.orchestra.run.ovh008.stdout:TASK [remove monitor store and bootstrap keys] *********************************
2017-09-12T20:49:29.118 INFO:teuthology.orchestra.run.ovh008.stdout:task path: /home/ubuntu/ceph-ansible/infrastructure-playbooks/purge-cluster.yml:451
2017-09-12T20:49:29.445 INFO:teuthology.orchestra.run.ovh008.stdout:fatal: [ovh059.front.sepia.ceph.com]: FAILED! => {
2017-09-12T20:49:29.445 INFO:teuthology.orchestra.run.ovh008.stdout:    "changed": false,
2017-09-12T20:49:29.445 INFO:teuthology.orchestra.run.ovh008.stdout:    "failed": true
2017-09-12T20:49:29.445 INFO:teuthology.orchestra.run.ovh008.stdout:}
2017-09-12T20:49:29.445 INFO:teuthology.orchestra.run.ovh008.stdout:
2017-09-12T20:49:29.445 INFO:teuthology.orchestra.run.ovh008.stdout:MSG:
2017-09-12T20:49:29.445 INFO:teuthology.orchestra.run.ovh008.stdout:
2017-09-12T20:49:29.445 INFO:teuthology.orchestra.run.ovh008.stdout:rmtree failed: [Errno 30] Read-only file system: '/var/lib/ceph/osd-lockbox/60abf2f0-ed16-4092-b991-a48e4c3872ec/key-management-mode'
2017-09-12T20:49:29.501 INFO:teuthology.orchestra.run.ovh008.stdout:fatal: [ovh008.front.sepia.ceph.com]: FAILED! => {
2017-09-12T20:49:29.501 INFO:teuthology.orchestra.run.ovh008.stdout:    "changed": false,
2017-09-12T20:49:29.501 INFO:teuthology.orchestra.run.ovh008.stdout:    "failed": true
2017-09-12T20:49:29.502 INFO:teuthology.orchestra.run.ovh008.stdout:}
2017-09-12T20:49:29.502 INFO:teuthology.orchestra.run.ovh008.stdout:
2017-09-12T20:49:29.502 INFO:teuthology.orchestra.run.ovh008.stdout:MSG:
2017-09-12T20:49:29.502 INFO:teuthology.orchestra.run.ovh008.stdout:
2017-09-12T20:49:29.502 INFO:teuthology.orchestra.run.ovh008.stdout:rmtree failed: [Errno 30] Read-only file system: '/var/lib/ceph/osd-lockbox/a3d1a24f-8c4b-423c-b3c8-f723b7f0b785/keyring'
2017-09-12T20:49:29.538 INFO:teuthology.orchestra.run.ovh008.stdout:fatal: [ovh069.front.sepia.ceph.com]: FAILED! => {
2017-09-12T20:49:29.538 INFO:teuthology.orchestra.run.ovh008.stdout:    "changed": false,
2017-09-12T20:49:29.538 INFO:teuthology.orchestra.run.ovh008.stdout:    "failed": true
2017-09-12T20:49:29.538 INFO:teuthology.orchestra.run.ovh008.stdout:}
2017-09-12T20:49:29.538 INFO:teuthology.orchestra.run.ovh008.stdout:
2017-09-12T20:49:29.538 INFO:teuthology.orchestra.run.ovh008.stdout:MSG:
2017-09-12T20:49:29.538 INFO:teuthology.orchestra.run.ovh008.stdout:
2017-09-12T20:49:29.538 INFO:teuthology.orchestra.run.ovh008.stdout:rmtree failed: [Errno 30] Read-only file system: '/var/lib/ceph/osd-lockbox/8f6d82f6-c1a8-4eb8-b713-059bd1dc504d/lost+found'
2017-09-12T20:49:29.540 INFO:teuthology.orchestra.run.ovh008.stdout:
2017-09-12T20:49:29.540 INFO:teuthology.orchestra.run.ovh008.stdout:PLAY RECAP *********************************************************************
2017-09-12T20:49:29.540 INFO:teuthology.orchestra.run.ovh008.stdout:localhost                  : ok=0    changed=0    unreachable=0    failed=0
2017-09-12T20:49:29.540 INFO:teuthology.orchestra.run.ovh008.stdout:ovh008.front.sepia.ceph.com : ok=23   changed=18   unreachable=0    failed=1
2017-09-12T20:49:29.540 INFO:teuthology.orchestra.run.ovh008.stdout:ovh059.front.sepia.ceph.com : ok=22   changed=17   unreachable=0    failed=1
2017-09-12T20:49:29.540 INFO:teuthology.orchestra.run.ovh008.stdout:ovh069.front.sepia.ceph.com : ok=22   changed=17   unreachable=0    failed=1

@leseb (Member) commented Sep 13, 2017

I get it, you are collocating mons and OSDs... Working on a fix.
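
A sketch of one possible direction for such a fix (illustrative task names, not necessarily the actual diff that landed): unmount any lockbox partitions first, so the later recursive delete of /var/lib/ceph no longer hits the read-only mounts on collocated mon+OSD nodes:

# Hypothetical purge tasks, assuming lockboxes live under
# /var/lib/ceph/osd-lockbox/<uuid> as in the errors above.
- name: find mounted OSD lockboxes
  shell: awk '$2 ~ "^/var/lib/ceph/osd-lockbox/" {print $2}' /proc/mounts
  register: lockbox_mounts
  changed_when: false

- name: unmount OSD lockboxes
  command: umount {{ item }}
  with_items: "{{ lockbox_mounts.stdout_lines }}"

# With the lockboxes unmounted, the existing removal task can proceed:
- name: remove monitor store and bootstrap keys
  file:
    path: /var/lib/ceph
    state: absent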

@vasukulkarni (Contributor)

@leseb thanks. Also, I don't understand what that localhost entry at the end of the "PLAY RECAP" is:

localhost : ok=0 changed=0 unreachable=0 failed=0
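
(A note on that localhost line: purge-cluster.yml opens with a confirmation play that targets localhost, with fact gathering disabled and no tasks of its own, so it shows up in the recap with all zeros. From memory, that play looks roughly like the following; treat the exact wording as approximate:)

- name: confirm whether user really meant to purge the cluster
  hosts: localhost
  gather_facts: false
  vars_prompt:
    - name: ireallymeanit
      prompt: Are you sure you want to purge the cluster?
      default: 'no'
      private: no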

leseb added a commit that referenced this issue Sep 13, 2017
Handles the case when a mon is collocated with an OSD.

Closes: #1877
Signed-off-by: Sébastien Han <seb@redhat.com>
@leseb (Member) commented Sep 13, 2017

@vasukulkarni #1892
