
Bluestore osd configuration intermittently fails with (22) Invalid argument #105

Closed
travisn opened this issue Oct 24, 2016 · 2 comments
Assignees
Labels
ceph-rados Ceph core components / functionality

Comments

@travisn
Member

travisn commented Oct 24, 2016

Starting the bluestore OSDs in the demo vagrant environment, we intermittently see an OSD fail with the following error.

Oct 24 16:13:31 castle01 rkt[1568]: 2016-10-24 16:13:31.147976 c937080 -1 WARNING: the following dangerous and experimental features are enabled: bluestore,rocksdb
Oct 24 16:13:31 castle01 rkt[1568]: 2016-10-24 16:13:31.177217 c937080 -1 bluestore(/var/lib/castled/osd0) _read_fsid unparsable uuid
Oct 24 16:13:31 castle01 rkt[1568]: 2016-10-24 16:13:31.189627 c937080 -1 bdev(/var/lib/castled/osd0/block) open open got: (22) Invalid argument
Oct 24 16:13:31 castle01 rkt[1568]: 2016-10-24 16:13:31.189686 c937080 -1 OSD::mkfs: ObjectStore::mkfs failed with error -22
Oct 24 16:13:31 castle01 rkt[1568]: 2016-10-24 16:13:31.189758 c937080 -1  ** ERROR: error creating empty object store in /var/lib/castled/osd0: (22) Invalid argument
Oct 24 16:13:31 castle01 rkt[1568]: 2016-10-24 16:13:31.191071 I | ERROR: failed to config osd on device sdd. failed to initialize OSD at /var/lib/castled/osd0: failed osd mkfs for OSD ID 0, UUID 27eaf968-5e1c-4ddd-a967-6e02291c3c4e, dataDir /var/lib/castled/osd0: failed to run osd: exit status 1
@jbw976
Member

jbw976 commented Oct 25, 2016

While #110 seems to fix this issue temporarily, we will need to do something more reliable in the future. It definitely seems that after partitioning, the devices aren't quite ready for ceph --mkfs to access them. The workaround of waiting for 2 seconds helps, but we should have a more reliable way to verify that the partitions are ready before having ceph access them, perhaps some sort of device access test.
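
A minimal sketch (in Go, since Rook itself is Go) of the kind of device access test suggested above: poll the new partition's device node until it can be opened exclusively instead of sleeping a fixed 2 seconds. The waitForDeviceReady helper, the /dev/sdd1 path, and the timeout values are illustrative assumptions, not the actual Rook code.

```go
// Hypothetical device access test: wait until a freshly created partition's
// device node can be opened exclusively before running ceph-osd --mkfs.
package main

import (
	"fmt"
	"os"
	"syscall"
	"time"
)

// waitForDeviceReady polls devPath until it can be opened with O_EXCL
// (on Linux, opening a block device with O_EXCL fails with EBUSY while
// something else still holds it) or until the timeout expires.
func waitForDeviceReady(devPath string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(devPath, os.O_RDONLY|syscall.O_EXCL, 0)
		if err == nil {
			f.Close()
			return nil // device node exists and is not busy
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("device %s not ready after %s: %v", devPath, timeout, err)
		}
		time.Sleep(100 * time.Millisecond)
	}
}

func main() {
	// Example: wait up to 10s for the partition created on sdd.
	if err := waitForDeviceReady("/dev/sdd1", 10*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("device ready, safe to run ceph-osd --mkfs")
}
```

Running `udevadm settle` after partitioning would be another option, but an open-based probe like this checks the specific device node the OSD is about to use.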

@travisn travisn self-assigned this Oct 25, 2016
@travisn travisn self-assigned this Oct 26, 2016
@travisn travisn modified the milestones: initial public release, initial public preview Oct 27, 2016
@travisn travisn added the ceph-rados Ceph core components / functionality label Dec 7, 2016
@travisn
Member Author

travisn commented Feb 16, 2017

closing as this hasn't been seen recently

@travisn travisn closed this as completed Feb 16, 2017
thotz pushed a commit to thotz/rook that referenced this issue Jun 5, 2020
Create sample-how-to-write-provisioner.md
leseb added a commit to leseb/rook that referenced this issue Aug 26, 2020
Bug 1862133: Making RBD provisioner, Ceph provisioner and Ceph node optional