DeepSea can't detect whether a disk is an SSD or HDD if it sits behind a RAID controller. #70

Closed
jschmid1 opened this issue Dec 20, 2016 · 9 comments

@jschmid1 (Contributor)

So far I've heard from a couple of people who want to deploy with a RAID setup. That won't work with the current method of detecting solid-state drives.

The only tool that can detect the correct type in that situation is smartctl with its megaraid device type (e.g. `smartctl -d megaraid,N`).

We either want to change the way we detect the disk type or add some override capability.
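
For illustration, here is a minimal sketch (assumed helper name and parsing, not DeepSea's actual detection code) of how smartctl's megaraid addressing could be used to classify a drive sitting behind a MegaRAID controller:

```python
# Hypothetical sketch: classify a drive behind a MegaRAID controller as SSD or
# HDD by parsing the "Rotation Rate" field of 'smartctl -i'.
import re
import subprocess

def is_ssd(device, megaraid_id=None):
    """Return True if smartctl reports a non-rotating (solid-state) device."""
    cmd = ['smartctl', '-i', device]
    if megaraid_id is not None:
        # Address the physical disk behind the controller, e.g. '-d megaraid,0'
        cmd[1:1] = ['-d', 'megaraid,{}'.format(megaraid_id)]
    out = subprocess.check_output(cmd).decode()
    match = re.search(r'^Rotation Rate:\s*(.+)$', out, re.MULTILINE)
    return bool(match) and 'Solid State' in match.group(1)

# e.g. is_ssd('/dev/sda', megaraid_id=0) for the first physical disk behind /dev/sda
```

The physical-disk index would still have to come from somewhere (controller enumeration or configuration), which is part of why an override capability is attractive as well.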

@swiftgist (Contributor)

Is this mainly for those doing pass-through? That is, the drives are not in a RAID configuration; they are simply connected to a RAID controller.

If somebody is actually using RAID 3 or RAID 5, what is the "right" answer? Suggest that the "device" be an OSD? That seems wrong. If it's RAID 0, then it's probably fine, if not downright confusing, to use both RAID and Ceph for redundancy. Any thoughts?

@jschmid1 (Contributor, Author)

> Is this mainly for those doing pass-through? That is, the drives are not in a RAID configuration; they are simply connected to a RAID controller.

I'm pretty sure that's the case here.

> If somebody is actually using RAID 3 or RAID 5, what is the "right" answer? Suggest that the "device" be an OSD? That seems wrong.

Hmm, in that case they surely don't want it to be treated as a regular OSD.

> If it's RAID 0, then it's probably fine, if not downright confusing, to use both RAID and Ceph for redundancy. Any thoughts?

IIRC, one reason to use RAID-1 arose from a need for reliability in latency/recovery time... I've never heard of a use case for RAID-0 in Ceph, though.

Manual intervention might be needed here.

@jschmid1 (Contributor, Author)

++ If we rely solely on SMART's output, we could also include checks for potential disk issues.

++ speed (hwinfo: real 0m2.044s vs. smartctl: real 0m0.445s)
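
As a sketch of what such a health check could look like (hypothetical helper, not existing DeepSea code), the overall health verdict is available from the same smartctl call path:

```python
# Hypothetical sketch: surface SMART's overall-health verdict alongside the
# disk-type detection so failing disks can be flagged before deployment.
import subprocess

def smart_health_ok(device):
    """Return True if 'smartctl -H' reports the self-assessment as passed/OK."""
    # smartctl can exit non-zero when it finds problems, so don't raise on that.
    out = subprocess.run(['smartctl', '-H', device],
                         stdout=subprocess.PIPE).stdout.decode()
    # ATA disks report "... test result: PASSED"; SCSI disks report "SMART Health Status: OK".
    return 'PASSED' in out or 'OK' in out

# e.g. smart_health_ok('/dev/sda')
```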

@smithfarm (Contributor)

@jschmid1 (Contributor, Author)

Thanks for the pointer, @smithfarm.

We might still want to consider doing an initial health check (before the initial deployment).
The regular checks should definitely go into (or stay in) ceph-mgr.

@jschmid1 (Contributor, Author)

++ adds portability, as hwinfo is not available on other distros.

jschmid1 pushed a commit to jschmid1/DeepSea that referenced this issue Jan 11, 2017
Signed-off-by: Joshua Schmid <jschmid@suse.de>
jschmid1 pushed a commit to jschmid1/DeepSea that referenced this issue Jan 19, 2017
Signed-off-by: Joshua Schmid <jschmid@suse.de>
jschmid1 pushed a commit to jschmid1/DeepSea that referenced this issue Jan 20, 2017
Signed-off-by: Joshua Schmid <jschmid@suse.de>
jan--f added a commit that referenced this issue Mar 13, 2017
rewrite cephdisks / hardware detection ref #70
@jschmid1 (Contributor, Author)

Fixed in #73.

@BlaineEXE (Contributor) commented Mar 24, 2017

I realize this request has already been merged and closed, but I have something we might consider here. libstoragemgmt (https://github.com/libstorage/libstoragemgmt) could be used here as a generic disk detection subsystem; the code overhead might be much smaller, and our testing requirements would be reduced since it's already tested.

Edit: I looked through the code and tests more fully, and it seems hwinfo, lshw, and smartctl provide the data. At first glance, I thought megacli, etc. might have been used. I'm not sure libstoragemgmt would be totally applicable here, but it is a good thing to be aware of. My team at HPE has pushed a lot of changes into it to make sure it works on ProLiant.
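
For reference, a rough sketch of what disk classification through libstoragemgmt's Python bindings might look like; the `lsm.LocalDisk` calls below are my reading of its local-disk API and should be treated as an assumption rather than verified code:

```python
# Hypothetical sketch using libstoragemgmt's Python bindings (the 'lsm' module).
# LocalDisk.list()/rpm_get() names are assumptions based on the upstream docs.
from lsm import Disk, LocalDisk

def classify_local_disks():
    """Map each local disk path to 'ssd', 'hdd', or 'unknown' via its RPM value."""
    result = {}
    for path in LocalDisk.list():
        rpm = LocalDisk.rpm_get(path)  # may raise LsmError on unsupported hardware
        if rpm == Disk.RPM_NON_ROTATING_MEDIUM:
            result[path] = 'ssd'
        elif rpm > 0:
            result[path] = 'hdd'
        else:
            result[path] = 'unknown'
    return result

# e.g. classify_local_disks() -> {'/dev/sda': 'hdd', '/dev/nvme0n1': 'ssd', ...}
```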

@swiftgist (Contributor)

I cannot recall the exact post, but when I was deciding what to use for hardware detection and settled on hwinfo, something I read discouraged me from using libstoragemgmt. I am all for not reinventing wheels, and if this meets the requirements, I am fine with migrating to it. However, I would prefer custom code over workarounds for anything lacking in a library. Generally, there's less to debug.
