Disable LBA weighting on files and SSDs - extend to SMR drives #10182
You can disable LBA weighting by setting the module option.
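For reference, a minimal sketch of doing that on Linux, assuming the tunable in question is `metaslab_lba_weighting_enabled` (the OpenZFS module parameter for metaslab LBA weighting):

```sh
# Sketch: turn off LBA weighting, assuming the tunable is
# metaslab_lba_weighting_enabled (see zfs-module-parameters(5)).

# Runtime toggle (affects subsequent metaslab weight calculations):
echo 0 > /sys/module/zfs/parameters/metaslab_lba_weighting_enabled

# Persistent across reboots/module reloads:
echo "options zfs metaslab_lba_weighting_enabled=0" >> /etc/modprobe.d/zfs.conf
```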
Unless/until I can successfully get my drives to resilver and stay in the array (see ticket #10214 and the recent Blocks & Files postings), I can't benchmark this. (I have WD40EFAX drives too.) However, if you take the time to sit through Manfred Berger's presentation on SMR (OpenZFS forum, Paris 2015: https://www.youtube.com/watch?v=a2lnMxMUxyc), you'll understand why LBA weighting is utterly valueless on SMR drives, and quite likely to be harmful, because it causes the drives to generate massively large indirection tables.

Essentially, an SMR zone is the mechanical equivalent of an SSD block, and the drive translates every LBA you send it to something inside the zones. It doesn't matter where you THINK you're putting data on the drive; the drive makes up its own mind about where it ends up. There is a complete disconnect between LBA space and the actual position on the platters.

Adding to the confusion, the CMR (conventional, i.e. non-shingled, magnetic recording) landing space for writes can be analogised to the SLC write cache on TLC/QLC SSDs: data sits there until the drive decides on a final resting place during quiet periods (or until the CMR zone fills up).

FWIW: reports of resilvering on DM-SMR drives without the WD RED firmware bug all indicate that throughput either grinds to a near halt (kilobytes per second) or actually does stop for extended periods whilst the CMR zone is flushed out. As this zone can be up to 100GB in size, that flush can take quite a while.
After some more digging, I think LBA weighting is essentially valueless beyond the first few LBAs even on CMR drives. The reason is the way sectors are addressed (see the explanation in section 1.4.2, "G1 Layout").

In other words: LBA-to-platter/head allocation runs along the platters before switching heads, and may (or may not) run in a serpentine manner to avoid the actuator having to seek from one extreme of the platter to the other when changing between heads on a sequential write/read.

Which in turn means that LBA weighting is only valuable for the first N tracks of the first platter; beyond that, you have no idea what the speed will actually be. Speed may decrease to a minimum with each track step and then increase to a peak before decreasing again (serpentine pattern), or it may decrease to a minimum and then snap back to peak speed as the head switches to track 0 on the next platter (traditional linear pattern).

We, as end users, do not know how the disk is laid out, are not expected to be able to find out, and are likely to be prevented from doing so by any means other than benchmarking the drive from end to end to build a response histogram.

(NB: the terminology "cylinder" in the document likely derives from the ancestral device, which was a rotating magnetic drum (cylinder), not a flat platter; a "cylinder" in this context is one face of a platter.)
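As a rough sketch of such an end-to-end benchmark (`/dev/sdX` is a placeholder and the sample counts are arbitrary):

```sh
#!/bin/sh
# Rough sketch: sample sequential read throughput at evenly spaced offsets
# across the whole LBA range to build a crude response histogram.
DEV=/dev/sdX          # placeholder device; must be idle during the test
SAMPLES=32            # number of points across the LBA range
CHUNK_MB=256          # MiB read per sample point

DEV_MB=$(( $(blockdev --getsize64 "$DEV") / 1048576 ))
STEP_MB=$(( DEV_MB / SAMPLES ))

i=0
while [ "$i" -lt "$SAMPLES" ]; do
    printf 'offset %6d MiB: ' $(( i * STEP_MB ))
    # iflag=direct bypasses the page cache so the disk itself is measured;
    # dd reports throughput on the last line of its stderr output.
    dd if="$DEV" of=/dev/null bs=1M count="$CHUNK_MB" \
       skip=$(( i * STEP_MB )) iflag=direct 2>&1 | tail -n 1
    i=$(( i + 1 ))
done
```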
While I have no objection that LBA weighting likely does not make much sense for SMR (the question is how to reliably detect that for device-managed SMR), as I see from the code, it has not really been used for 3 years now, since 4e21fd0 switched weighting to SPACEMAP_HISTOGRAM.
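On the detection question: the Linux kernel does expose a per-device zoning attribute, but it only helps for drives that declare themselves. A small sketch:

```sh
# Sketch: ask the kernel what it knows about each disk's zoning.
# Host-aware/host-managed SMR drives report "host-aware"/"host-managed" here,
# but the problematic drive-managed SMR disks typically report "none",
# which is exactly why they are hard to detect reliably.
for d in /sys/block/sd*; do
    printf '%s: zoned=%s rotational=%s\n' "${d##*/}" \
        "$(cat "$d/queue/zoned")" "$(cat "$d/queue/rotational")"
done
```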
In which case, disabling or removing it entirely really should be a no-brainer.
It may still be used for older pools not yet upgraded to the new feature. But since it should not affect new pools, I would not focus on it too much. If there is a way to detect SMR drives (like a rotating disk with TRIM support, or something better), I see no problem using it. Otherwise I would leave it as-is.
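A minimal sketch of that "rotating disk with TRIM support" heuristic, using sysfs (`discard_max_bytes > 0` is used here as a proxy for TRIM support):

```sh
# Sketch: flag rotational drives that advertise TRIM/discard as likely SMR.
# This catches drives such as the WD EFAX series, but will miss DM-SMR
# drives that hide TRIM entirely (see the Seagate examples below).
for d in /sys/block/sd*; do
    rot=$(cat "$d/queue/rotational")
    trim=$(cat "$d/queue/discard_max_bytes")
    if [ "$rot" = "1" ] && [ "$trim" -gt 0 ]; then
        echo "${d##*/}: rotational + TRIM -> likely SMR; skip LBA weighting"
    fi
done
```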
This is going to need a blacklist. However, even a blacklist is not reliable, given Western Digital and others putting SMR into models that were previously PMR/CMR. :/
Are there more examples of such HDDs? Unless I'm mistaken, this seems to be a legacy product no longer made by Seagate. If there are no further examples, this may just be an edge case worth documenting and writing a honking warning about, not necessarily something to code support for in the near term.
ALL current Seagate Barracuda and Barracuda Compute drives are undeclared DM-SMR without TRIM, and Toshiba's drives don't declare TRIM either. It really is a mess. Apart from the Hattis Law class action in California over WD REDs, there's another case filed in New York over the other drives: https://classactionsreporter.com/wp-content/uploads/Western-Digital-SMR-Hard-Drives-Compl.pdf
I don't think this is the case. I recently deployed a 2TB Seagate Barracuda 2.5in HDD that is SMR with TRIM. Here's the output of
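For anyone wanting to run the same check: on a SATA drive, TRIM support shows up in the ATA identify data, e.g. (sketch; `/dev/sdX` is a placeholder):

```sh
# Sketch: query a SATA drive's identify data for TRIM support.
# A TRIM-capable drive prints a line such as
# "Data Set Management TRIM supported"; exact wording varies by hdparm version.
hdparm -I /dev/sdX | grep -i trim
```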
This issue has been automatically marked as "stale" because it has not had any activity for a while. It will be closed in 90 days if no further activity occurs. Thank you for your contributions. |
This was originally fb40095
Owing to the way that SMR drives work, LBA weighting is extremely counterproductive.
Unfortunately there are a lot of SMR drives in CMR clothing out there, and the number is increasing. Making things worse, many don't report themselves as zoned devices.
However, there's a ray of sunshine: any rotational device which reports "trim" functionality is definitely SMR (or at least using zones internally) and should not be LBA-weighted. (Example: WDx0EFAX (RED) drives; the EFRX models are CMR.)
Detecting SMR-but-not-trimmable drives is a bit harder (example: Seagate ST3000DM003).
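A hedged sketch of what such a blacklist check might look like (the model list here is illustrative only, seeded with the one example from this thread; a real list would need the ongoing curation discussed above):

```sh
# Sketch: flag drives whose model string matches a known-SMR blacklist.
# The list is illustrative, not authoritative.
SMR_BLACKLIST="ST3000DM003"
for d in /sys/block/sd*; do
    model=$(cat "$d/device/model" 2>/dev/null) || continue
    for m in $SMR_BLACKLIST; do
        case "$model" in
            *"$m"*) echo "${d##*/} ($model): blacklisted as SMR" ;;
        esac
    done
done
```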