SD card optimized formatting #193
Comments
mkfs.vfat can be extended with any new functionality. Please describe exactly what is not supported and provide a test case to verify it.
Hi @pali. Apologies for the long delay in replying, but it was only today that I found out exactly how best to format an SD card under Linux for performance. I know more now because the sdcard.org formatting tool recently had a Linux version released ( https://www.sdcard.org/downloads/sd-memory-card-formatter-for-linux/ ), which led to this thread on Hacker News: https://news.ycombinator.com/item?id=35610243 From those comments, I found this post very helpful:
However, this is the link most relevant to this feature request: it explains how to calculate the reserved sectors and where the data partition should begin for optimal performance. It would be good if
Just to avoid confusion: the tool is named mkfs.fat. Previously it was named mkdosfs, and for compatibility with other tools the install script creates symlinks for mkdosfs and also for mkfs.msdos and mkfs.vfat. The name under which the tool is invoked is not used for anything.

mkfs.fat already aligns formatted sectors to the chosen cluster size (unless the filesystem is too small). mkfs.fat also already chooses C/H/S geometry based on the SD Card Part 2 File System Specification. mkfs.fat reads from the OS the offset of the partition from the beginning of the disk / SD card for proper filling of the "hidden sectors" field, but does not use it for alignment (or for filling the reserved sectors). That could be an improvement in this area, but I'm not sure it is really needed.

Normally it is enough to create an MBR partition on the SD card which starts at a 4MB offset from the beginning; 4MB is certainly aligned to any possible NAND erase size. When the partition size is larger than 8GB, mkfs.fat formats it to FAT32 with an 8kB cluster size. This should already be fine. I will comment on your points below.

Based on my comments above, I think that mkfs.fat does not need other SD-specific stuff. There is room for improving cluster size selection (which also affects the File Allocation Table size and therefore the total space available for data), C/H/S geometry parameters aligned with the MBR (needed for some devices), and erasing/discarding sectors during formatting. Anyway, if there is anything else which you think needs to be handled, please let me know. But performance is mostly impacted by the writer (the kernel vfat.ko driver), not the formatter.
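The 4MB-offset rule of thumb is easy to sanity-check with arithmetic. A tiny sketch (the candidate erase sizes are my assumptions for illustration; real NAND geometries vary, but are power-of-two sized):

```python
# Sketch: a 4 MiB partition offset is a multiple of every power-of-two
# erase size up to 4 MiB, so it is aligned to any such NAND erase block.
OFFSET = 4 * 1024 * 1024  # 4 MiB, in bytes

erase_sizes = [2 ** n for n in range(14, 23)]  # 16 KiB .. 4 MiB (hypothetical)

for size in erase_sizes:
    assert OFFSET % size == 0, f"offset not aligned to {size} B erase size"

print("4 MiB offset is aligned to all power-of-two erase sizes up to 4 MiB")
```

The same argument fails for the historical 1 MiB (or 31.5 KiB) partition starts once erase blocks grow past the offset itself, which is why 4 MiB is the comfortable choice today.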
Access to the CSD register requires putting the SD card into an SDHCI (or other mmc-compatible) controller which is connected to the processor. These are mostly found in industrial and embedded hardware and are not common on x86 computers or laptops. But I have some laptops whose SD card reader is a PCIe-based SDHCI controller, which allows it. The USB mass-storage protocol (used by most SD card readers) has no way to reach the SDHCI controller inside, so there is no access to the CSD register. To check whether your SD card reader is not USB based and supports access to the CSD register, just look at whether your SD card is detected as a real mmc block device /dev/mmcblk0 and not as a generic (scsi) device /dev/sda.
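The /dev/mmcblk0 vs /dev/sda check above can be automated with a short sketch. The helper below is my own illustration, not an existing tool: it looks for mmcblk* entries under /sys/block that expose a `device/csd` attribute (the Linux mmc core publishes the raw CSD there):

```python
import glob
import os

def mmc_csd_paths(sys_block="/sys/block"):
    """Return sysfs paths to the CSD register of each detected mmc block device.

    A card behind a USB mass-storage reader appears as sd* and exposes no
    'device/csd' attribute, so such readers yield nothing here.
    """
    paths = []
    for dev in sorted(glob.glob(os.path.join(sys_block, "mmcblk*"))):
        csd = os.path.join(dev, "device", "csd")
        if os.path.isfile(csd):
            paths.append(csd)
    return paths

if __name__ == "__main__":
    found = mmc_csd_paths()
    if found:
        print("CSD readable at:", *found)
    else:
        print("No mmc block device found; the reader is probably USB mass-storage")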
The mmc specifications keep being extended, new useful things are added, and then they get used... So this is not an argument against adding it. Rather, it really does not make sense if the card firmware reformats the card to some old-fashioned filesystem like (ex)FAT. The only purpose of FAT these days is compatibility with MS systems and other old systems. It is not a filesystem designed for performance, nor for flash storage.
This card area is not accessible by standard commands. It is not part of the visible block device in the system. You need different mmc commands to access it, meaning you need to talk to the SDHCI controller (or another mmc-compatible one). Also, I think that the Linux kernel still has no support implemented for it, nor ioctls exported from /dev/mmcblk..., but I do not remember the exact details. So no tool normally touches it. Linux does already support accessing eMMC boot partitions, though, and those are exported as additional block devices in /dev/, which the user can use like any other block device.
I just had the misfortune of reading the "SD Specifications Part 2: File System Specification" too, so here are some comments.
This honestly is all it needs for me to consider it "SD-optimized" or whatever. Presumably we do it like exfatprogs and accept a BU size, so the program chooses a reserved size that aligns to the BU size (when combined with the offset).
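The reserved-count idea could be sketched roughly like this (hedged: the function, the 512-byte sector assumption, and the 32-sector floor are illustrative, not exfatprogs' or mkfs.fat's actual logic):

```python
def reserved_for_alignment(offset_sectors, bu_sectors, min_reserved=32):
    """Pick a reserved-sector count so the area after the reserved sectors
    starts on a boundary-unit (BU) boundary counted from the start of the
    disk (partition offset + reserved sectors).

    All values are in 512-byte sectors; min_reserved=32 is an assumed floor.
    """
    reserved = min_reserved
    rem = (offset_sectors + reserved) % bu_sectors
    if rem:
        reserved += bu_sectors - rem
    return reserved

# Example: partition at 1 MiB (2048 sectors), BU of 4 MiB (8192 sectors)
print(reserved_for_alignment(2048, 8192))  # 6144 -> 2048 + 6144 = 8192, aligned
```

A full implementation would align the start of the data region (after the FATs and root directory) rather than just the end of the reserved area, but the padding arithmetic is the same.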
Pedants might point to the recommended way for FAT12/16, which is to move the partition (not the reserved count) to get alignment. I don't see how that's better than using the reserved count, and it's not like
The choice on the SD side is different for FAT12/16 vs FAT32 anyway, so shrug.
@danboid has definitely touched on an important and oft-misunderstood issue: using proper allocation unit sizes for flash memory. That is pretty much the problem of all the most popular filesystems in use today, like ext/NTFS. There was no notion of write-erase cycles at the time, and thus formats designed to run on spinning disks are wholly incompatible with flash memory and have undesirable qualities. In the case of SSDs this problem was largely bypassed by implementing RAM caches and smart controllers employing clever algorithms which mitigate the disadvantages of using legacy disk formats on a medium they were never intended for.[1]

However, SD cards do not do this. At most they have simple wear-leveling mechanisms, but that's it. A lot is left to the user: understanding the decisions about which filesystem to use and with which parameters. This is where everything falls off a cliff, since most people do not understand any of it. You can't blame them either, since good information is hard to come by. Most of the information on this topic is obsolete or simply incorrect, and it ends up being parroted everywhere in a never-ending cycle. When the first search results on Google lead to incorrect answers, the most upvoted answers on sites like Stack Overflow / Serverfault[2] / SuperUser[3] are wrong, the official spec sheet from the SD Association is poorly written and doesn't properly explain its concepts, and even the Wikipedia page for FAT[4] contradicts itself and is factually incorrect, it's hard to blame anyone for not having the correct information.

It also doesn't help that the concepts of "sector size" (hardware, disk) and "cluster size" (software, filesystem) are often used interchangeably and incorrectly. Then there are terms like "allocation unit size" and "block size" thrown into the mix, which mean those same two concepts but only add to the confusion.
Answers like these instantly give away that the user doesn't know what they're talking about: "something like", "or maybe even", with no further explanation. That's the problem with obsolete information: what the user is saying was historically correct, but only until 2006, when the SDHC specification was released along with CSD 2.0. But that answer was given in 2023, not in 2005. What the poster, and pretty much 9/10 of the answers found online, doesn't understand is that the

Historically, the minimum erasable block size, and thus the correct allocation unit size, for an SD card was

Therefore the only correct way to get the proper allocation unit size for an SD card is to get the data from the CSD, which probably isn't possible using run-of-the-mill USB SD card readers. However, if you happen to have an SD controller and have mounted the SD card as a block device in Linux, you can do this:
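On Linux the 128-bit CSD is exposed as 32 hex digits in sysfs (e.g. /sys/block/mmcblk0/device/csd). A hedged Python sketch of decoding its erase-related fields, with bit positions taken from the SD Physical Layer specification; this decoder is my own illustration, not dosfstools code:

```python
def decode_erase_info(csd_hex: str) -> dict:
    """Extract erase-related fields from a raw 128-bit SD CSD register.

    Bit positions per the SD Physical Layer spec: CSD_STRUCTURE [127:126],
    ERASE_BLK_EN [46], SECTOR_SIZE [45:39], WRITE_BL_LEN [25:22].
    """
    n = int(csd_hex.strip(), 16)
    bits = lambda hi, lo: (n >> lo) & ((1 << (hi - lo + 1)) - 1)

    erase_blk_en = bits(46, 46)       # 1: single write blocks are erasable
    sector_size = bits(45, 39) + 1    # erase sector size, in write blocks
    write_bl = 1 << bits(25, 22)      # write block length in bytes

    return {
        "csd_version": bits(127, 126) + 1,  # 1 = CSD 1.0, 2 = CSD 2.0
        "erase_blk_en": erase_blk_en,
        # minimum erasable unit in bytes:
        "min_erase_bytes": write_bl if erase_blk_en else sector_size * write_bl,
    }

# Hypothetical CSD 2.0 value with ERASE_BLK_EN=1, SECTOR_SIZE=0x7F, WRITE_BL_LEN=9
sample = (1 << 126) | (1 << 46) | (0x7F << 39) | (9 << 22)
print(decode_erase_info(f"{sample:032x}"))
```

In CSD 2.0 (SDHC/SDXC) ERASE_BLK_EN is fixed to 1, which is exactly why the "read SECTOR_SIZE from the CSD" advice stopped telling the whole story after 2006.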
Enter that number into this tool [9] and press "Decode CSD". If your

That being said, I haven't run into a single SDXC card where the

I am not saying that the default values should be changed, though, as they've been set like that for two decades now without change in Windows (format) and Linux (dosfstools). With tools as widely used as these, I wouldn't go changing the defaults even if it were warranted, as it could break people's workflows. And the old block size assumptions are still valid for the old base SD spec. What I am saying, though, is that users should be mindful of this.

The old wisdom of a larger block size for larger files still stands, as more blocks means more filesystem overhead. So if your use case is only large files, like a digital camera taking high-resolution photos/video and nothing else, a large allocation unit size would be the best bet. In every other case, though, you would most definitely want to use the allocation unit size which matches the SD card; otherwise every single write and erase gets multiplied and wears out your card exponentially faster. It would be easy to say "always use 512", but FAT32 poses a hard limit of a maximum cluster count of (2^28 - 1), which means you can't use the optimal 512 on larger cards.
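The 2^28 - 1 ceiling translates into concrete capacities. A back-of-the-envelope sketch (ignoring FAT/reserved-area overhead and the handful of reserved cluster numbers):

```python
# Rough FAT32 capacity ceiling per cluster size. The 28-bit cluster-number
# space caps the volume at MAX_CLUSTERS * cluster_size bytes (approximately;
# real limits are slightly lower due to reserved cluster values).
MAX_CLUSTERS = 2 ** 28 - 1

for cluster_bytes in (512, 4096, 8192, 32768):
    max_bytes = MAX_CLUSTERS * cluster_bytes
    print(f"{cluster_bytes:>6} B clusters -> about {max_bytes / 2**30:.0f} GiB max")
```

With 512-byte clusters the ceiling works out to just under 128 GiB, which is why larger cards are forced onto bigger clusters regardless of what the flash would prefer.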
Even after all these decades, FAT32 is still viable as a simple filesystem. It lacks journaling and fancy features like file permissions and attributes, which you wouldn't want to use on an SD card anyway. It also doesn't have any mitigations for fragmentation, which is a good thing, since (de)fragmentation was a thing of spinning disks and you wouldn't want to go near it on any flash memory. Simplicity also makes it fast and easy to implement, and it causes very little unnecessary wear, unlike many other filesystems.

However, the lack of journaling makes it very prone to failure when power is cut abruptly, which can obviously happen a lot in many applications of SD cards, like battery-powered mobile devices. The hard limit on the cluster count is also unfortunate, since it doesn't allow the optimal allocation size, reducing performance and lifespan. The constant updating of the file allocation tables themselves also poses a wear problem, but it's nowhere near as bad as on something like ext or NTFS.

A solution to all these problems has existed since 2012: f2fs[10], which is specifically designed for commodity flash storage. Unfortunately, mainstream support for it is still pretty much nonexistent; at least according to online information it still isn't included by default in most Linux distributions, never mind Windows or macOS. The mainline Linux kernel itself has supported it since 2012, though, from v3.8 onwards.

References
[1] https://icrontic.com/article/how_ssds_work
[6] https://ebics.net/stm32s-sd-card-internal-structure-sketch/
[9] https://gurumeditation.org/1342/sd-memory-card-register-decoder/
FWIW, here's CSD from an older 2G microSD card I have:
Never say never, I guess?
Edit: oops, guess this isn't SDXC 😝
Good information there @Fry-kun; that shows that 512 bytes isn't a universal rule, and cards with a sector size other than 512 B do exist in the wild. So that'd be an SDHC, CSD 2.0 card, right? Any idea how old it is? I guess the assumption that all SDXC cards in the wild have 512-byte sectors hasn't been proven false yet. But ultimately you need to read the CSD to be certain of the specs of any given card.
Recently I learned there is a special tool made by the SD Association that optimally formats SD cards to maximize their performance. The problems with this are that it's not open source, you have to agree to their EULA to use it, and it's only available for Windows 7+ and macOS 10.7+.
Is there any chance this feature could be added to mkfs.vfat (and mkfs.exfat), or do you know of an existing open source implementation of such a tool?
https://www.sdcard.org/downloads/formatter/