littlefs storage utilization #661
Hi @dorsadeh, thanks for opening an issue. littlefs can store multiple files in a block, but with a caveat: these files, called "inline files", need to fit in the device's RAM, controlled by the cache_size configuration. So if you set cache_size >= 2KB, you should no longer see one block per file. This is a tradeoff necessary to allow multiple open files, since a sync of any file might need to rewrite that block, possibly invalidating other open-but-unsynced files.
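To make the cache_size suggestion concrete, here is a sketch of what such a configuration might look like for the 128MB / 128KB-erase-block NAND described in this issue. All field values are illustrative assumptions, not a tested setup, and the block device callbacks are omitted:

```c
#include "lfs.h"

// Sketch: raise cache_size so small files can be stored inline in metadata
// blocks instead of each claiming a full 128 KiB erase block.
// Values are assumptions for a 128 MiB NAND part; adjust for real hardware.
static uint8_t read_buf[2048];
static uint8_t prog_buf[2048];
static uint8_t lookahead_buf[16];

const struct lfs_config cfg = {
    // .read, .prog, .erase, .sync callbacks omitted for brevity
    .read_size        = 16,
    .prog_size        = 16,
    .block_size       = 131072,  // 128 KiB NAND erase block
    .block_count      = 1024,    // 128 MiB total
    .cache_size       = 2048,    // inline files must fit in this cache
    .lookahead_size   = 16,
    .block_cycles     = 500,
    .read_buffer      = read_buf,
    .prog_buffer      = prog_buf,
    .lookahead_buffer = lookahead_buf,
};
```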
Hi @geky, it seems inline file size is also limited to 0x3fe bytes, besides the cache size. Even if cache_size is increased to 32768, a 2KB file would still use a new block. Is my understanding right?

```c
static lfs_ssize_t lfs_file_flushedwrite(lfs_t *lfs, lfs_file_t *file,
        const void *buffer, lfs_size_t size) {
    const uint8_t *data = buffer;
    lfs_size_t nsize = size;

    if ((file->flags & LFS_F_INLINE) &&
            lfs_max(file->pos+nsize, file->ctz.size) >
            lfs_min(0x3fe, lfs_min(
                lfs->cfg->cache_size,
                (lfs->cfg->metadata_max ?
                    lfs->cfg->metadata_max : lfs->cfg->block_size) / 8))) {
        // inline file doesn't fit anymore
        int err = lfs_file_outline(lfs, file);
        if (err) {
            file->flags |= LFS_F_ERRED;
            return err;
        }
    }
    // ...
```
Ah, @rabbitsaviola, you are right, I had forgotten about the field limit. On disk, inline files are stored with a 10-bit length field, with 0x3ff representing a deleted file, so there is a hard limit at ~1 KiB. There are 2 extra reserved bits in the tag (it wasn't clear if these 2 bits would be more useful for ids (files per metadata block) or for the attribute length). This could raise the limit to 12 bits (~4 KiB), at the cost of some complexity since these bits aren't contiguous in the tag.
Hi @geky , thanks for your explanation. As it's limited by tag structure, it seems difficult to raise the limit to bigger value, such as 32KB? In my application there're many read-only files and most of them are smaller than 32KB. Sufficient RAM space could be provided as cache if it can help improve the disk utilization. Do you have any suggestion? |
Yes, this is a significant oversight in littlefs, one of a number of issues (mostly performance) related to NAND. NAND support was an afterthought vs NOR and it shows in places like this. It's still possible to improve this, for example by using the extra bits to indicate an "extended tag" that is multiple words long. I'm looking into making significant changes to how metadata is stored, so this should improve in the long term, but that will take time since these are more involved changes. A short-term option, which to be fair may be a "cheat", would be to use an FTL such as the Dhara FTL to convert the NAND into more littlefs-friendly block sizes. littlefs's wear leveling could be disabled in this case. In theory this would be a full-featured solution at a code/complexity cost, though I realize it's not ideal.
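As a sketch of the FTL route: littlefs could sit on top of logical blocks exposed by the FTL, with littlefs's own wear leveling disabled via block_cycles = -1 since the FTL already handles it. The values below are assumptions for the 128MB part in this issue, and the callbacks that would translate reads/writes into FTL operations are omitted:

```c
#include "lfs.h"

// Sketch: littlefs over an FTL (e.g. Dhara) that exposes small logical
// blocks on top of 128 KiB NAND erase blocks. Values are assumptions.
const struct lfs_config cfg_over_ftl = {
    // .read, .prog, .erase, .sync would call into the FTL layer
    .read_size      = 16,
    .prog_size      = 16,
    .block_size     = 4096,   // logical block exposed by the FTL
    .block_count    = 32768,  // 128 MiB / 4 KiB
    .cache_size     = 256,
    .lookahead_size = 16,
    .block_cycles   = -1,     // disable littlefs wear leveling; FTL owns it
};
```

With 4 KiB logical blocks, a 2KB file wastes at most half a block instead of 126KB.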
Thanks for your suggestion, @geky. I did try Dhara as an FTL for FatFs before, but its performance was much worse than littlefs. A metadata cache might be needed to improve the performance. I'll try it with littlefs to double-check.
I'm looking forward to progress on this as well, not so much from a storage utilization standpoint as for efficiency and data recoverability. Examining block 64 with revcount = 00000002: the 1022(+4) chunks stand out, hehe.
With inline enabled, there seems to be no way to actually utilize the first inline tag written, as it is written during creation of the file upon open, before any writes to the file have happened. It would be very useful to be able to write both the name and the first bit of inline data in that first record, as it could then contain the initial header of my files.
I forgot to say, @dorsadeh, that any file that does not fit inline will be allocated a full erase-block-sized chunk (block_size) in the CTZ structure for the file. The ability to write smaller chunks to a sector will not help with the current way littlefs allocates data space.
There are a couple scripts used during development:

```
$ ./scripts/readtree.py --help
$ ./scripts/readtree.py disk 4096
```

Though no promises they work all the time. It would be nice to make these stable and move them to a typed language, but it's low-priority.
This is also caused by the tag encoding limit mentioned above. CRC padding uses the same 10-bit field in the tag, so the largest amount of padding is 1022 bytes (1023 being reserved for deleted tags). littlefs then writes multiple CRC tags to fill out the necessary padding, which is a hack, but at least keeps littlefs functional.
I can address this in #942 (comment), thanks for creating an issue.
Yes, basically. In theory you could omit the empty inline tag, but since littlefs then writes ~1000 bytes of padding, you wouldn't really be gaining anything.
I just thought I'd add a quick update on this. I've been making progress on this in the background, and have a prototype working that should, in theory, be able to remove both the RAM requirement for inline files and the tag encoding limitations. Though it will be a bit longer before this is usable.
I'm using littlefs on a 128MB external NAND flash.
The flash erase resolution is only in blocks sized 128KB.
My application writes files ranging from 2KB to 60KB.
I noticed littlefs writes every new file to a new flash block; in the case of a 2KB file, that is only 1.5625% utilization of the block space!
Is this issue solvable?