
littlefs: Too small heap for file cache. #31524

Closed · r2r0 opened this issue Jan 22, 2021 · 9 comments · Fixed by #31781

Labels: area: File System · bug · priority: low

Comments

r2r0 (Member) commented Jan 22, 2021

Describe the bug
fs_open for the CONFIG_FS_LITTLEFS_NUM_FILESth file returns -ENOMEM.
E.g. with CONFIG_FS_LITTLEFS_NUM_FILES=3, when fs_open is called for the 3rd file it returns -ENOMEM.
After incrementing CONFIG_FS_LITTLEFS_NUM_FILES and opening only CONFIG_FS_LITTLEFS_NUM_FILES-1 files, the problem is not present.
After increasing file_cache_pool (in littlefs_fs.c) by even 128 bytes, the problem is not present (with CONFIG_FS_LITTLEFS_NUM_FILES=3).

To Reproduce
Try to open CONFIG_FS_LITTLEFS_NUM_FILES files at the same time on littlefs, as in the sketch below.
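
A minimal sketch of the reproduction, assuming a littlefs volume is already mounted; the `/lfs` mount point, file names, and function name below are illustrative, not from the original report:

```c
/* Reproduction sketch: open CONFIG_FS_LITTLEFS_NUM_FILES files at once.
 * Assumes a littlefs volume is already mounted at "/lfs" (illustrative).
 */
#include <zephyr.h>
#include <fs/fs.h>
#include <sys/printk.h>
#include <stdio.h>

void open_max_files(void)
{
	/* static so each fs_file_t starts zero-initialized */
	static struct fs_file_t files[CONFIG_FS_LITTLEFS_NUM_FILES];
	char path[20];

	for (int i = 0; i < CONFIG_FS_LITTLEFS_NUM_FILES; i++) {
		snprintf(path, sizeof(path), "/lfs/f%d", i);
		/* On the affected commits, the last fs_open() here fails
		 * with -ENOMEM even though exactly
		 * CONFIG_FS_LITTLEFS_NUM_FILES files are being opened.
		 */
		int rc = fs_open(&files[i], path, FS_O_CREATE | FS_O_RDWR);
		printk("fs_open(%s): %d\n", path, rc);
	}
}
```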

Expected behavior
No error from fs_open.

Impact
annoyance

Logs and console output
None
Environment (please complete the following information):

  • OS: Linux
  • Toolchain: Zephyr SDK 0.12.1
  • Commit SHA c31ce55

Additional context
Maybe commit fcd392f caused this problem, because the heap size available to the user != the declared size of the heap.

r2r0 added the bug label Jan 22, 2021
ioannisg added the priority: low label Jan 23, 2021
pabigot (Collaborator) commented Jan 26, 2021

#31597 may also be relevant.

pabigot added a commit to pabigot/zephyr that referenced this issue Jan 28, 2021
Disable various tests to focus on validating the file and directory
counts.  This is intended to confirm issue zephyrproject-rtos#31524, but it is not
reproducible on nrf52840dk_nrf52840.

Signed-off-by: Peter Bigot <peter.bigot@nordicsemi.no>
pabigot (Collaborator) commented Jan 28, 2021

I am unable to reproduce this problem. Using an augmented test in https://github.com/pabigot/zephyr/commits/repro/31524 I can open the required number of files, and directories, on nrf52840dk_nrf52840 without error. (Below this was done with the default count; reconfiguring to 3 also worked.)

CONFIG_FS_LITTLEFS_NUM_FILES=4                                                  
opening /sml/A                                                                  
opening /sml/B                                                                  
opening /sml/C                                                                  
opening /sml/D                                                                  
Close and unlink /sml/D                                                         
Close and unlink /sml/C                                                         
Close and unlink /sml/B                                                         
Close and unlink /sml/A                                                         
CONFIG_FS_LITTLEFS_NUM_DIRS=4                                                   
making and opening directory /sml/DA                                            
making and opening directory /sml/DB                                            
making and opening directory /sml/DC                                            
making and opening directory /sml/DD                                            
Close and rmdir /sml/DD                                                         
Close and rmdir /sml/DC                                                         
Close and rmdir /sml/DB                                                         
Close and rmdir /sml/DA                                                         
unmounting /sml

@r2r0 could you please check this on your configuration? It may be specific to a particular type of flash or platform.

r2r0 (Member, Author) commented Jan 29, 2021

The attached test fails on QEMU x86, and a similar test (the same code and the same littlefs config) also fails on our board with an STM32WB55 (external flash over QSPI).
littlefs_max_files.zip

pabigot (Collaborator) commented Jan 29, 2021

@r2r0 thanks, that gave me the clue. Please verify the solution in #31781.

r2r0 (Member, Author) commented Jan 29, 2021

@pabigot Thanks for the quick reply.
It works, but it looks a little weird to me, because I expected littlefs to allocate (in file_cache_pool):
CONFIG_FS_LITTLEFS_NUM_FILES * CONFIG_FS_LITTLEFS_CACHE_SIZE bytes
but now it allocates:
CONFIG_FS_LITTLEFS_NUM_FILES * CONFIG_FS_LITTLEFS_NUM_FILES * CONFIG_FS_LITTLEFS_CACHE_SIZE bytes
Did I miss something?
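
(For concreteness, with illustrative numbers not from the report: with CONFIG_FS_LITTLEFS_NUM_FILES=4 and CONFIG_FS_LITTLEFS_CACHE_SIZE=64, the expectation is 4 * 64 = 256 bytes, but the pool now comes out as 4 * 4 * 64 = 1024 bytes.)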

pabigot (Collaborator) commented Jan 29, 2021

> Did I miss something?

No, apparently I did. I'll take another look later.

pabigot (Collaborator) commented Jan 29, 2021

So here's what's going on.

Originally the file cache used a mem_pool. With that data structure, it appears that metadata associated with allocations was held separately, so the entire requested storage space was available for application use.

In #28611 the k_mem_pool API was deprecated and existing uses were switched to a heap implementation. It appears that in this implementation metadata associated with allocations is taken from the pool itself, which breaks the expectation that you can reserve N blocks of size B and actually allocate N blocks of size B.

I've confirmed this by verifying that the test program above passes at commit 78614bf but breaks at the immediately following commit fcd392f, where the deprecated API was replaced by a heap.

The Kconfig options used to allocate the space for per-file caches are completely obsoleted by the change in data structure.

What I've done in #31781 is deprecate the old mem_pool Kconfig options and add a new option that provides more direct control over the size of the memory region used to allocate file caches. The chain of defaults producing that size has been updated to include the required slop so that allocating N blocks of size B works again.
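
A minimal sketch of the overhead effect described above, using the k_heap API that replaced k_mem_pool in #28611; the block count, block size, and names are illustrative, not the actual littlefs configuration:

```c
/* Sketch of the heap-metadata overhead problem.  Sizes are
 * illustrative, not the real littlefs file cache configuration.
 */
#include <zephyr.h>
#include <sys/printk.h>

#define NUM_BLOCKS 4
#define BLOCK_SIZE 64

/* Naively sized: exactly NUM_BLOCKS blocks of BLOCK_SIZE bytes,
 * with no room for the heap's own per-allocation bookkeeping.
 */
K_HEAP_DEFINE(cache_heap, NUM_BLOCKS * BLOCK_SIZE);

void heap_overhead_demo(void)
{
	void *blocks[NUM_BLOCKS];

	for (int i = 0; i < NUM_BLOCKS; i++) {
		/* Each k_heap_alloc() also consumes chunk metadata from
		 * the heap region itself, so the final allocation is
		 * expected to fail and return NULL.
		 */
		blocks[i] = k_heap_alloc(&cache_heap, BLOCK_SIZE, K_NO_WAIT);
		printk("block %d: %p\n", i, blocks[i]);
	}
}
```

Because every allocation's bookkeeping comes out of the same region, a heap sized to exactly N * B bytes cannot satisfy N allocations of B bytes, which is why the defaults now need the extra slop.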

r2r0 (Member, Author) commented Feb 1, 2021

So my guess about the cause of the problem was correct, nice.
Thanks @pabigot, it works very well.

@r2r0 r2r0 closed this as completed Feb 1, 2021
pabigot (Collaborator) commented Feb 1, 2021

Let's leave this open until the fix is merged.
