NVS should allow overwriting existing index even if there's no room to keep the old value #51019
@maximevince, I understand your request. The reason NVS does not allow this is that you might lose data if there is a power-down during the proposed operation, as you correctly state. A second problem is that the new write might take up more space and no longer fit, leaving the NVS system without data. The workaround you propose (first deleting, then writing) is exactly the method that should be used, and the decision to do this is down to the user.
@Laczen, thanks for your quick response. I am wondering, however, if there isn't a way to avoid the potential loss of data.
Yes, but then there would be the second problem: what if larger data is written, is there enough space? And there is another problem: suppose the old data is not in the last sector. The new data is written, and then the sector has insufficient space to copy the data from the last sector; this would be a GC failure.
@maximevince, can this be closed? There already is a solution (first erase, then write).
I would like to elaborate a little more, if that's okay.
For the problem of the bigger size: that's the same in both cases, no? If you want to write a bigger entry and there's no room for it, the action should fail. That seems like normal behavior to me.
For the problem of the bigger size: it needs to decide before it writes, not after. BTW: enjoy your Sunday, from a sunny Koksijde.
Hi, P.S.: Haha, enjoy the sea! Greetings from an equally sunny Hasselt ;)
@maximevince, if you find a solution that would guarantee no data is lost, a PR is welcome. If you have other questions or proposals for NVS, do not hesitate to add them as an issue/PR.
Is your enhancement proposal related to a problem? Please describe.
When the NVS pages are full (i.e. the item of the size we're trying to write doesn't fit on the existing pages anymore),
but we want to overwrite an existing data-item (which might be of smaller or equal size),
NVS starts moving (through the GC) all id-data pairs on all pages to a new page, then loops back to the first page and finally bails out without a successful update, because it detected the loop condition: https://github.com/zephyrproject-rtos/zephyr/blob/main/subsys/fs/nvs/nvs.c#L1073
See below for an example log.
Describe the solution you'd like
I'd like to be able to update the data item at the existing index with a smaller or equally sized data item.
IMO, it should be possible by doing it this way:
Describe alternatives you've considered
As an alternative approach, you could manually delete the id/data pair first, then write the new item, in which case the operation (obviously) succeeds. However, it leaves a tiny window where data could be lost in case of a power-off, reset, ...
Additional context
Here is the log of what happens when trying to replace a big chunk (e.g. 1800 bytes) when 5 such chunks are already present:
So we see that all existing entries are shuffled around, which causes useless page writes and erases, only to end in a failure to write data to idx 0.
What I would like to see is that index 0 is not garbage collected / moved around; instead, the new data entry for index 0 is written directly to its new destination sector.
I've created a quick hack of nvs.c to demonstrate that this might work, but wanted to consult here about any possible side effects or design considerations I might not be aware of. See the patch here: https://gist.github.com/maximevince/2e3b83f56cc0374bb01cf6e67fe5139d
This results in the following log for the same test case as above:
So now index 0 is not moved before the new index 0 entry is written, and the whole move-everything-around loop isn't triggered either.
If someone with in-depth knowledge of NVS can confirm that this could indeed be a viable improvement, I can turn it into a proper PR.
Looking forward to your feedback.