
nvs_set_blob() hangs the MCU when ~91% of the total space of a 512KB data-nvs partition is used. #2313

Closed
pantaluna opened this issue Aug 20, 2018 · 1 comment


@pantaluna

Environment

  • Development Kit: [Adafruit HUZZAH32]
  • Core (if using chip or module): [ESP-WROOM32]
  • IDF version (commit id): ESP-IDF v3.2-dev-518-g020ade65 Aug18,2018
  • Development Env: [Eclipse]
  • Operating System: [Windows]
  • Power Supply: [USB]

Problem Description

nvs_set_blob() hangs the MCU when ~91% of the total space of a 512KB data-nvs partition is used.

Hanging means that the call to nvs_set_blob() is executing (according to the logs) but it never returns.

The Memory Stats just before the failing method call:

I (51352) mjd:   ESP free HEAP space: 263932 bytes | FreeRTOS free STACK space (calling task): 20992 bytes

The NVS Stats just before the failing method call:

myapp: NVS STATS partition nvsmeteohub: used_entries=14734  free_entries=1394  total_entries=16128

In this configuration the hangup always happens when writing the 509th key-value pair to NVS.

The data-nvs partition is 512KB and named "nvsmeteohub". The blob key length is always 11 characters, and the blob value length is always 860 bytes. Each blob value consumes 29 entries in NVS.

@note This reproducible case runs a loop that writes about 550 blob key-value pairs (knowing it will fail at the 509th). In production there is of course no such loop; the application adds only one data record per wakeup cycle and would only reach the 509th record after a few weeks, in exceptional situations.

Expected Behavior

nvs_set_blob() should succeed until the partition namespace no longer has enough space/entries for the operation, and then return the existing error ESP_ERR_NVS_NOT_ENOUGH_SPACE ("there is not enough space in the underlying storage to save the value").

Actual Behavior

The call to nvs_set_blob() hangs the system and never returns. No CPU watchdog messages are printed either.

Workaround

None.

Steps to reproduce

@Important Erase the flash first.

No wiring needed.

cd ~
git clone --recursive https://github.com/pantaluna/support_esp_espidf2.git
cd support_esp_espidf2
make erase_flash; make flash monitor

Code to reproduce this issue

https://github.com/pantaluna/support_esp_espidf2

Debug Logs

...
...
***SECTION: Save (1) data record struct to flash***
D (51242) mjd: mjd_log_memory_statistics()
D (51252) mjd: mjd_get_memory_statistics()
I (51252) mjd:   ESP free HEAP space: 263932 bytes | FreeRTOS free STACK space (calling task): 20992 bytes
D (51262) myapp: exec nvs_open_from_partition()
D (51272) nvs: nvs_open_from_partition uploadqueue 1
D (51272) myapp: _log_nvs_stats()
I (51282) myapp: NVS STATS partition nvsmeteohub: used_entries=14705  free_entries=1423  total_entries=16128
D (51292) nvs: nvs_get lastrecordid 4
D (51292) myapp: NVS key=lastrecordid val=507
I (51292) myapp: _save_data_record_to_flash(): Storing the data record in the NVS nvs_key=record00508 ...
D (51302) nvs: nvs_set_blob record00508 860
D (51322) nvs: nvs_set lastrecordid 4 508
D (51322) myapp: exec nvs_commit()
D (51322) myapp: _log_nvs_stats()
I (51322) myapp: NVS STATS partition nvsmeteohub: used_entries=14734  free_entries=1394  total_entries=16128
D (51332) nvs: nvs_close 508
D (51332) myapp: _save_data_record_to_flash()

I (51342) myapp: ***SECTION: Save (1) data record struct to flash***
D (51342) mjd: mjd_log_memory_statistics()
D (51352) mjd: mjd_get_memory_statistics()
I (51352) mjd:   ESP free HEAP space: 263932 bytes | FreeRTOS free STACK space (calling task): 20992 bytes
D (51362) myapp: exec nvs_open_from_partition()
D (51372) nvs: nvs_open_from_partition uploadqueue 1
D (51372) myapp: _log_nvs_stats()
I (51382) myapp: NVS STATS partition nvsmeteohub: used_entries=14734  free_entries=1394  total_entries=16128
D (51392) nvs: nvs_get lastrecordid 4
D (51392) myapp: NVS key=lastrecordid val=508
I (51392) myapp: _save_data_record_to_flash(): Storing the data record in the NVS nvs_key=record00509 ...
D (51402) nvs: nvs_set_blob record00509 860
@sagb2015
Contributor

@pantaluna Thanks for reporting this. The fix will be available soon.

@igrr igrr closed this as completed in 911f96a Aug 24, 2018
catalinio pushed a commit to catalinio/pycom-esp-idf that referenced this issue Jun 28, 2019
Marking a page full does not skip it from page selection process and the
same page might get returned if there is no other page with more unused
entries. Added a check for the same while storing blobs.

Fixes: espressif/esp-idf#2313