Implement atomic writes for EEPROM emulation #871
This is a rework of the EEPROM emulation, which stores individual bytes in 2 pages of Flash memory, to implement atomic multi-byte writes and improve performance and testability, as discussed in #803.
Core uses 2 pages of 1024 bytes and can store up to 255 bytes of EEPROM.
Photon and Electron use 1 page of 16 KB and 1 page of 64 KB and can store up to 4095 bytes of EEPROM.
Single-byte or multi-byte reads and writes with a freshly erased EEPROM page take less than 0.1 ms. Reads and writes with a nearly full EEPROM page take 3 ms, regardless of the number of bytes read/written. Previously these operations took 3 ms per byte read/written.
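For readers new to the approach, those numbers follow from the log-structured layout: each write appends a small record, and a read scans for the newest record with a matching index. A minimal in-RAM model for illustration (names and record layout are assumptions, not the firmware's actual code):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

struct Record { uint16_t index; uint8_t value; };

class EepromModel {
public:
    // Writes never erase: they append a record, so a write into a freshly
    // erased page is cheap regardless of how many bytes are involved.
    void put(uint16_t index, uint8_t value) {
        log_.push_back({index, value});
    }
    // Reads scan from the newest record backwards; the first match holds
    // the latest value. Cost grows with how full the page is, which is why
    // a nearly full page costs ~3 ms whether 1 byte or many are read.
    bool get(uint16_t index, uint8_t& out) const {
        for (auto it = log_.rbegin(); it != log_.rend(); ++it) {
            if (it->index == index) { out = it->value; return true; }
        }
        return false;  // cell never written
    }
    std::size_t records() const { return log_.size(); }
private:
    std::vector<Record> log_;
};
```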
On the Photon/Electron the page swap time went from 1400 ms to 500 ms. If the alternate page is already erased by using
Because the full 64 KB page is used on the Photon/Electron, flash life is 2.5x that of the previous algorithm (20475 byte writes per 2 erases instead of 8190 byte writes per 2 erases).
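The 2.5x figure can be reconstructed from the page sizes alone, assuming (as the numbers suggest) that the previous algorithm truncated the 64 KB page to mirror the 16 KB one; a back-of-the-envelope sketch, not firmware code:

```cpp
#include <cstddef>

constexpr std::size_t kSmallPage = 16 * 1024;  // 16 KB flash page
constexpr std::size_t kLargePage = 64 * 1024;  // 64 KB flash page

// Usable bytes per erase cycle: assumed the old algorithm only used 16 KB
// of the large page, while this PR uses the full 64 KB.
constexpr std::size_t oldUsable = kSmallPage + kSmallPage;  // 32 KB
constexpr std::size_t newUsable = kSmallPage + kLargePage;  // 80 KB

// Writes per erase cycle scale with the usable area, so the flash-life
// ratio is 80 KB / 32 KB = 2.5 (20475 / 8190 is also a 2.5x ratio).
constexpr double flashLifeRatio =
    static_cast<double>(newUsable) / static_cast<double>(oldUsable);
```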
Amazing work! This really does tick all the boxes bar none.
We'd like to have this as part of an automated test suite. Off-device testing is already a done deal - how about on-device testing?
The ideal result would be that we can run a test suite and be confident the new code functions as expected. I'd like to hear your thoughts on on-device testing to exercise and prove the new EEPROM code!
There are existing test cases for the EEPROM emulation. They look like a good series of integration tests covering reads and writes of the whole range (they did catch an off-by-one error the first time I ran them against my new implementation), so I don't think it's necessary to add on-device tests beyond these.
One point of discussion: the mechanism that makes page swaps O(n) in runtime uses O(n) memory. There is a
This can be reduced by decreasing the EEPROM capacity (it used to be 2047 bytes, so it would be OK for this PR to cap the EEPROM capacity at half the theoretical capacity, i.e. 2047) and by making the
In case the
The other alternative is to put back the O(n^2)-runtime, O(1)-memory page swap, but this would make the worst-case page swap 12 seconds (4095 used indexes x 3 ms to read 1 index) plus page erase time (500 ms). A typical swap would be around 300 ms (100 indexes x 3 ms to read 1 index) plus page erase time (500 ms).
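One way to picture the O(n)-runtime swap and its memory cost (structure assumed for illustration; this is not the firmware's actual code, and the real bookkeeping may be a vector of indices rather than a flag array): a single backward pass over the old page, with per-index bookkeeping so each live cell's latest value is copied exactly once.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

struct Record { uint16_t index; uint8_t value; };

constexpr std::size_t kCapacity = 4096;  // possible cell indices (Photon/Electron)

std::vector<Record> swapPage(const std::vector<Record>& oldLog) {
    std::vector<bool> seen(kCapacity, false);  // the O(n) memory being debated
    std::vector<Record> newLog;
    // Newest records sit at the end of the log, so scan backwards: the
    // first record found for an index holds its latest value.
    for (auto it = oldLog.rbegin(); it != oldLog.rend(); ++it) {
        if (!seen[it->index]) {
            seen[it->index] = true;
            newLog.push_back(*it);  // copy the latest value to the new page
        }
    }
    return newLog;
}
```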
I don't believe `vector` fails gracefully, since it expects exceptions in response to allocation errors, but I've not studied the code so I could be mistaken. The best way to know is to write a small integration test to find out.
Reducing the capacity to 2k and switching to uint16_t indices sounds like a fair balance between features and resources used.
Longer term, since the indices stored in ram are "merely" a cache - fully recoverable from data in the EEPROM, it could later be replaced by various caching strategies, including no cache, MRU cache, soft cache (reclaimable when memory is exhausted) etc. to allow the app developer to set the balance between performance and memory use.
I had another idea: cluster the reads to a maximum number of indices (like 128) and loop until there are no new indices. This performs the copy in one go for simple cases (a few EEPROM cells used), constrains the size of the vector, and bounds the worst case at 3 ms x 32 loops vs 3 ms x 4096 cells for individual cell reads.
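The clustering idea could look roughly like this (a sketch under assumptions; `kCluster`, `clusteredSwap`, and the watermark scheme are illustrative, not the PR's code): each pass over the old page buffers at most 128 of the smallest not-yet-migrated indices, so memory stays bounded while simple cases finish in a single pass.

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <vector>

struct Record { uint16_t index; uint8_t value; };

constexpr std::size_t kCluster = 128;  // max indices buffered per pass

std::vector<Record> clusteredSwap(const std::vector<Record>& oldLog) {
    std::vector<Record> newLog;
    int watermark = -1;                 // highest index migrated so far
    for (;;) {
        std::vector<Record> batch;      // sorted by index, size <= kCluster
        for (const Record& r : oldLog) {  // one full pass over the old page
            if (static_cast<int>(r.index) <= watermark) continue;
            auto pos = std::lower_bound(
                batch.begin(), batch.end(), r.index,
                [](const Record& a, uint16_t i) { return a.index < i; });
            if (pos != batch.end() && pos->index == r.index) {
                pos->value = r.value;   // later record wins: newest value
            } else {
                batch.insert(pos, r);
                if (batch.size() > kCluster) {
                    batch.pop_back();   // keep only the smallest indices;
                }                       // evicted ones return next pass
            }
        }
        if (batch.empty()) break;       // no new indices: swap complete
        newLog.insert(newLog.end(), batch.begin(), batch.end());
        watermark = batch.back().index; // batch is sorted ascending
    }
    return newLog;
}
```

With a 4095-cell capacity this caps the work at ceil(4095 / 128) = 32 full-page passes, matching the 3 ms x 32 worst case above.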
I rebased to
Unit and on-device tests pass. This is ready for 0.5.0.
@m-mcgowan note that the cache of indices is only needed during the page swap. During normal operation, get/put always do a linear search through the EEPROM. An index cache during normal operation would use too much RAM for the benefits it would provide, IMHO.