
PSRAM Cache Issue stills exist (IDFGH-31) #2892

Closed · ThomasRogg opened this issue Dec 27, 2018 · 178 comments

@ThomasRogg commented Dec 27, 2018

We stumbled upon the fact that the cache issue with PSRAM still exists, even in the newest development environment. This can produce random crashes, even if the code is 100% valid.

This very small example program reproduces the problem easily, at least when compiled with the newest ESP-IDF and toolchain under Mac OS X (we did not try other environments):
https://github.com/neonious/memcrash-esp32/

(As a side note: we noticed this problem while implementing the dlmalloc memory allocator in a fork of ESP-IDF. We worked around the problem (hopefully you can fix it properly), and now have an ESP-IDF with far faster allocations. Take a look at the blog post here: https://www.neonious-basics.com/index.php/2018/12/27/faster-optimized-esp-idf-fork-psram-issues/ )

@Alvin1Zhang changed the title from "PSRAM Cache Issue stills exist" to "[TW#28180] PSRAM Cache Issue stills exist" Dec 28, 2018
@Alvin1Zhang (Collaborator) commented Dec 29, 2018

@neoniousTR Hi, thanks for reporting this. We will look into it and update if there is any feedback. There is also a topic about this issue on our forum at http://bbs.esp32.com/viewtopic.php?f=13&t=8628&sid=1acc8bd897e72cf450ad9eb71491d732. Thanks.

@ThomasRogg (Author) commented Jan 26, 2019

We updated the example project at https://github.com/neonious/memcrash-esp32
It is now leaner, more to the point, and most importantly, compiles out of the box.

We think this problem is urgent to fix, as random crashes can occur for anyone using the PSRAM of the ESP32.

There seem to be only two workarounds:

  • Use only the first 2 MB of the 4 MB of PSRAM (a big penalty)
  • End every function which stores to PSRAM with a memw instruction (slow; see the sketch below). nops do not help.

Please take a look at the project, and hopefully you have a better idea.
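
A minimal sketch of what the second workaround looks like at the source level, assuming inline Xtensa assembly; the buffer name is hypothetical, and this is hand-written illustration rather than the toolchain-generated sequence:

    #include <stdint.h>

    static uint32_t *psram_buf;  /* assume this points into external PSRAM */

    /* End the store sequence with a memw so the write is forced out before
       the function returns. nops in place of the memw do not help. */
    static void psram_store(uint32_t idx, uint32_t val)
    {
        psram_buf[idx] = val;
        __asm__ __volatile__("memw" ::: "memory");
    }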

@Spritetm (Member) commented Jan 29, 2019

Fyi, we're working on this. For what it's worth, it seems to be caused by an interrupt (in your example, the FreeRTOS tick interrupt) firing while some cache activity is going on. We have our digital team running simulations to see what exactly is going on in the hardware; we hope to create a better workaround than the memw solution from that.

@ThomasRogg (Author) commented Jan 29, 2019

Good that you can reproduce this.
Interrupts are a good explanation for why this happens only randomly.
Hoping for the best.

@markwj commented Jan 30, 2019

We seem to be seeing this as std::string memory corruption (all zeros, on a 4-byte boundary).

In our case, disabling the top 2MB of SPIRAM didn't seem to work, but pre-allocating 2MB (which we then never use) seemed to work around the problem. Our code runs primarily on core #1.

This is impacting us quite badly. Lots of random corruptions and crashes with devices in the field.
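
For reference, our pre-allocation is essentially just the standard ESP-IDF heap_caps call; a minimal sketch, with a hypothetical variable name:

    #include "esp_heap_caps.h"

    /* Reserve 2MB of SPIRAM at startup and never touch it, so that real
       allocations land in the other half. Intentionally never freed. */
    static void *spiram_guard;

    void app_main(void)
    {
        spiram_guard = heap_caps_malloc(2 * 1024 * 1024, MALLOC_CAP_SPIRAM);
        /* ... rest of the application ... */
    }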

@ThomasRogg (Author) commented Jan 30, 2019

Maybe whether the top or bottom of the RAM works depends on the core used.

@dexterbg commented Jan 30, 2019

Confirmed: running our test project from #3006 with that 2 MB allocation and also starting the test task on core 0 shows the corruptions again. It seems core 0 can only work reliably with the lower 2 MB and core 1 only with the higher 2 MB.
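
The test task is simply pinned with the standard ESP-IDF API; a minimal sketch, with test_task standing in for the actual routine from #3006:

    #include "freertos/FreeRTOS.h"
    #include "freertos/task.h"

    static void test_task(void *arg)   /* placeholder for the real test loop */
    {
        for (;;) { /* hammer PSRAM here */ }
    }

    void start_test_on_core(int core)
    {
        /* the last argument selects the CPU: 0 or 1 */
        xTaskCreatePinnedToCore(test_task, "psram_test", 4096, NULL, 5, NULL, core);
    }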

@ThomasRogg (Author) commented Feb 1, 2019

@Spritetm or @Alvin1Zhang
As this issue does not happen in single-core mode: do you know whether the original PSRAM cache issue, the one fixed by the workaround flag that adds the many nops and memws, also occurs only in dual-core mode?

If so, we will try to switch low.js to single-core mode. This might even be faster in the end, because the JavaScript engine itself is single-threaded anyhow and carries most of the load.

Also, how is progress going? I'd think the best chances are to get this fixed by modifying the interrupt handlers or the cache fetch/writeback routines (are they part of the ROM?).

@xbary commented Feb 11, 2019

Hello, I wanted to add an error I observed in my application using PSRAM: a random error while querying the amount of free PSRAM memory. I check the amount of free PSRAM in my main loop; occasionally it reported, for example, only 16 bytes of free memory, but on the next check it was back to ~4 MB.

In my opinion, this must have been an erroneous random read from PSRAM.
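
For context, the check is essentially the standard ESP-IDF call; a minimal sketch:

    #include <stdio.h>
    #include "esp_heap_caps.h"

    void check_psram(void)
    {
        size_t free_psram = heap_caps_get_free_size(MALLOC_CAP_SPIRAM);
        printf("free PSRAM: %u bytes\n", (unsigned) free_psram);
    }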

@ThomasRogg (Author) commented Feb 19, 2019

Our current status:

Cache loads/writebacks do not seem to be interrupt-driven; they might be 100% hardware-controlled.

Adding memw to the interrupt handlers does not change anything.

Currently we believe Dual Core + PSRAM is a broken combination, so we will completely switch to unicore now.

Please answer: do you know whether the original PSRAM cache issue also exists in unicore mode? It would be great if we could get rid of the nops and memws once and for all...

@xbary commented Feb 21, 2019

As an experiment, I rewrote the String class from the Arduino project to use PSRAM: I changed the name and changed the realloc call in the changeBuffer function. It then turned out that my application would irregularly show 0x00 in one cell.
Here is the modified String class: https://github.com/xbary/xb_StringPSRAM
If you swap the String class for this one as part of your tests, you may be able to reproduce the error repeatably.

ThomasRogg added a commit to neonious/lowjs that referenced this issue Feb 22, 2019
@ThomasRogg (Author) commented Feb 22, 2019

I have to confirm that the original PSRAM workaround is still required in unicore mode.

So Workaround + Unicore is the only combination which works reliably with PSRAM. If I am wrong, I hope somebody will post; otherwise we have to take this as fact...
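
In sdkconfig terms, that combination is (assuming the standard option names):

    CONFIG_FREERTOS_UNICORE=y
    CONFIG_SPIRAM_CACHE_WORKAROUND=y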

@me21 commented Feb 23, 2019

Can a dual-core chip be switched to unicore mode?
Does this error manifest itself in the Arduino framework? As far as I know, the Arduino task is pinned to core 0, so it can effectively be viewed as unicore. Am I right?

@ThomasRogg (Author) commented Feb 23, 2019

@Spritetm (Member) commented Feb 26, 2019

FWIW, we have a tentative solution for this; the existing workaround does actually seem to work, but doesn't take calls/returns into account properly. We'll ship a toolchain with improved workaround code soon, but we want to have this fairly well tested so no other edge cases sneak past us. I'll see if I can post a preliminary patch as soon as I have something halfway stable.

@markwj commented Mar 4, 2019

Do you have any idea of the schedule for this, or could you get us a pre-release toolchain?

This is impacting us quite badly. The 2MB pre-allocation solves the problem for our code, but just shifts it to WiFi running on the first core (which now experiences random errors and throughput problems).

@xbary commented Mar 4, 2019

I confirm: the error still occurs at random moments, and sometimes the device even hangs completely.

@projectgus changed the title from "[TW#28180] PSRAM Cache Issue stills exist" to "PSRAM Cache Issue stills exist (IDFGH-31)" Mar 12, 2019
@markwj commented Apr 25, 2019

Do you have any idea of the schedule for this, or could you get us a pre-release toolchain?

@dexterbg commented May 25, 2019

@Spritetm We do appreciate your efforts in making sure your patch is perfect. But meanwhile our system has to bear a huge performance hit from the workaround, while stability is still impacted by the bug. We're more than willing to help beta test your patch by using it in our project. Please do a pre-release or share an update on the status. Thanks!

@Patrik-Berglund commented May 26, 2019

I also think an update is due; we are waiting to see whether you are able to fix this bug, or whether it makes the PSRAM feature unusable.

We need more RAM than is available internally in the ESP32, so this is a deal breaker for our product.

@negativekelvin (Contributor) commented May 26, 2019

Just wondering: if the original workaround should work in this case, why does forcing nops not resolve it?

400d4b3c:	1047a5        	call8	400e4fb8 <crash_set_both>
400d4b3f:	f03d      	nop.n
400d4b41:	f03d      	nop.n
400d4b43:	f03d      	nop.n
400d4b45:	f03d      	nop.n
400d4b47:	0228      	l32i.n	a2, a2, 0
400e4fb8 <crash_set_both>:
400e4fb8:	004136        	entry	a1, 32
400e4fbb:	0249      	s32i.n	a4, a2, 0
400e4fbd:	0349      	s32i.n	a4, a3, 0
400e4fbf:	f03d      	nop.n
400e4fc1:	f03d      	nop.n
400e4fc3:	f03d      	nop.n
400e4fc5:	f03d      	nop.n
400e4fc7:	f01d      	retw.n
400e4fc9:	000000        	ill

Also I noticed the workaround will add the nops even when there is already a memw barrier.

400d4e86:	03a9      	s32i.n	a10, a3, 0
400d4e88:	0020c0        	memw
400d4e8b:	01a9      	s32i.n	a10, a1, 0
400d4e8d:	f03d      	nop.n
400d4e8f:	f03d      	nop.n
400d4e91:	f03d      	nop.n
400d4e93:	f03d      	nop.n
400d4e95:	0020c0        	memw
400d4e98:	0138      	l32i.n	a3, a1, 0

@Spritetm (Member) commented May 29, 2019

From what I can see, the load-store inversion doesn't occur there exactly... the issue has something to do with a cache miss around that time that has delayed effects later. Because a cache miss takes a while to resolve, you can fix it with nops, but you'd need to put a gazillion of them there.

I also noticed the nop/memw interaction... I'll see if I can get rid of that as well, insofar as gcc marks it. (As in: I can probably detect volatile accesses that cause an implicit memw, but a literal asm("memw") is harder to spot.)
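
To illustrate the two cases with a hypothetical snippet: the first store is serialized by the compiler itself, while the second barrier is opaque to it:

    #include <stdint.h>

    void two_barrier_forms(volatile uint32_t *vp, uint32_t *p)
    {
        *vp = 1;                                   /* volatile store: gcc emits an implicit memw */
        p[0] = 2;
        __asm__ __volatile__("memw" ::: "memory"); /* literal barrier: harder for the pass to spot */
    }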

@negativekelvin (Contributor) commented May 29, 2019

Ok thanks. Other things I noticed when playing with the memcrash-esp32 example:

  • If running on core 0, the error will occur with mem2 (lower) in HIGH-LOW mode
  • If running on core 1, the error will occur with mem1 (upper) in HIGH-LOW mode
  • If running on core 0, the error will occur with both mem1 & mem2 even in EVEN-ODD mode
  • If running on core 1, the error will occur with both mem1 & mem2 even in EVEN-ODD mode
  • The error does not seem to happen in NORMAL mode on either core (I don't know if this mode is supported or just giving a false result)

Assuming NORMAL mode is a valid workaround with a performance trade-off, how will it compare to the performance of the planned workaround?

@negativekelvin (Contributor) commented May 30, 2019

More info: the memw in the example actually does not prevent the error when the routine runs in parallel on both cores. It is much more infrequent, but it still happens.

Normal mode does have a performance cost: around 24% fewer tries/ms with the example on both cores, but no errors.

@ThomasRogg (Author) commented Jun 1, 2019

Normal mode has cache coherency issues according to the documentation, so it is not an option.

@stuarthatchbaby commented Aug 4, 2020

@igrr do you have an ETA for the toolchain release?

@igrr (Member) commented Aug 5, 2020

@stuarthatchbaby Which IDF version are you using at the moment? If it is v4.0 or later, we recommend using the 2020r1 release for now. For v3.3, we can prepare a release early next week.

@stuarthatchbaby commented Aug 5, 2020

@igrr We're on 4.0.1 now, but seeing possible VFS and select errors per #5423. 2020r2 solves these issues in our testing, but has the PSRAM regression mentioned above and causes the tiT aborts referenced.

@igrr (Member) commented Aug 6, 2020

I see. In that case I suggest temporarily going back to the 2020r1 release, as that is going to fix both the issue with 8-bit load/store (tiT abort) and the issue with 16-bit load/store (VFS/select errors).

@dexterbg commented Aug 6, 2020

@igrr We're using crosstool-ng-1.22.0-96 currently and have had a few strange issues we haven't been able to track down yet. Would you recommend switching back to the crosstool-ng-1.22.0-98 pre-release until the fix is available?

@wanglei-esp (Contributor) commented Aug 19, 2020

@igrr Hi, I have tested three toolchains: 2020-r2 (which includes szmodz's commits), 2020-r1 (up to Jeroen's commit), and 2020-r1.5 (like 2020-r2, but without szmodz's commits). The test results show that 2020-r2 reproduces the PSRAM bug, while 2020-r1 and 2020-r1.5 both solve it. As for the bug szmodz mentioned, I cannot reproduce it. So I think we can now switch to 2020-r1.5.

@AxelLin (Contributor) commented Aug 19, 2020

Hi @wanglei-esp,
For most users here, it is difficult to understand your reply without knowing the commits you mentioned.
Could you provide links to szmodz's commits?

@wanglei-esp (Contributor) commented Aug 19, 2020

@AxelLin, sorry for that. szmodz's commits are espressif/gcc@f015747, espressif/gcc@71e4206, and espressif/gcc@1a58a05.
I think these three commits introduced a regression in the PSRAM fixes, so I have built a toolchain based on 2020-r2 with these three commits reverted. I have tested it, and it solves the PSRAM issue.

@igrr (Member) commented Sep 11, 2020

The 2020r3 toolchain has been released, with the above-mentioned commits reverted. Release notes are here: https://github.com/espressif/crosstool-NG/releases/tag/esp-2020r3.

The release/v4.0 branch has been updated to use the new toolchain: 6093407.

Similar updates to master, v4.1, v4.2, and v3.3 are in progress.

@igrr (Member) commented Sep 22, 2020

Toolchain updated in master branch with 439f4e4.

@igrr (Member) commented Sep 22, 2020

Toolchain updated in release/v4.1 branch with c7ba54e.

@Curclamas (Contributor) commented Oct 14, 2020

Hello @igrr, we're working with v3.3 but also experience issues related to PSRAM (interestingly enough, both with and without a PSRAM chip attached). It only happens in release mode; a debug build works just fine.
Could this be the same issue? What is the roadmap for updating the toolchain/fix for v3.3?

@AxelLin (Contributor) commented Nov 8, 2020

v4.2: 9f0c564
v3.3: 81da2ba

@Curclamas, does this fix the issue in v3.3?

@tmihovm2m commented Nov 11, 2020

@AxelLin I started using PSRAM a couple of days ago and started having weird issues with MQTT (from the IDF) on version v3.3. I tried updating the compiler to the one from the linked commit, but that didn't help... after that I tried forcing the MQTT allocation to use DMA-capable RAM, and the issue disappeared. So my guess is the issue is not fixed :(
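
For reference, forcing a buffer into DMA-capable internal RAM (never PSRAM) is just a heap_caps call; a minimal sketch with a hypothetical helper name:

    #include <stddef.h>
    #include "esp_heap_caps.h"

    /* MALLOC_CAP_DMA guarantees internal, DMA-capable RAM, i.e. never PSRAM. */
    void *alloc_internal(size_t len)
    {
        return heap_caps_malloc(len, MALLOC_CAP_DMA);
    }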

@dexterbg commented Nov 11, 2020

@tmihovm2m Did you enable CONFIG_SPIRAM_CACHE_WORKAROUND and do a full rebuild?

@tmihovm2m commented Nov 13, 2020

@dexterbg Hmm, I thought it was enabled, but I guess I accidentally reverted the change when testing with the updated toolchain.
Sorry guys :(

@igrr (Member) commented Nov 13, 2020

Given that the toolchain has been updated in all currently maintained releases, I will close this issue. Please open a new issue if you are seeing a PSRAM-related problem.
Thanks everyone for all the help reproducing the issue and for the great deal of patience while we were releasing the fixes.

@igrr closed this Nov 13, 2020
@vonnieda (Contributor) commented Nov 13, 2020

@igrr It seems premature to close this when the released toolchain contains a critical bug, as described at https://github.com/espressif/esp-idf/releases/tag/v3.3.4. I will note that the Known Issue says "difficult to reproduce", but in my use case, which is heavy WiFi and BLE, it crashes within minutes and usually under a minute. I have had to revert to a prior revision to keep my app stable.

@dexterbg commented Nov 13, 2020

@vonnieda Do you still see this with toolchain 1.22.0-97-gc752ad5?

@vonnieda (Contributor) commented Nov 13, 2020

@dexterbg I hadn't seen 97 yet; I will try it next week. The commit message, though, says "Revert a part of PSRAM workaround because of regression"; so does this mean the PSRAM issue is still not fixed in this version?

@dexterbg commented Nov 14, 2020

No, that means toolchain 97-gc752ad5 reverts the regression introduced by the fix in toolchain 96-g2852398 (see above). The fix is now supposed to be fully functional.

@igrr (Member) commented Nov 15, 2020

Sorry that I didn't make it clear: all release branches now contain the fix, i.e. a commit which updates the toolchain version. These commits are:

  • master: 439f4e4.
  • release/v4.2: 9f0c564. This commit will be part of v4.2-rc, due to be released soon.
  • release/v4.1: c7ba54e. This commit will be part of the next v4.1.1 bugfix release.
  • release/v4.0: 6093407. This commit is part of v4.0.2 release.
  • release/v3.3: 81da2ba. This commit will be part of the next v3.3.5 bugfix release.

The toolchain versions which contain the fix are esp-2020r3 based on GCC 8.4 (used in 4.x releases), and 1.22.0-97-gc752ad5 based on GCC 5.2.0 (used in 3.3 release).

@Curclamas (Contributor) commented Dec 18, 2020

@igrr do you happen to have a timeline for when the v3.3.5 bugfix release is scheduled?

@igrr (Member) commented Dec 18, 2020

At the moment QA is testing two bugfix releases: 4.1.1 and 3.3.5. Testing will be finished around Dec 25; if no issues are found, we will proceed with the releases. 4.1.1 currently has higher priority for us, so if issues are found we will work on fixing them in release/v4.1 first. Early January is probably viable for the release. We'll try to keep you updated (cc @Alvin1Zhang).

@etherfi commented Jan 5, 2021

So, this has been a problem FOR TWO YEARS. How certain are you that the early January release you speak of will actually fix the problem permanently?

It's early January now, by the way.

@igrr (Member) commented Jan 5, 2021

Hi @etherfi, some issues (unrelated to PSRAM) were found while testing the v4.1.1 release candidate, so the v3.3.5 release is still pending while we work on fixing the issues in v4.1.1. To the best of our knowledge, no new PSRAM issue reports have appeared since we switched to this version of the compiler. That said, for new designs it is recommended to use ESP32 silicon revision 3, which fixes the PSRAM cache issue in hardware.

@etherfi commented Jan 9, 2021

Thanks!
