
Using ActiveLowChipSelect with queued transfers causes a hang #15

Closed
macaba opened this issue Mar 23, 2016 · 4 comments

@macaba

macaba commented Mar 23, 2016

Hello,

I have put a GitHub Gist of a sketch that reproduces the issue here.

Essentially I want a continuous stream of SPI data from a large frame buffer. My buffer is 32768 bytes long, so I send it as two separate 16384-byte transfers due to the 32767-byte limit.

If I use
trxA = DmaSpi::Transfer((uint8_t*)buffer, DMASIZE, nullptr);
trxB = DmaSpi::Transfer((uint8_t*)buffer + DMASIZE, DMASIZE, nullptr);

then it works fine, but when I use
trxA = DmaSpi::Transfer((uint8_t*)buffer, DMASIZE, nullptr, 0, &cs);
trxB = DmaSpi::Transfer((uint8_t*)buffer + DMASIZE, DMASIZE, nullptr, 0, &cs);

then the Teensy 3.1/3.2 will hang at some point (sometimes almost immediately, other times after a few minutes). This smells like a race condition, and I'm probably not using the library correctly either. I suspect the source of the hang is the continuous checking of the DMASPI0.busy() flag and/or Transfer.busy() flag (checking either will eventually result in a hang).
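
A minimal sketch of that polling pattern, paraphrasing (not quoting) the gist and reusing trxA, trxB and DMASPI0 from the snippets above:

DMASPI0.registerTransfer(trxA);
DMASPI0.registerTransfer(trxB);
// spin until both halves of the frame are done before queueing the next frame
while (trxA.busy() || trxB.busy())
{
}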

As an aside:
The main reason for using the ActiveLowChipSelect object is to set the SPISettings to something other than the default. I would ideally like the CS line to only go low before trxA and high after trxB, but my device doesn't seem to mind the mid-transfer CS change.
A DmaSpi::transferContinued method could be nice for streaming data larger than 32767 bytes without toggling the CS line?
I've also seen a Callback pull request (#14) that might be a nice clean way for me to initiate a new transfer instead of checking various busy() flags.

@crteensy
Owner

https://gist.github.com/macaba/b753612db3b5b1b975b8#file-gistfile1-txt-L42

You're allocating a chip select object on the stack, which will be destroyed when transfer() returns. The data that made up the CS object is only there until that stack memory is overwritten by some other routine, and only then will using the CS object fail. That can happen at any point in time. Make it static, or move it into another context where it can survive.
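
A minimal sketch of the failure mode and the fix, reusing buffer, DMASIZE, trxA and trxB from the snippets above; the pin number and SPISettings are placeholders, and the ActiveLowChipSelect constructor arguments are assumed to match the gist:

#include <DmaSpi.h> // DMASPI0, DmaSpi::Transfer, ActiveLowChipSelect

// BAD: cs has automatic storage. registerTransfer() only queues the transfer and
// returns immediately, so cs is destroyed while the DMA machinery still holds a
// pointer to it; the hang appears whenever that stack memory gets reused.
void queueFrameBroken()
{
  ActiveLowChipSelect cs(10, SPISettings(20000000, MSBFIRST, SPI_MODE0));
  trxA = DmaSpi::Transfer((uint8_t*)buffer, DMASIZE, nullptr, 0, &cs);
  trxB = DmaSpi::Transfer((uint8_t*)buffer + DMASIZE, DMASIZE, nullptr, 0, &cs);
  DMASPI0.registerTransfer(trxA);
  DMASPI0.registerTransfer(trxB);
}

// GOOD: give the chip select object static storage duration so it outlives every
// queued transfer that points at it.
static ActiveLowChipSelect cs(10, SPISettings(20000000, MSBFIRST, SPI_MODE0));

void queueFrameFixed()
{
  trxA = DmaSpi::Transfer((uint8_t*)buffer, DMASIZE, nullptr, 0, &cs);
  trxB = DmaSpi::Transfer((uint8_t*)buffer + DMASIZE, DMASIZE, nullptr, 0, &cs);
  DMASPI0.registerTransfer(trxA);
  DMASPI0.registerTransfer(trxB);
}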

About selecting your chip:

  • You can create two chip select classes: one that selects a slave and one that deselects it. Use a chip select of the first kind in your first transfer and one of the second kind in your second transfer. There's one problem even with this split chip select idea, though:
  • There's no transferContinued method because I wanted to make sure that even ISRs can add transfers to DMASPI. So two transfers that are added consecutively might be interrupted and split apart in such a situation, which would be bad. This would probably need some more experimentation, though, because I've never actually done this.

So if you can make sure that no ISR can add a transfer to your DMASPI, creating two chip select classes (one to select, one to deselect) might be the cleanest way of getting the desired behaviour.
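
A rough sketch of that idea, assuming the library's chip select base class is named AbstractChipSelect and exposes virtual select()/deselect() hooks (the class and member names here are illustrative, not the exact DmaSpi API):

#include <SPI.h>
#include <DmaSpi.h>

// Asserts CS and applies the non-default SPISettings, but never releases CS,
// so CS stays low between the two queued transfers.
class SelectOnly : public AbstractChipSelect
{
  public:
    SelectOnly(uint8_t pin, const SPISettings& settings) : pin_(pin), settings_(settings)
    {
      pinMode(pin_, OUTPUT);
      digitalWrite(pin_, 1);
    }
    void select() override
    {
      SPI.beginTransaction(settings_);
      digitalWrite(pin_, 0); // assert CS before the first transfer
    }
    void deselect() override { } // leave CS low for the second transfer
  private:
    uint8_t pin_;
    SPISettings settings_;
};

// Never asserts CS (it is already low), only releases it after the second transfer.
class DeselectOnly : public AbstractChipSelect
{
  public:
    DeselectOnly(uint8_t pin) : pin_(pin) { }
    void select() override { }
    void deselect() override
    {
      digitalWrite(pin_, 1); // release CS after the second transfer
      SPI.endTransaction();
    }
  private:
    uint8_t pin_;
};

A SelectOnly instance would then go into trxA and a DeselectOnly instance into trxB (both, as above, with static storage duration).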

@macaba
Author

macaba commented Mar 24, 2016

Thank you for your response; I really managed to break my code very subtly!
I made the change you suggested and all is working.

What you said about the chip select and the drawbacks of my suggested approach makes complete sense. Would it make more sense to have a transferMultiple method that is safe from ISRs? It would somehow take multiple buffer references and sizes and safely add them to the transfer queue.
Better still, allow transfer to take buffers larger than 32767 bytes and chunk them into multiple transfers inside the library in some ISR-safe way?
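
For illustration, a rough hypothetical sketch of that chunking idea, built on the existing Transfer/registerTransfer calls (the helper name and fixed-size chunk array are made up; the array must be idle and must outlive the queued transfers, and interrupts are masked so an ISR cannot interleave its own transfer between the chunks):

static const uint16_t kMaxChunk = 32767;
static DmaSpi::Transfer chunks[8]; // reused frame after frame; must not be busy here

void queueLargeBuffer(const uint8_t* buf, uint32_t len, AbstractChipSelect* cs)
{
  size_t i = 0;
  noInterrupts();
  while (len > 0 && i < 8)
  {
    uint16_t n = (len > kMaxChunk) ? kMaxChunk : (uint16_t)len;
    chunks[i] = DmaSpi::Transfer(buf, n, nullptr, 0, cs);
    DMASPI0.registerTransfer(chunks[i]);
    buf += n;
    len -= n;
    ++i;
  }
  interrupts();
}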

If you think either of these approaches has merit, please let me know and I'll have an attempt at an implementation.

@crteensy
Owner

I could add such a method, but it would take some management code to make sure that no other part of the application can slip in another chain of transfers (basically the same problem as above). What could work without much hassle is a method that takes an already prepared chain of transfers and adds them as a block. The user would have to make sure that the chain is null-terminated, though, and doesn't contain any unwanted loops. The ability to add loops on purpose would even be a feature.
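
Purely as a thought experiment, such a prepared chain could look something like this (the ChainedTransfer type and registerChain function are hypothetical, not part of DmaSpi):

struct ChainedTransfer
{
  DmaSpi::Transfer transfer;
  ChainedTransfer* next; // nullptr terminates the chain
};

// Register a null-terminated chain as one block, with interrupts masked so an
// ISR cannot interleave its own transfer between the links. (Deliberate loops,
// as mentioned above, would need the library itself to follow next as each
// transfer completes rather than a one-shot walk like this.)
void registerChain(ChainedTransfer* head)
{
  noInterrupts();
  for (ChainedTransfer* t = head; t != nullptr; t = t->next)
  {
    DMASPI0.registerTransfer(t->transfer);
  }
  interrupts();
}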

Automatically splitting a transfer into multiple transfers of 32767 bytes or fewer could be useful; I'll consider that.

@crteensy
Owner

Closing this issue for now, because the original problem was solved. Added enhancement issue: #20
