From my tests with the DMA, it seems that waiting for a job ID does not equate to waiting for every preceding job ID; instead, wait has to be called for each ID in the sequence. If this is the intended behaviour of the DMA, the assumption made by this line of code in `hero_dma_memcpy_async` is incorrect.

Furthermore, once the DMA job buffer filled up, any further job stalled the core indefinitely. Therefore, if more than 16 pages are transferred, the function stalls in the same way.

Edit: The IDs are not dropped on page boundaries, but when the transfer size is larger than `PULP_DMA_MAX_XFER_SIZE_B`, which causes the transfer to be split into multiple DMA jobs.
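To illustrate the point, here is a minimal sketch (not the actual libhero-target code) of splitting a transfer larger than `PULP_DMA_MAX_XFER_SIZE_B` and waiting on every resulting job ID individually. The `plp_dma_memcpy`/`plp_dma_wait` helpers, the chunk size, and the 16-entry job-buffer depth are assumptions for the example, not the exact runtime API.

```c
#include <stdint.h>

#ifndef PULP_DMA_MAX_XFER_SIZE_B
#define PULP_DMA_MAX_XFER_SIZE_B 32768  /* placeholder value, for illustration only */
#endif
#define DMA_JOB_BUFFER_DEPTH 16         /* job-buffer depth observed in the tests above */

/* Assumed runtime helpers: issue one DMA job (returns its ID) and wait on one ID. */
extern int  plp_dma_memcpy(uint32_t ext, uint32_t loc, uint32_t size, int ext2loc);
extern void plp_dma_wait(int job_id);

/* Blocking copy that splits a large transfer and waits on EVERY job ID.
 * Waiting only on the last ID would leak the preceding ones, which is the
 * behaviour described above. */
void dma_memcpy_blocking(uint32_t ext, uint32_t loc, uint32_t size, int ext2loc)
{
  int ids[DMA_JOB_BUFFER_DEPTH];
  int n = 0;

  while (size > 0) {
    if (n == DMA_JOB_BUFFER_DEPTH) {
      /* Drain before the job buffer can fill up and stall the core. */
      for (int i = 0; i < n; i++)
        plp_dma_wait(ids[i]);
      n = 0;
    }
    uint32_t chunk = (size > PULP_DMA_MAX_XFER_SIZE_B) ? PULP_DMA_MAX_XFER_SIZE_B : size;
    ids[n++] = plp_dma_memcpy(ext, loc, chunk, ext2loc);
    ext  += chunk;
    loc  += chunk;
    size -= chunk;
  }

  /* Wait on every remaining job of this transfer. */
  for (int i = 0; i < n; i++)
    plp_dma_wait(ids[i]);
}
```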
To resolve the issue of the dropped job IDs there are two potential solutions:

- counters can be used to bundle multiple DMA jobs into a single wait operation
- `hero_job_t` could be used as a bit vector, where the i-th bit represents whether the DMA job with ID i is part of this transfer (a sketch of this variant follows below)
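A rough sketch of the bit-vector variant, assuming `hero_job_t` becomes a 16-bit mask (one bit per slot of the 16-entry job buffer, with job IDs in 0–15) and assuming `plp_dma_memcpy`/`plp_dma_wait`-style runtime calls; the real `hero_dma_memcpy_async` signature and runtime API may differ.

```c
#include <stdint.h>

#ifndef PULP_DMA_MAX_XFER_SIZE_B
#define PULP_DMA_MAX_XFER_SIZE_B 32768  /* placeholder value, for illustration only */
#endif

typedef uint16_t hero_job_t;  /* bit i set => DMA job with ID i belongs to this transfer */

extern int  plp_dma_memcpy(uint32_t ext, uint32_t loc, uint32_t size, int ext2loc); /* assumed */
extern void plp_dma_wait(int job_id);                                               /* assumed */

/* Issue all jobs for one (possibly split) transfer and record each ID in the mask. */
hero_job_t hero_dma_memcpy_async_sketch(uint32_t ext, uint32_t loc, uint32_t size, int ext2loc)
{
  hero_job_t jobs = 0;
  while (size > 0) {
    uint32_t chunk = (size > PULP_DMA_MAX_XFER_SIZE_B) ? PULP_DMA_MAX_XFER_SIZE_B : size;
    int id = plp_dma_memcpy(ext, loc, chunk, ext2loc);
    jobs |= (hero_job_t)(1u << id);   /* record every split-off job, not just the last one */
    ext  += chunk;
    loc  += chunk;
    size -= chunk;
  }
  return jobs;
}

/* Wait on every job recorded in the mask. */
void hero_dma_wait_sketch(hero_job_t jobs)
{
  for (int id = 0; id < 16; id++)
    if (jobs & (1u << id))
      plp_dma_wait(id);
}
```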
To avoid exceeding the limit of 16 transfers, I know of no good solution. I could not find any runtime function that reports how many DMA jobs are ongoing, so the only option we currently have is to track the count in a global variable (a rough sketch follows below). This would mean that any DMA job issued outside of libhero-target would not be recorded, and thus we could still exceed the job limit.
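As a minimal illustration of that global-counter workaround (all names here are hypothetical, and only jobs issued through these wrappers are counted, so DMA jobs started outside libhero-target remain invisible, which is exactly the limitation described above):

```c
#include <stdint.h>

#define DMA_JOB_BUFFER_DEPTH 16

extern int  plp_dma_memcpy(uint32_t ext, uint32_t loc, uint32_t size, int ext2loc); /* assumed */
extern void plp_dma_wait(int job_id);                                               /* assumed */

/* Global count of DMA jobs issued through these wrappers but not yet waited on. */
static volatile uint32_t dma_jobs_outstanding = 0;

/* How many job-buffer slots are still free, as far as this counter knows. */
static inline uint32_t dma_slots_free(void)
{
  return DMA_JOB_BUFFER_DEPTH - dma_jobs_outstanding;
}

/* Issue one job; the caller is expected to check dma_slots_free() > 0 first
 * (or wait on some of its own pending jobs) to avoid stalling the core. */
static inline int dma_issue_tracked(uint32_t ext, uint32_t loc, uint32_t size, int ext2loc)
{
  dma_jobs_outstanding++;
  return plp_dma_memcpy(ext, loc, size, ext2loc);
}

/* Wait on one job and release its slot in the counter. */
static inline void dma_wait_tracked(int job_id)
{
  plp_dma_wait(job_id);
  dma_jobs_outstanding--;
}
```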
## Changelog: v1.2.0 - 2019-08-02
### Fixed
- Fix [#8](#8): fixed the `hero_dma_memcpy_async` API. In case of large memory transfers, some DMA jobs were leaked, eventually exhausting the available DMA channels.
### Changed
- Added API to access HW cycle counters.