

Memory caching (HAST-249) #28

Open
Piedone opened this issue Mar 2, 2018 · 0 comments

Piedone commented Mar 2, 2018

Improve memory access latency with some form of caching, similar to how L1 and L2 caches in CPUs work.

Some ideas:

  • For memory channels wider than 32b, keep the contents of the DataIn signal. Then, if subsequent reads target cells that fall under the same physical address, no actual read needs to happen; another part of the already-available DataIn content (which is e.g. 512b wide) can be used instead.
  • Speculatively prefetch the next few cells, so if they are indeed used, they'll already be there. What makes this complicated is that while such a prefetch is executing, any manual memory operation needs to wait for it to finish; so prefetching must only happen when it's certain that no other memory operation will occur in the meantime.
  • Have async versions of the SimpleMemory operations. This way e.g. a memory read can be started as soon as possible and awaited only when the result is actually needed. Thus the operation can happen in the background while something else is executing, so no waiting is necessary. However, still only one such operation would be possible at a time.
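The first idea can be illustrated with a minimal software model (a hypothetical sketch, not Hastlayer code: the class name, the 512b line width, and the 32b cell size are assumptions based on the description above). A read fetches a whole wide line once; subsequent reads to cells under the same physical address are served from the kept line without another memory access, and a write invalidates the kept line to stay coherent:

```python
LINE_BYTES = 64  # 512b memory channel width (assumption)
CELL_BYTES = 4   # 32b SimpleMemory cell


class LineCachingMemory:
    """Models reusing the last DataIn line for reads within the same line."""

    def __init__(self, backing: bytearray) -> None:
        self._backing = backing
        self._cached_line_index = None  # Which line DataIn currently holds.
        self._cached_line = b""
        self.physical_reads = 0  # Counts actual memory accesses.

    def read_cell(self, cell_index: int) -> bytes:
        line_index = (cell_index * CELL_BYTES) // LINE_BYTES
        if line_index != self._cached_line_index:
            # Cache miss: perform one wide physical read.
            start = line_index * LINE_BYTES
            self._cached_line = bytes(self._backing[start:start + LINE_BYTES])
            self._cached_line_index = line_index
            self.physical_reads += 1
        # Cache hit: slice the 32b cell out of the already-read line.
        offset = (cell_index * CELL_BYTES) % LINE_BYTES
        return self._cached_line[offset:offset + CELL_BYTES]

    def write_cell(self, cell_index: int, data: bytes) -> None:
        # Invalidate the kept line so later reads see the new data.
        start = cell_index * CELL_BYTES
        self._backing[start:start + CELL_BYTES] = data
        self._cached_line_index = None
```

With this model, 16 consecutive 32b cell reads that fall within one 512b line cost only a single physical read, which is the latency win the idea is after.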

This is already done for Vitis.

Jira issue

@github-actions github-actions bot changed the title Memory caching Memory caching (HAST-249) Sep 18, 2022