
CUDA/GPU support #2171

Open
moinakb001 opened this issue Feb 21, 2020 · 1 comment
moinakb001 commented Feb 21, 2020

Hey - I am potentially interested in getting a CUDA backend running for Verilator. I have some ideas on how to proceed in terms of grouping signals, but I was wondering whether you had already dealt with something like this before, so I could lean on prior art.

moinakb001 added the new label Feb 21, 2020
toddstrader (Member) commented Feb 21, 2020

Do you imagine having your CUDA kernel perform significant amounts of work within each eval() call? I can imagine doing GPU-amenable things in Verilog, but they'd involve a clock and state, which would require numerous eval()s to execute the kernel in the verilated C++.
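For context, here's a minimal sketch of a conventional verilated C++ harness (the top module name `Vtop` and the `clk` signal are placeholder assumptions, not anything from this issue). Each clock edge costs one eval() on the host, so a GPU-amenable computation that spans many cycles in Verilog also spans many eval() calls:

```cpp
#include "Vtop.h"       // verilated model header (hypothetical top module "Vtop")
#include "verilated.h"

int main(int argc, char** argv) {
    Verilated::commandArgs(argc, argv);
    Vtop* top = new Vtop;

    top->clk = 0;
    while (!Verilated::gotFinish()) {
        top->clk = !top->clk;  // toggle the clock
        top->eval();           // one eval() per clock edge
    }

    top->final();
    delete top;
    return 0;
}
```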

If you have a round trip to/from the GPU inside eval(), my guess is that the latency will burn through any performance gains you get. I suspect (please correct me if I'm wrong) you'd want a way to fork off the GPU bit of your verilated design while you continue eval()ing the rest of it, then pick up the results from the GPU at some later point. Verilator does emit multi-threaded eval() code; however, it's still just dividing up the work of the single-threaded eval(). What I'm imagining you'd need here is inter-eval() multi-threading, which does not exist today. Nor (to the best of my knowledge) has there been any other work done on emitting CUDA code.
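To make the fork-and-pick-up-later idea concrete, a rough sketch using a CUDA stream might look like the following. Nothing here is emitted by Verilator today; the kernel, the device buffers, and the helper names are hypothetical placeholders for whatever partition of the design ended up on the GPU.

```cuda
#include <cuda_runtime.h>

// Stand-in for the GPU-mapped slice of the design (hypothetical logic).
__global__ void gpu_partition_kernel(const int* in, int* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = in[i] * 2;
}

// Asynchronous launch on a stream: returns immediately, so the host thread
// can keep calling eval() on the CPU-resident part of the design.
void launch_gpu_partition(cudaStream_t stream, const int* d_in, int* d_out, int n) {
    gpu_partition_kernel<<<(n + 255) / 256, 256, 0, stream>>>(d_in, d_out, n);
}

// Non-blocking poll: a later eval() (or the harness between eval()s) could
// copy results back once this returns true, hiding the GPU round-trip
// latency behind host-side work instead of stalling inside a single eval().
bool gpu_partition_done(cudaStream_t stream) {
    return cudaStreamQuery(stream) == cudaSuccess;
}
```

The hard part is exactly what's noted above: the scheduling and result handshake that straddle eval() boundaries would have to be invented, since the verilated model has no such notion today.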

Perhaps I don't fully understand what you're asking. If you could expand on your thoughts here, that would help the discussion.
