
Start DX12 backend #80

Merged
merged 13 commits into master from dx12
May 25, 2021
Conversation

raphlinus
Contributor

Very early so far, but uploading the progress I made.

Much of the code is adapted from Brian Merchant's https://github.com/bzm3r/piet-dx12 codebase.

Very early so far, but cool to have a branch for it.
Chipping away at the dx12 backend. This should more or less do the
signalling to the CPU that the command buffer is done (i.e. wire up the
fence). It also creates buffer objects.
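The fence pattern the commit wires up can be sketched abstractly. This is a portable analog, not the actual backend code: the real DX12 path uses `ID3D12Fence` with `Signal` on the queue and `SetEventOnCompletion` plus a Win32 event wait on the CPU side, which are replaced here by a `Mutex`/`Condvar` pair so the sketch is self-contained.

```rust
use std::sync::{Arc, Condvar, Mutex};

/// Portable analog of a monotonically increasing GPU fence value
/// (a sketch; names and structure are illustrative, not piet-gpu's API).
#[derive(Clone)]
struct Fence {
    inner: Arc<(Mutex<u64>, Condvar)>,
}

impl Fence {
    fn new() -> Self {
        Fence { inner: Arc::new((Mutex::new(0), Condvar::new())) }
    }

    /// Queue side: signal that work up to `value` has completed.
    fn signal(&self, value: u64) {
        let (lock, cvar) = &*self.inner;
        let mut v = lock.lock().unwrap();
        *v = (*v).max(value);
        cvar.notify_all();
    }

    /// CPU side: block until the fence has reached `value`.
    fn wait(&self, value: u64) {
        let (lock, cvar) = &*self.inner;
        let mut v = lock.lock().unwrap();
        while *v < value {
            v = cvar.wait(v).unwrap();
        }
    }
}

fn main() {
    let fence = Fence::new();
    let gpu_side = fence.clone();
    // Simulate the queue signalling completion from another thread.
    let t = std::thread::spawn(move || gpu_side.signal(1));
    fence.wait(1); // returns once the "command buffer" is done
    t.join().unwrap();
    println!("fence reached 1");
}
```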
This brings the signature current so it compiles, but the
implementations are just stubs for now.
Create compute pipelines from shader source and descriptor sets. This
gets it to the point where it can run the collatz example.
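The collatz example computes, for each input, the number of Collatz steps needed to reach 1. A CPU-side reference of that kernel logic (a sketch for sanity-checking GPU output, not the actual shader or example code):

```rust
/// Number of Collatz steps for `n` to reach 1.
fn collatz_steps(mut n: u32) -> u32 {
    let mut steps = 0;
    while n != 1 {
        n = if n % 2 == 0 { n / 2 } else { 3 * n + 1 };
        steps += 1;
    }
    steps
}

fn main() {
    // A GPU dispatch would compute this per-element over a buffer;
    // here we just run a few inputs on the CPU for comparison.
    for &n in &[1u32, 2, 6, 7, 27] {
        println!("{} -> {}", n, collatz_steps(n));
    }
}
```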

Still WIP and with rough edges, of course.
@eliasnaur
Collaborator

Is there a reason you're using D3D12 and not D3D11? Since last we spoke, I figured out why the shaders wouldn't compile for the D3D11 cs_5_0 profile (I had introduced non-uniform barriers) and now they compile fine. Note that I haven't actually run the shaders in D3D11 yet.

@raphlinus
Contributor Author

It's a good question, ultimately I want to support both. I go back and forth about whether I want to target "advanced features" including subgroups and descriptor indexing, and those are DX12 only. I think the current plan is to get SM5 working first, then explore those advanced features later (with runtime query).

I had this code lying around for a while, based on Brian's piet-dx12 prototype, so it was handy.

These work, but could use some improvement.

First, the buffer situation is worse than it should be. It should be
possible to create a single readback buffer rather than copy from
gpu-local to host-coherent.

Second, the command buffer `finish_timestamps` call doesn't correlate to
anything in Vulkan, so it needs plumbing up through the hub in one form or
another when that happens. I'm inclined to make it ergonomic by doing a
bit of resource tracking that will trigger the appropriate call (and
subsequent host barrier) in the `finish` method on the command buffer.
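The resource-tracking idea described above can be sketched as follows. All names here are hypothetical, not piet-gpu's actual API: the point is only that if any timestamp query was recorded, `finish` triggers the backend-specific `finish_timestamps` step automatically, so callers never need to know it exists.

```rust
/// Sketch of a command buffer that tracks whether timestamps were
/// recorded (hypothetical types; not the real piet-gpu CmdBuf).
struct CmdBuf {
    timestamps_recorded: bool,
    log: Vec<&'static str>, // stand-in for actually recorded GPU commands
}

impl CmdBuf {
    fn new() -> Self {
        CmdBuf { timestamps_recorded: false, log: Vec::new() }
    }

    fn write_timestamp(&mut self) {
        self.timestamps_recorded = true;
        self.log.push("write_timestamp");
    }

    fn finish(&mut self) {
        if self.timestamps_recorded {
            // Triggered automatically by the tracking; on backends with
            // no such step this would be a no-op.
            self.log.push("finish_timestamps");
        }
        self.log.push("finish");
    }
}

fn main() {
    let mut cmd = CmdBuf::new();
    cmd.write_timestamp();
    cmd.finish();
    println!("{:?}", cmd.log);
}
```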
Rework the entire mechanism for specifying memory for creating
resources, inferring the correct options from the new usage flags.
Adds image data types and operations. At this point, lightly tested.
@raphlinus raphlinus mentioned this pull request May 24, 2021
@raphlinus raphlinus marked this pull request as ready for review May 25, 2021 22:09
@raphlinus
Contributor Author

Merging this now even though it's in rough state, as I'm working concurrently on a number of subproblems (I've had to change the signatures of a few Device methods, mostly because outside Vulkan it's not valid to assume Fence and Semaphore are Copy). I'll continue to refine this as part of the #95 work.

@raphlinus raphlinus merged commit 125f6f9 into master May 25, 2021
@raphlinus raphlinus deleted the dx12 branch May 25, 2021 22:11