
API features roadmap #35

Open
26 of 48 tasks
Kangz opened this issue Jun 7, 2017 · 8 comments

Comments

@Kangz
Contributor

Kangz commented Jun 7, 2017

In bold are the most important features for writing cool demos. The shading language features are explicitly left out. A feature is checked once investigation has been done and it is implemented on at least one backend.

  • Command buffer operations
  • Resource bindings
    • Binding Model
    • UBO
    • SSBO
    • Samplers
    • Textures
    • Push constants
  • Fixed function state (see the Vulkan sketch after this list)
    • Vertex input
    • Depth stencil state (Depth Stencil State roadmap #29)
    • Blend state
    • Primitive topology
    • Cull mode, front face
    • Scissor, viewport
    • Other rasterization state (line width, depth bias, depth clamp)
    • Multisample state
  • Renderpass features (Introduce render passes #7)
    • Color attachments
    • Clears
    • Depth-stencil attachments
    • Barriers
    • Input attachments
    • Resolves
  • Buffer mapping
  • Textures
    • Formats
      • More formats, including float ones
      • Depth-stencil formats
    • Dimensions
      • Cube / Arrays
      • 1D
      • 3D
    • Sampler state
      • Filtering modes
      • Clamp mode
      • Anisotropic filtering
    • Multisample textures
  • Barriers
    • Resource usage transitions
    • Other barriers (for example between dispatches, when usage stays the same)
    • GPU->CPU sync
  • Extras
    • Queries
    • Other shader stages?
  • WSI
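
To make the fixed-function items above concrete, here is a minimal sketch of the state they correspond to in Vulkan, one of NXT's backends. The values are illustrative only; this is not NXT's own API.

```cpp
#include <vulkan/vulkan.h>

// Sketch: Vulkan equivalents of some "fixed function state" items above.
VkPipelineInputAssemblyStateCreateInfo inputAssembly = {};
inputAssembly.sType = VK_STRUCTURE_TYPE_PIPELINE_INPUT_ASSEMBLY_STATE_CREATE_INFO;
inputAssembly.topology = VK_PRIMITIVE_TOPOLOGY_TRIANGLE_LIST;  // "Primitive topology"

VkPipelineRasterizationStateCreateInfo raster = {};
raster.sType = VK_STRUCTURE_TYPE_PIPELINE_RASTERIZATION_STATE_CREATE_INFO;
raster.cullMode = VK_CULL_MODE_BACK_BIT;                       // "Cull mode, front face"
raster.frontFace = VK_FRONT_FACE_COUNTER_CLOCKWISE;
raster.lineWidth = 1.0f;                                       // "Other rasterization state"

VkPipelineDepthStencilStateCreateInfo depthStencil = {};
depthStencil.sType = VK_STRUCTURE_TYPE_PIPELINE_DEPTH_STENCIL_STATE_CREATE_INFO;
depthStencil.depthTestEnable = VK_TRUE;                        // "Depth stencil state" (#29)
depthStencil.depthWriteEnable = VK_TRUE;
depthStencil.depthCompareOp = VK_COMPARE_OP_LESS;
```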
@grovesNL

grovesNL commented Jun 12, 2017

@Kangz Where does interaction with multiple physical devices fit into this feature list? For example, the Obsidian proposal queries getPhysicalDevices and getQueueFamilies and uses the results to create a logical device.

I noticed in the design document that canvas/WSI hasn't been considered in detail yet. I see that NXT has both a rendering context (from getContext) and a device (from context.getDevice), so I assume something is planned in this area. Will multiple-device interaction be considered as part of WSI, or is it planned that this will be implicit somehow?

I know it's all very early - just trying to understand the differences between the three proposals as they exist currently.

@Kangz
Contributor Author

Kangz commented Jun 12, 2017

Right now we are focusing on things that happen after the equivalent of vkCreateDevice, which is why devices are created out of thin air. We haven't looked at what happens before that at all, but our general philosophy, unlike Vulkan's, is that we are not going to expose the whole GPU geometry to the application.
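
For context, here is a rough sketch of the Vulkan flow "before vkCreateDevice" that this refers to, i.e. the GPU-geometry enumeration NXT would not expose. It assumes an existing VkInstance named instance; error handling is omitted.

```cpp
#include <vector>
#include <vulkan/vulkan.h>

// Enumerate the "GPU geometry": physical devices and their queue families.
uint32_t gpuCount = 0;
vkEnumeratePhysicalDevices(instance, &gpuCount, nullptr);
std::vector<VkPhysicalDevice> gpus(gpuCount);
vkEnumeratePhysicalDevices(instance, &gpuCount, gpus.data());

uint32_t familyCount = 0;
vkGetPhysicalDeviceQueueFamilyProperties(gpus[0], &familyCount, nullptr);
std::vector<VkQueueFamilyProperties> families(familyCount);
vkGetPhysicalDeviceQueueFamilyProperties(gpus[0], &familyCount, families.data());

// Pick a queue family, then create the logical device.
float priority = 1.0f;
VkDeviceQueueCreateInfo queueInfo = {};
queueInfo.sType = VK_STRUCTURE_TYPE_DEVICE_QUEUE_CREATE_INFO;
queueInfo.queueFamilyIndex = 0;  // assume family 0 supports graphics
queueInfo.queueCount = 1;
queueInfo.pQueuePriorities = &priority;

VkDeviceCreateInfo deviceInfo = {};
deviceInfo.sType = VK_STRUCTURE_TYPE_DEVICE_CREATE_INFO;
deviceInfo.queueCreateInfoCount = 1;
deviceInfo.pQueueCreateInfos = &queueInfo;

VkDevice device;
vkCreateDevice(gpus[0], &deviceInfo, nullptr, &device);
```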

We have some ideas on how to do canvas/WSI in the doc, but we haven't done enough investigation to choose a direction yet. Also, this part of the API is somewhat orthogonal to the rest, so it can be designed later. One thing we want for sure is for a single device to be able to render to multiple canvases.

As for multiple devices interacting with each other, IMHO it is a niche and very complex feature. I don't think it will make it into any Web-facing API. However, the structure of the API should make it easy for multiple "modules" to share one device.

@grovesNL

Thanks for the clarification. I agree that canvas/WSI is mostly orthogonal to the rest of the API and can be designed later.

Is it true that expected clients (i.e. game engines) do not plan to use multiple devices at once with Vulkan 1.1 or D3D12? This is anecdotal of course, but it seemed as though there was a lot of interest in multi-GPU resource sharing. If there is a possibility that multiple physical device support could be added beyond the MVP, it would be good to keep it in mind for logical device creation.

@Kangz
Contributor Author

Kangz commented Jun 12, 2017

Vulkan has extensions like VK_KHX_device_group for device sharing, and D3D12 has the concept of nodes. I believe high-end game engines are looking at using multi-GPU functionality. However, in my opinion this feature is too fresh, and we should wait for the APIs to settle and for use cases to emerge before we start looking at multi-GPU. We can mention it in the CG, but I believe most people will agree it is a post-1.0 feature. If you want to discuss this, you can create a bug on the gpuweb issue tracker.
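
For reference, a minimal sketch of the D3D12 node concept mentioned above: on a linked-adapter system, each queue (and resource) is addressed through a node mask. It assumes an existing ID3D12Device* named device; error handling is omitted.

```cpp
#include <d3d12.h>

// On a multi-node (linked adapter) device, GetNodeCount() > 1.
UINT nodeCount = device->GetNodeCount();  // 1 on a single-GPU system

D3D12_COMMAND_QUEUE_DESC queueDesc = {};
queueDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
queueDesc.NodeMask = 1 << 0;              // this queue lives on node 0

ID3D12CommandQueue* queue = nullptr;
device->CreateCommandQueue(&queueDesc, IID_PPV_ARGS(&queue));
```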

@grovesNL

Sounds good. I agree about the freshness concern and that it would likely be post-MVP/1.0 if/when it's added. My thought was that it may still have implications for the design of the device API in the prototypes.

It's not as important as the rest of the API details, though, so the discussion can probably wait until the prototypes are a bit further along.

@amerkoleci

What about the swap chain? At the moment it seems there is a single swap chain and an automatic Present call.

@Kangz
Contributor Author

Kangz commented Jul 18, 2017

The "WSI" item stands for Window System Integration and includes the swapchain objects. We are currently looking at implementing some sort of swapchain but it will mostly be an empty shell for the application to fill with internal NXT structures.

@amerkoleci

I see.
I was asking because I'm working on a next-gen engine, and while building a multi-backend engine on DirectX 12, Vulkan, and OpenGL, I'm having trouble creating the common logic: Vulkan uses semaphores with wait stage flags plus signal semaphores, while DirectX 12 uses only fences, and for OpenGL I currently have no idea :)
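
To make the mismatch concrete, here is a rough sketch of the two synchronization models being compared, assuming already-created handles and omitting error handling; it is illustrative, not engine code.

```cpp
#include <windows.h>
#include <d3d12.h>
#include <vulkan/vulkan.h>

// Vulkan: a submission waits on and signals semaphores, with wait stage flags.
VkPipelineStageFlags waitStage = VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT;
VkSubmitInfo submit = {};
submit.sType = VK_STRUCTURE_TYPE_SUBMIT_INFO;
submit.waitSemaphoreCount = 1;
submit.pWaitSemaphores = &imageAvailableSemaphore;
submit.pWaitDstStageMask = &waitStage;
submit.commandBufferCount = 1;
submit.pCommandBuffers = &commandBuffer;
submit.signalSemaphoreCount = 1;
submit.pSignalSemaphores = &renderFinishedSemaphore;
vkQueueSubmit(vkQueue, 1, &submit, VK_NULL_HANDLE);

// D3D12: the queue signals a monotonically increasing fence value instead.
d3dQueue->ExecuteCommandLists(1, commandLists);
d3dQueue->Signal(fence, ++fenceValue);              // GPU-side signal
if (fence->GetCompletedValue() < fenceValue) {      // CPU-side wait
    fence->SetEventOnCompletion(fenceValue, fenceEvent);
    WaitForSingleObject(fenceEvent, INFINITE);
}
```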
