First, take a look at https://developer.microsoft.com/en-us/windows/holographic/rendering_in_directx and the section on processing camera updates with respect to the back buffer. Back buffers can change from frame to frame, so your app needs to validate the back buffer for each camera, and release and recreate resource views and depth buffers as needed.
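To make that validation step concrete, here is a minimal sketch (all names are illustrative, not from any spec) of per-camera resource caching in the spirit of the Windows Holographic guidance: each frame, compare the back buffer you were handed against the one you cached, and only release/recreate the dependent resources (render target views, depth buffers) when it actually changed.

```javascript
// Per-camera cache keyed on back-buffer identity. Resources derived from a
// back buffer (views, depth buffers) are rebuilt only when the buffer changes.
class CameraResourceCache {
  constructor() {
    this.entries = new Map(); // cameraId -> { backBuffer, resources }
  }

  // Returns the resources for this camera, recreating them when the back
  // buffer supplied this frame differs from the cached one.
  getResources(cameraId, backBuffer, createResources) {
    const entry = this.entries.get(cameraId);
    if (entry && entry.backBuffer === backBuffer) {
      return entry.resources; // back buffer unchanged, reuse existing views
    }
    // First frame for this camera, or the back buffer was swapped out:
    // drop the old resources and build fresh ones against the new buffer.
    const resources = createResources(backBuffer);
    this.entries.set(cameraId, { backBuffer, resources });
    return resources;
  }
}
```

The identity comparison is the cheap fast path; the expensive recreate path only runs on the frames where the compositor actually handed back a different buffer.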
I think the current specification indirectly allows rendering to be optimized through to the device, but the intricacies of various devices mean that each will have to jump through a different set of hoops to make it happen in a compatible and interoperable way.
We should discuss mechanisms for allowing devices to create optimized surfaces that don't require intermediate copies, and perhaps further optimizations such as disabling or denying any sort of texture read-back. For this I think we want to continue using the "canvas" as the currency and then allow a developer to get a rendering context back from said canvas.
enum CanvasThreading {
    "default",
    "threaded" // "offscreen" ?
};

partial interface VRDevice {
    // Option #1 - device creation
    // More flexible since it allows binding to go through device-specific paths
    // Enables creation of devices and surfaces optimized for cross-process rendering, etc.
    VRSource? createDeviceLayer(optional CanvasThreading canvasType = "default");
};

// Option #2 - device replacement of back-end resources
// Challenging depending on the current state of the VRSource, which may already be part
// of a normal rendering pipeline.
dictionary VRLayer {
    boolean allowDeviceOptimizations = false;
};
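As a sketch of how Option #1 might look from a page's point of view: `createDeviceLayer` is the proposal under discussion, not a shipping API, so a stubbed device stands in for `VRDevice` here purely to make the flow concrete.

```javascript
// Stub standing in for the proposed VRDevice.createDeviceLayer. A real
// implementation would hand back a device-optimized VRSource (e.g. a canvas
// whose backing surface avoids intermediate copies).
const stubDevice = {
  createDeviceLayer(canvasType = "default") {
    return {
      canvasType,
      // The canvas remains the currency: the developer pulls a rendering
      // context back out of the source the device created.
      getContext: (kind) => ({ kind })
    };
  }
};

const source = stubDevice.createDeviceLayer("threaded");
const gl = source.getContext("webgl");
```

The point of routing creation through the device (rather than retrofitting an existing canvas, as in Option #2) is that the device can pick an optimized allocation path up front, before the surface is ever entangled with a normal rendering pipeline.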