
WinML as OpenVX Extension #150

Closed
kiritigowda opened this issue Feb 5, 2019 · 5 comments

@kiritigowda

I have created a WinML extension for OpenVX to use WinML functionality from within an OpenVX graph - https://github.com/GPUOpen-ProfessionalCompute-Libraries/MIVisionX/tree/master/amd_openvx_extensions/amd_winml#amd-winml-extension

I am able to use WinML as a node and process my neural net model, but I am not sure I am doing it in the most efficient way possible. I also ran into a few problems creating a TensorFloat object for the model input binding. With these in mind, I have a few questions, and I was hoping somebody on your team could point me in the right direction.

  • Does binding.Bind( Model Input Tensor Name, input tensor) need to be called only once, or can I change it from frame to frame, since I am creating a new input TensorFloat element from CreateFromIterable?
  • Is there a way I can pass GPU memory (an OpenCL mem object) into the TensorFloat object if I am using the DirectXHighPerformance mode?

Thanks for your help.

@kiritigowda
Author

Any updates on this issue?

@ryanlai2
Contributor

ryanlai2 commented May 7, 2019

Hi @kiritigowda , deep apologies for the late response. It is amazing that WinML is becoming an OpenVX extension!

To answer your questions:

Does binding.Bind( Model Input Tensor Name, input tensor) need to be called only once, or can I change it from frame to frame, since I am creating a new input TensorFloat element from CreateFromIterable?

The same LearningModelBinding object and the same memory backing the tensor can be reused from frame to frame. However, the Bind method must still be called every frame.
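A minimal C++/WinRT sketch of the pattern described above - one binding object reused across frames, with Bind called each frame. The function and variable names here are illustrative, not taken from the extension's code, and this assumes a session and input name obtained when the model was loaded:

```cpp
// Sketch (C++/WinRT, Windows-only): reuse one LearningModelBinding across
// frames, but call Bind() on every frame as the answer above describes.
#include <vector>
#include <winrt/Windows.AI.MachineLearning.h>
using namespace winrt::Windows::AI::MachineLearning;

void RunFrames(LearningModelSession const& session,
               winrt::hstring const& inputName,
               std::vector<std::vector<float>> const& frames,
               std::vector<int64_t> const& shape)
{
    // Create the binding once; it is reused for every frame.
    LearningModelBinding binding{ session };
    for (auto const& frame : frames)
    {
        // CreateFromArray copies from a contiguous buffer and is typically
        // cheaper than CreateFromIterable's per-element iteration.
        auto input = TensorFloat::CreateFromArray(shape, frame);
        // Bind must be called each frame, even with a reused binding object.
        binding.Bind(inputName, input);
        auto results = session.Evaluate(binding, L"frame");
    }
}
```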

Is there a way I can pass GPU memory (an OpenCL mem object) into the TensorFloat object if I am using the DirectXHighPerformance mode?

To create a TensorFloat backed by GPU memory, the currency WinML accepts is a D3D12 resource. See WinMLRunner's example here, and the ITensorStaticsNative.CreateFromD3D12Resource method documentation. If you are able to convert your OpenCL mem object into a D3D12 resource, then it should work :)
DirectXHighPerformance is simply one of the LearningModelDeviceKind enumeration values that can be chosen when creating a LearningModelSession. Please refer to the documentation here.
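A hedged sketch of the CreateFromD3D12Resource route mentioned above, assuming you already hold an ID3D12Resource (for example, one produced by OpenCL/D3D12 interop). The helper name is illustrative:

```cpp
// Sketch (Windows-only): wrap an existing D3D12 buffer as a TensorFloat
// via ITensorStaticsNative, avoiding a round trip through CPU memory.
#include <winrt/Windows.AI.MachineLearning.h>
#include <windows.ai.machinelearning.native.h> // ITensorStaticsNative
using namespace winrt::Windows::AI::MachineLearning;

TensorFloat TensorFromD3D12(ID3D12Resource* resource,
                            int64_t* shape, UINT shapeCount)
{
    // Get the native activation factory for TensorFloat.
    auto factory =
        winrt::get_activation_factory<TensorFloat, ITensorStaticsNative>();
    winrt::com_ptr<::IUnknown> tensor;
    // Create a TensorFloat backed directly by the D3D12 resource.
    winrt::check_hresult(factory->CreateFromD3D12Resource(
        resource, shape, shapeCount, tensor.put()));
    return tensor.as<TensorFloat>();
}
```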

| LearningModelDeviceKind | Value | Description |
| --- | --- | --- |
| Cpu | 1 | Use the CPU to evaluate the model. |
| Default | 0 | Let the system decide which device to use. |
| DirectX | 2 | Use a GPU or other DirectX device to evaluate the model. |
| DirectXHighPerformance | 3 | Use the system policy-defined device for high performance. |
| DirectXMinPower | 4 | Use the system policy-defined device for minimum power. |
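For completeness, a short sketch of choosing DirectXHighPerformance when creating the session (the model path is illustrative):

```cpp
// Sketch (C++/WinRT, Windows-only): create a LearningModelSession on the
// system's high-performance DirectX device.
#include <winrt/Windows.AI.MachineLearning.h>
using namespace winrt::Windows::AI::MachineLearning;

LearningModelSession MakeGpuSession(winrt::hstring const& modelPath)
{
    auto model = LearningModel::LoadFromFilePath(modelPath);
    // Any LearningModelDeviceKind from the table above could be used here;
    // DirectXHighPerformance requests the policy-defined high-performance GPU.
    LearningModelDevice device{ LearningModelDeviceKind::DirectXHighPerformance };
    return LearningModelSession{ model, device };
}
```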

Please let me know if you have any more questions!

@kiritigowda
Author

@ryanlai2 Thanks for your reply, I will go over these documents and get back to you if I have any questions. Thanks again!

@ryanlai2
Contributor

I'm closing this issue, but feel free to reopen if more assistance is needed.

@kiritigowda
Author

@ryanlai2 Thanks, I am working on getting the GPU support for OpenVX WinML. I will let you know how it goes.
