On-device inference: does the code under https://github.com/mindspore-ai/mindspore/tree/master/predict only support inference on device side? #23
Comments
Hi @zjd1988, as you said, the predict module is designed to perform model prediction on mobile devices. If you want to run prediction on GPU, currently you can export the model to ONNX (see the export API usage), then run the ONNX model with GPU runtimes (such as TensorRT).
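As a hedged sketch of the ONNX route described above: the import path and `file_format` string below are assumptions based on the MindSpore 0.x serialization API and may differ across versions, and `net` is a placeholder for your trained cell.

```python
try:
    # Assumed import paths from the MindSpore 0.x era; newer releases
    # expose `mindspore.export` at the top level instead.
    import numpy as np
    from mindspore import Tensor
    from mindspore.train.serialization import export
    HAVE_MINDSPORE = True
except ImportError:
    HAVE_MINDSPORE = False


def export_to_onnx(net, input_shape, file_name="model"):
    """Export a trained MindSpore cell to ONNX so that GPU runtimes
    such as TensorRT can consume the resulting model file."""
    if not HAVE_MINDSPORE:
        raise RuntimeError("MindSpore is not installed in this environment")
    # A dummy input with the expected shape drives the graph tracing.
    dummy = Tensor(np.zeros(input_shape, dtype=np.float32))
    export(net, dummy, file_name=file_name, file_format="ONNX")
    return file_name + ".onnx"
```

The exported `.onnx` file can then be parsed by TensorRT's ONNX parser or loaded with ONNX Runtime using a GPU execution provider.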
@leonwanghui OK, thanks for the reply. I checked the MindSpore source code and found CUDA kernels. Are there any C++ examples of running network inference with these CUDA kernels?
@zjd1988 Sorry for the delay. Currently only training and evaluation are supported with the GPU kernels you mentioned, so converting to ONNX is the suggested path when you want to use TensorRT to run inference on a MindSpore model.
@leonwanghui Appreciate it. Does MindSpore have plans to support prediction on GPU?
@zjd1988 Could you explain a bit which predict runtime you want to use? If you are using TensorRT for model prediction, then it's not supported in the short term; but if you just want to test the predict function on MindSpore, you could call
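For quickly testing prediction from Python on the GPU backend, a minimal sketch might look like the following. The `Model` import path is an assumption from the MindSpore training API of that era, and `net` / `input_data` are placeholders.

```python
try:
    # Assumed import paths; `Model` is also re-exported as
    # `mindspore.Model` in later releases.
    import numpy as np
    from mindspore import Tensor
    from mindspore.train.model import Model
    HAVE_MINDSPORE = True
except ImportError:
    HAVE_MINDSPORE = False


def predict_with_python_api(net, input_data):
    """Run a single forward pass through the Python API.

    The GPU kernels are used when the context is configured with
    device_target="GPU" before building the network.
    """
    if not HAVE_MINDSPORE:
        raise RuntimeError("MindSpore is not installed in this environment")
    model = Model(net)
    return model.predict(Tensor(np.asarray(input_data, dtype=np.float32)))
```

This exercises the same GPU kernels used for training and evaluation, without needing a separate C++ inference runtime.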
Hi @leonwanghui, thanks for your reply. I just want to run prediction with the MindSpore C++ API (using the GPU kernels), not with the Python API. Does MindSpore plan to support GPU inference through its own C++ API, without relying on TensorRT?
@zjd1988 I see. If you are referring to C++ API support in MindSpore, then we are already planning to work on it; we will keep you informed when the design plan is published.
@leonwanghui Looking forward to it!
Looking forward to the inference engine too! GPU & CPU, just like OpenVINO. For reasons you know, I just want to deploy MindSpore on my production devices.
If I want to look at GPU inference, which part of the code should I read? I haven't yet found documentation on how to add a custom op.