Real-time inference in the browser without GPU? #71
Hi, thanks for your attention.
I saw that comment, and it made me wonder whether I should even try it on my device, but I'm glad to hear you got it to work. 20 FPS might be enough. I tried to run the sample in Colab without a GPU, but I get an error that there's no CUDA support. Can you explain what I need to change so that it runs?
If you want to run our matting Colab demo on a CPU, please try to delete all […]. Please also try the offline demo to get a higher FPS. We are waiting for your feedback on the CPU FPS.
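The usual obstacle when running a CUDA-trained checkpoint on a CPU-only machine is that `torch.load` tries to restore tensors onto a GPU. A minimal sketch of the CPU-only loading step (the function name and checkpoint path are illustrative, not from the MODNet repo):

```python
import torch

def load_for_cpu(checkpoint_path):
    """Load a checkpoint saved on a GPU machine onto a CPU-only machine.

    map_location='cpu' remaps any CUDA tensors stored in the checkpoint
    to CPU memory, avoiding 'no CUDA support' errors on GPU-less hosts.
    """
    state_dict = torch.load(checkpoint_path, map_location="cpu")
    return state_dict
```

In the same spirit, any `.cuda()` or `.to('cuda')` calls in the demo code would need to be removed or replaced with `.to('cpu')`.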
Hello, could you tell me the FPS of this model on the CPU? Thank you.
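For anyone who wants to answer this question on their own hardware, a small stdlib-only timing helper can wrap whatever inference call you have (the `infer` callable here is a placeholder for your model's forward pass):

```python
import time

def measure_fps(infer, frames=50):
    """Call `infer` repeatedly and return the average frames per second.

    `infer` is any zero-argument callable that runs one inference pass;
    a larger `frames` count smooths out per-call timing noise.
    """
    start = time.perf_counter()
    for _ in range(frames):
        infer()
    elapsed = time.perf_counter() - start
    return frames / elapsed
```

Run it once with a warm-up call first if your framework JIT-compiles or caches kernels on the first pass, otherwise the reported FPS will be pessimistic.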
@LebronJames0423
OK, thanks a lot.
@LebronJames0423
We export the model to the ONNX format and call it from C++ with low-resolution inputs.
In this case, we get 15-20 FPS.
However, if you call the model using PyTorch, the FPS will be lower.
Hello, can you provide the C++ code for inference? Thank you. @ZHKKKe
@StarkerRegen
OK, thank you very much.
Hi, thanks for sharing your work! This is really great. I want to be able to use this for real-time inference in a browser without a GPU. Is this at all feasible for this model?