
Real-time inference in the browser without GPU? #71

Closed · dpalbrecht opened this issue Feb 23, 2021 · 9 comments

Comments

@dpalbrecht

Hi, thanks for sharing your work! This is really great. I want to use this for real-time inference in a browser without a GPU. Is that feasible with this model?

@ZHKKKe (Owner) commented Feb 23, 2021

Hi, thanks for your attention.
I think it depends on your CPU. In my case, I ran the model with input frames of size 640*480 on an i7-4970MQ CPU and got more than 20 FPS. However, I have not tried integrating the model with a browser, so I cannot comment on that.
BTW, I have also received occasional reports (#47) of very low FPS on CPU, but I cannot reproduce the problem.

@dpalbrecht (Author)
I saw that comment and it made me wonder whether I should even try it on my device, but I'm glad to hear you got it working yourself. 20 FPS might be enough. I tried to run the sample in Colab without a GPU, but I get an error that there's no CUDA support. Can you explain what I need to change so that it runs?

@ZHKKKe (Owner) commented Feb 24, 2021

If you want to run our matting Colab demo on Google's CPU, please delete all `.cuda()` calls in the Colab demo. BTW, the FPS of the online video matting Colab demo is very low because every frame has to be sent to Google's server and processed remotely.

Please try the offline demo to get a higher FPS. I look forward to your feedback on the CPU FPS.
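
Roughly, the change amounts to loading the model on whichever device is available instead of calling `.cuda()` unconditionally. A minimal sketch, assuming this repo's `MODNet` class and an illustrative checkpoint path:

```python
import torch
from src.models.modnet import MODNet  # class from this repo; adjust the import path if needed

# Pick CUDA when available, otherwise fall back to CPU.
# This replaces the hard-coded .cuda() calls in the demo.
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

modnet = MODNet(backbone_pretrained=False)
state = torch.load('modnet.ckpt', map_location=device)  # illustrative checkpoint path
# Note: if the checkpoint was saved from an nn.DataParallel wrapper,
# its keys may carry a 'module.' prefix that needs stripping first.
modnet.load_state_dict(state)
modnet = modnet.to(device).eval()

with torch.no_grad():
    frame = torch.rand(1, 3, 480, 640, device=device)  # dummy 640*480 frame
    _, _, matte = modnet(frame, True)  # forward signature in this repo: (img, inference)
```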

@LebronJames0423

> I saw that comment and it made me wonder whether I should even try it on my device, but I'm glad to hear you got it working yourself. 20 FPS might be enough. I tried to run the sample in Colab without a GPU, but I get an error that there's no CUDA support. Can you explain what I need to change so that it runs?

Hello, could you tell me the FPS of this model on the CPU? Thank you.

@ZHKKKe (Owner) commented Apr 14, 2021

@LebronJames0423
We export the model to the ONNX format and call it from C++ with low-resolution inputs.
In this case, we got 15-20 FPS.
However, if you call the model through PyTorch, the FPS will be lower.
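
For reference, a minimal export sketch, assuming the model was loaded as in the earlier snippet; MODNet's `forward` takes an extra `inference` flag, so a thin wrapper keeps the ONNX graph single-input (the input size, opset, and file name are illustrative):

```python
import torch

class MattingWrapper(torch.nn.Module):
    """Expose a single-input forward for ONNX export
    (this repo's MODNet.forward also takes an `inference` flag)."""
    def __init__(self, model):
        super().__init__()
        self.model = model

    def forward(self, img):
        _, _, matte = self.model(img, True)
        return matte

dummy = torch.rand(1, 3, 512, 512)  # illustrative low-resolution input
torch.onnx.export(
    MattingWrapper(modnet.cpu().eval()),  # modnet loaded as in the snippet above
    dummy,
    'modnet.onnx',                        # illustrative output path
    input_names=['input'],
    output_names=['matte'],
    opset_version=11,
)
```

The exported `modnet.onnx` can then be loaded from C++ with a runtime such as ONNX Runtime.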

@LebronJames0423 commented Apr 14, 2021 via email

@StarkerRegen

> @LebronJames0423
> We export the model to the ONNX format and call it from C++ with low-resolution inputs.
> In this case, we got 15-20 FPS.
> However, if you call the model through PyTorch, the FPS will be lower.

Hello, can you provide the C++ code for inference? Thank you. @ZHKKKe

@ZHKKKe (Owner) commented Apr 29, 2021

@StarkerRegen
You can refer to this issue: #101

@StarkerRegen

> @StarkerRegen
> You can refer to this issue: #101

OK, thank you very much.

ZHKKKe closed this as completed Jun 21, 2021