Add device mode for Predictor interface? #724

Open
ashwin opened this Issue Jun 2, 2017 · 2 comments

ashwin commented Jun 2, 2017

One of the best things about Caffe was that it was extremely easy to load a pre-trained model in Python, set CPU/GPU and get output for an image.

It seems that the Predictor interface, which is meant for exactly this kind of ease of use, has no way to select a GPU; it always runs the network in CPU mode. This is a big pain point, as seen in #323 and #503.

Could a simple device mode be added for Predictor for folks who just want to load a model on a device and run?
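For reference, the workaround people end up writing by hand looks roughly like the sketch below: stamp a CUDA device option onto every op in the loaded NetDefs before constructing the Predictor. The file names and the exact DeviceOption field names here are assumptions, not an official API; the Predictor itself still exposes no device argument.

```python
from caffe2.proto import caffe2_pb2
from caffe2.python import workspace

def load_net(path):
    # Read a serialized NetDef protobuf from disk.
    net = caffe2_pb2.NetDef()
    with open(path, 'rb') as f:
        net.ParseFromString(f.read())
    return net

def force_device(net, device_type, gpu_id=0):
    # Overwrite the device option on every operator in the net.
    # Field names assume the DeviceOption proto as of 2017.
    for op in net.op:
        op.device_option.device_type = device_type
        op.device_option.cuda_gpu_id = gpu_id
    return net

# 'init_net.pb' / 'predict_net.pb' are placeholder paths.
init_net = force_device(load_net('init_net.pb'), caffe2_pb2.CUDA)
predict_net = force_device(load_net('predict_net.pb'), caffe2_pb2.CUDA)

p = workspace.Predictor(init_net.SerializeToString(),
                        predict_net.SerializeToString())
outputs = p.run([img])  # img: preprocessed NCHW float32 numpy array
```

A device argument (or mode flag) on the Predictor constructor would make all of this boilerplate unnecessary, which is what this issue is asking for.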


jnulzl commented Jun 25, 2017

@ashwin Hello, I have the same problem. I want to load a pretrained model in GPU mode and then run prediction on an image through the Predictor interface. Were you able to do this? Could you give me a simple example? Thank you very much!
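For context, the plain CPU-mode flow that the Predictor does support today looks roughly like this; the file names, input shape, and preprocessing below are placeholders and depend on the model.

```python
import numpy as np
from caffe2.python import workspace

# Load the serialized init and predict NetDefs (placeholder paths).
with open('init_net.pb', 'rb') as f:
    init_net = f.read()
with open('predict_net.pb', 'rb') as f:
    predict_net = f.read()

p = workspace.Predictor(init_net, predict_net)

# A single RGB image in NCHW layout, already resized and mean-subtracted.
img = np.random.rand(1, 3, 227, 227).astype(np.float32)
outputs = p.run([img])
print(outputs[0].shape)  # e.g. (1, 1000) class scores for an ImageNet model
```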


PauliusPoc commented May 29, 2018

I know this is old, but did you figure it out?
